Index optimization output

I have a database running on 11.1.0.7 on a 64-bit Linux machine. I want to know the effect of doing an index optimization through these two PL/SQL blocks.
Will it help improve performance if the fragmentation is more than 20%? Any help would be really appreciated!
begin
  ctx_output.start_log('opt_rebuild_TEXT2');
  ctx_output.add_event(ctx_output.event_opt_print_token);
  ctx_ddl.optimize_index(idx_name => 'TEXT2', optlevel => 'REBUILD');
  ctx_output.end_log;
end;
/
begin
  ctx_output.start_log('opt_rebuild_TEXT1');
  ctx_output.add_event(ctx_output.event_opt_print_token);
  ctx_ddl.optimize_index(idx_name => 'TEXT1', optlevel => 'REBUILD');
  ctx_output.end_log;
end;
/
Thanks in advance.
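Side note: the start_log output above is written to the Oracle Text log directory. A minimal sketch for checking it and, with CTXSYS privileges, changing it ('/tmp' below is only an example path):
select par_value from ctxsys.ctx_parameters where par_name = 'LOG_DIRECTORY';
begin
  ctx_adm.set_parameter('log_directory', '/tmp');  -- example path, adjust as needed
end;
/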

Hi,
As the manual states for this procedure: "Optimizing an index removes old data and minimizes index fragmentation, which can improve query response time". So yes, it can help, but if the fragmentation is on words you never search for, it won't make a difference; if it is on words you query frequently, it certainly will. Optimization compacts the index so the corresponding rows can be found more quickly.
You can always test this yourself, of course. Time a query, wait until fragmentation reaches 20%, time it again, optimize, and time it once more (a sketch for measuring fragmentation follows below). If the last run is faster than the second, you know the optimization helps.
Herald ten Dam
http://htendam.wordpress.com
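A minimal sketch for measuring the fragmentation itself before and after such a test, assuming an index named TEXT1 as in the question; CTX_REPORT.INDEX_STATS includes fragmentation figures in its report:
set serveroutput on
declare
  v_report clob;
begin
  ctx_report.index_stats(index_name => 'TEXT1', report => v_report);
  -- print the first chunk of the report, which includes the fragmentation statistics
  dbms_output.put_line(dbms_lob.substr(v_report, 32000, 1));
end;
/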

Similar Messages

  • Context Index Optimization

    hi ,
    This is with regard to the "maxtime" parameter used in context index optimization.
    According to the documentation, Oracle should optimize the index for at most the number of minutes specified in the "maxtime" parameter.
    For example:
    ctx_ddl.optimize_index('item_ctxdesc', 'FULL', 30);
    Here Oracle should optimize the index for 30 minutes as specified above, but it runs for around 4 hours and sometimes 5 hours.
    Could anyone please clarify or explain this?
    thanks and regards
    Naresh
    Oracle DBA

    This is the certification forum. Post your query under:
    Database - General
    General Database Discussions
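    For reference, a minimal sketch of the same call with named parameters; maxtime is expressed in minutes, and FULL optimization is documented as resumable, so it can be run in time slices:
    begin
      ctx_ddl.optimize_index(idx_name => 'item_ctxdesc',
                             optlevel => ctx_ddl.optlevel_full,  -- same as 'FULL'
                             maxtime  => 30);                    -- minutes
    end;
    /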

  • How to stop Index Optimization in Ultra Search?

    Portal Version: 9.0.2.2.14
    RDBMS Version: 9.0.1.3
    OS/Vers. Where Portal is Installed: SPARC Solaris 8 64bit
    How do I stop the index optimization process in Ultra Search?

    First, back up all data immediately, as your boot drive may be failing.
    If you have more than ten or so files or folders on your Desktop, move them, temporarily at least, somewhere else in your home folder.
    If iCloud is enabled, disable it.
    Disconnect all wired peripherals except keyboard, mouse, and monitor, if applicable. Launch the usual set of applications you use when you notice the problem.
    Step 1
    Launch the Activity Monitor application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ If you’re running Mac OS X 10.7 or later, open LaunchPad. Click Utilities, then Activity Monitor in the page that opens.
    Select the CPU tab.
    Select All Processes from the menu in the toolbar, if not already selected.
    Click the heading of the % CPU column in the process table to sort the entries by CPU usage. You may have to click it twice to get the highest value at the top. What is it, and what is the process? Also post the values for % User, % System, and % Idle at the bottom of the window.
    Select the System Memory tab. What values are shown in the bottom part of the window for Page outs and Swap used?
    Step 2
    You must be logged in as an administrator to carry out this step.
    Launch the Console application in the same way as above. Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left.
    Post the 50 or so most recent messages in the log — the text, please, not a screenshot.
    Important: Some personal information, such as your name, may appear in the log. Edit it out before posting.

  • Index the output of a stacked sequence

    Hi everyone,
    I'm writing some code in which I instantiate many objects when I first start my code and place the objects into an array that is used as a lookup table later on in operation.  This takes up a lot of space on the block diagram and makes it difficult to understand for a user that is not familiar with the code.  Is it possible to create something like a flat sequence that can have a single indexed output tunnel?  I want something like what you would get from a tunnel out of a for loop, I just want each iteration to be unique (unlike the for loop).  I've included an image of what my diagram currently looks like for reference.  You can see that I create all of the objects with unique information (ID, control reference, etc.) and then build an array with them.  I would like to instantiate each of these objects in a frame of a stacked sequence with a single indexed output (if possible).
    Thanks for the advice!
    -Eric
    Solved!
    Go to Solution.

    Option 1: use a subVI
    Option 2: Use a FOR loop as a sequencer.  Instead of the stacked (or flat) sequence structure, put a case structure inside of a FOR loop and wire the i to the case selector.  Then in each case you can initialize a different class.  The output can then be autoindexed.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • Suggestion: Oracle text CONTEXT index on one or more columns ?

    Hi,
    I'm implementing Oracle Text using CONTEXT ..... and would like to ask for a performance suggestion ...
    I have a table of Articles .... with columns .. TITLE, SUBTITLE, BODY ...
    Now, is it better from a performance point of view to move all three columns into one dummy column ... with a name like FULLTEXT ... put an index on this single column,
    and then use CONTAINS(FULLTEXT,'...')>0
    Or is it almost the same for Oracle if I put indexes on all three columns and then call:
    CONTAINS(TITLE,'...')>0 OR CONTAINS(SUBTITLE,'...')>0 OR CONTAINS(BODY,'...')>0
    I actually don't care whether the match is in TITLE, SUBTITLE or BODY ....
    So if I move to a FULLTEXT column, I have duplicate data in an article row ... but if I create indexes for each column, then Oracle has twice as much to index, optimize and search ... am I right?
    Table has 1.8mil records ...
    Thank you.
    Kris

    mackrispi wrote:
    Now is it better from performance point of view to move all three columns into one dummy column ... with name like FULLTEXT ... and put index on this single column,
    and then use CONTAINS(FULLTEXT,'...')>0
    What version of Oracle are you on? If 11 then you could use a virtual column to do this, otherwise you'd have to write code to maintain the column, which can get messy.
    mackrispi wrote:
    Or is it almost the same for oracle if i put indexes on all three columns and then call:
    CONTAINS(TITLE,'...')>0 OR CONTAINS(SUBTITLE,'...')>0 OR CONTAINS(BODY,'...')>0
    Benchmark it and find out :)
    Another option would be something like this.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9455353124561
    Were I you, I would try out those three approaches, see which meets your performance requirements, and weigh that against the ease of implementation and administration.
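    One more option worth sketching (a minimal sketch, not from this thread; the preference and index names below are made up, while the table and column names come from the question): a single CONTEXT index covering all three columns through a MULTI_COLUMN_DATASTORE, so one CONTAINS() searches TITLE, SUBTITLE and BODY without maintaining a physical FULLTEXT column.
    begin
      ctx_ddl.create_preference('articles_mcds', 'MULTI_COLUMN_DATASTORE');
      ctx_ddl.set_attribute('articles_mcds', 'COLUMNS', 'TITLE, SUBTITLE, BODY');
    end;
    /
    create index articles_txt_idx on articles (body)
      indextype is ctxsys.context
      parameters ('datastore articles_mcds');
    -- any of the three columns can then be matched with a single predicate:
    select id from articles where contains(body, 'oracle') > 0;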

  • Creating the Equivalent of a Single Axis Stepper Motor Indexer in a cRIO

    I am looking for some FPGA code that implements a complete (or near complete) Single Axis Stepper Motor Indexer function on a cRIO using an NI-9474 DO module. 
    For those that aren't familiar with the term "indexer", an indexer is a pulse-generation subsystem that provides pulses to a stepper motor driver, which in turn drives the windings on a stepper motor. Some vendors provide combination indexer/driver units that are issued motion commands via a number of communications standards.
    The FPGA indexer should:
    Output pulses in the 1Hz-800KHz range.
    Run the stepper at constant velocity continuously or for a fixed number of steps.
    Implement an acceleration/deceleration ramp when changing velocity.
    Implement clockwise and counter-clockwise limit switches.
    I have some demo code from NI that proves to me that an FPGA indexer is feasible; I am looking for examples of a more complete indexer.

    I am not aware of any examples of this being implemented.  You mentioned that you have some demo code.  Where did you get this, and if it wasn't from the website, could you post it?  It is very possible that you can build off of this example to get the behavior you'd like.
    Regards,
    Burt S

  • Tuning a query that uses an interMedia Text index

    Product: ORACLE SERVER
    Date written: 2002-04-12
    Tuning a query that uses an interMedia Text index
    ===============================================
    Purpose
    This note describes ways to improve the speed of queries that use interMedia Text.
    Explanation
    1. Analyze all the tables in the query
    Analyze every table in a query that uses the text index.
    For example, you can use one of the following commands:
    ANALYZE TABLE <table_name> COMPUTE STATISTICS;
    or
    ANALYZE TABLE <table_name> ESTIMATE STATISTICS 1000 ROWS;
    or
    ANALYZE TABLE <table_name> ESTIMATE STATISTICS 50 PERCENT;
    2. Use the FIRST_ROWS hint
    For better response time, try the first_rows hint.
    The default optimizer mode of the database is choose mode. Because it follows a
    plan aimed at the best overall throughput (all_rows mode), the user may perceive
    it as slower than first_rows.
    Add the hint to the query as shown below and check the performance:
    select /*+ FIRST_ROWS */ pk, col from ctx_tab
    where contains(txt_col, 'test', 1) > 0;
    Note that with the first_rows hint the results are not automatically ordered by
    score, because each matching document is returned as soon as it is found.
    3. Make sure the text index is not fragmented
    For tables with many inserts and deletes, index fragmentation should be removed.
    You can check index fragmentation as follows:
    select count(*) from dr$<indexname>$i; -> A
    select count(*) from (select distinct(token_text) from dr$<indexname>$i); -> B
    If the ratio A/B of the results above is greater than 3:1, it is advisable to run
    optimize_index. You can optimize the index with the following command:
    alter index <index_name> rebuild online
    parameters('optimize full unlimited');
    With the online option the index remains usable during the rebuild, but if
    possible it is best to rebuild when no users are on the system.
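    A minimal sketch (not part of the original note) that combines the two counts above into a single ratio; replace <indexname> as in the note, and treat the 3:1 figure as the rule of thumb quoted above:
    select count(*) / count(distinct token_text) as rows_per_token
      from dr$<indexname>$i;
    -- a value above 3 suggests running optimize_index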
    4. Check the execution plan and SQL trace
    If none of the basic steps above improves the speed much, take a SQL trace
    yourself and check the execution plan.
    For example, take a SQL trace in SQL*Plus as follows:
    alter session set timed_statistics=true;
    alter session set sql_trace=true;
    select ..... -> execute the query
    After running the query,
    exit
    When the trace file appears in the directory specified by user_dump_dest, run
    tkprof on it with the following command and check the contents:
    $ tkprof <tracefilename> <outputfilename> explain=username/password
    Reference Documents
    Bulletin#10134 : How to use SQL trace and tkprof

  • Using hint when creating index

    Hi,
    Is it possible to use a hint when creating an index?
    For example, when creating an index the optimizer normally does a full table scan, but I want it to use a different index to walk through the table.
    any ideas?
    Or is it technically impossible?

    If the new index is a subset of an existing index, Oracle should be able to read the existing index to create the new index.
    However, if the index consists of such a combination of columns that one or more is not in the subset then it has to read all the rows of the table.
    Hemant K Chitale
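    As an illustration of that point only (the table and index names below are made up): if an index already exists on (last_name, date_of_birth), a new index on the leading subset (last_name) can be built by reading the existing index rather than scanning the whole table, per the reasoning above.
    create index emp_name_dob_ix on emp (last_name, date_of_birth);
    -- the new index uses a subset of the columns above, so Oracle can read
    -- emp_name_dob_ix instead of performing a full scan of EMP
    create index emp_name_ix on emp (last_name);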

  • RoboHelp Output

    I'm not sure if my posts are getting to the right people, but I'll try again.
    Since .chm files cannot be accessed over a network, it's a major problem for us: we cannot "install" these help file(s) on our clients' machines.  The workarounds we have found on the forums and from Microsoft have unfortunately been unsuccessful.
    So, we attempted to use WebHelp Pro, but we discovered that the .htm files do not work with Delphi, which is what our desktop applications are written in.
    Soooo, does anyone have any suggestions as to what to use as the primary layout?  Are there any possible "hidden" ways to make the .chm file work on a network?  What about RoboHelp Server?

    Hi Rick,
    Well, I've been told that we should stick to WebHelp, so we're not using RoboHelp server.  Even so, we can't seem to get our desktop application (Delphi) to start up the index.htm output file.  When we press F1 or click our Help button, nothing happens.
    My developer then tried something, and now the files open through the PC's Internet Explorer, which is NOT what we want either.
    Any other suggestions?
    - Karen

  • Using hint index

    Hi,
    When the /*+ INDEX */ hint is used, the optimizer seems to use the cost-based approach for the whole session and not just for the given query.
    Is it possible to avoid this?
    Thanks.

    Bumping up to see if anyone has a definitive answer on this topic.
    The sample create statement is as follows:
    CREATE OR REPLACE VIEW VW_BOOLA
    AS
    SELECT /*+ INDEX(BOOLA, INDXBOOLA_CTX_ID) */
    ID, DESCRIPTION
    FROM BOOLA;
    If I create a view using my CTX index as a HINT, will it limit the records retrieved to only those that have been properly indexed by INDXBOOLA_CTX_ID which runs every hour?
    Thanks again.

  • String array indexing

    Hello there,
    Here's something i need help with.
    I have a 2D string array from which I extract one column as a 1D array, and I match it against a regular expression. For every match in this 1D array I need the element index, so I can output the corresponding row of the 2D array in which the element is present.
    Here's something about the matching requirement:
    The matching should work as follows:
    If I type in "d", it should output the indexes of all elements starting with the letter "d".
    Thanks
    Shaun

    pk120 wrote:
    I'm having a 2D string out of which I extract one colum as an array(1D) and  i'm matching for a regular expression. For every match in this 1D array I need the element index to output the corresponding row in which the element is present in the 2D array.
    Please clarify your requirements. Do you want the row index only for elements in the column where the match occurred or do you want all 2D indices that match in the entire array?
    It is always best if you could attach a simple VI containing your 2D array e.g. as diagram constant or as default values in a control. Also create an indicator that contains the expected result as default value.
    LabVIEW Champion . Do more with less code and in less time .

  • Errorcode 2030 while creating Index

    Hi,
    I am trying to create an index for the first time and am not able to, because of the following error:
    Index could not be created; creating index failed: general configuration error (Errorcode 2030)
    Can you tell me if there is any place where I can look up error codes and their descriptions?
    Thanks
    Raj Balakrishnan

    Error codes and messages of TREX 6.0
    In OSS Note 649574 you can find an attached file that explains these error messages.
    Patricio.
    List of TREX 6.0 Error Codes and Messages
    (including hints for solving some common problems)
    0000 no error No errors occurred. The process was completed successfully
    0001 multiple operations failed detail provided in a list This problem is mostly caused by HTTP problems when indexing or synchronizing. You must check whether the documents processed are actually available
    0004 Document to delete/update was not found The document that was prepared for deindexing could not be found, or an incorrect document ID or document name was supplied. You must check whether there is a document in the index or on the hard disk (as URL) with the corresponding ID. For non-Portal scenarios this error means: "document can not be found". If you use Content Server, please check this document for consistency.
    0008 index is corrupted. Must be deleted/repaired by hand An error occurred when the system tried to delete the index. The corresponding entry was deleted from bartho.ini, but the index still exists. Solution: 1. Stop the TREXServices. 2. Delete the index directory and the entries in bartho.ini, TREXIndexServer.ini and TREXTcpipClient.ini manually. 3. Start the TREXServices 4. if necessary delete index in R/3 system (for non Portal scenarios) and clean tables. 
    0010 not enough space available for index creation There is no disk space available for creating a new index. You must choose another disk for the index. (The trace files might be occupying too much disk space. If so, they have to be deleted.)
    0011 specified language is unknown or not supported You must check whether the chosen language is supported by TREX and whether the language ID has been entered correctly. You must check also if these languages have been chosen correctly during the TREX installation. 
    0012 directory creation failed An index with the same name has already been created, and an error occurred when the index with the same ID was created again. The old index was not deleted correctly: an empty index directory still exists. Solution: 1. Stop the TREX Services. 2. Delete the index directory manually. 3. Start the TREX Services again.
    0013 directory removal failed The index directory could not be deleted correctly. This may be because there is a file missing from the index directory, or the index might be locked by another process. Solution: 1. Stop the TREX Services. 2. Delete the index directory and the entry in bartho.ini, in TREXIndexServer.ini and in TREXTcpipClient.ini manually. 3. Start the TREX Services 4. if necessary delete index in R/3 system (for non Portal scenarios) and clean tables. 
    0014 unknown or unsupported data type This may be because the document contains a data type that is not supported.
    0016 an operation has failed on a file (create, delete, copy, move...) This error occurs when the system tries to delete an index that is locked by another process. Solution: 1. Stop the TREX Services. 2. Delete the index directory and the entry in bartho.ini,TREXIndexServer.ini and in TREXTcpipClient.ini manually. 3. Start the TREX Services 
    0018 a received argument has an invalid value This problem can occur, for example, if the search request is not correct.
    0020 unspecified win32/unix system error The error occurs when the system tries to index an index that has already been partially deleted. Solution: 1. Stop the TREX Services. 2. Delete the index directory and the entry in bartho.ini,TREXIndexServer.ini and in TREXTcpipClient.ini manually. 3. Start the TREX Services 
    0022 automatic index optimization failed This error can occur when another document is indexed by a URL in an index for an existing document. Afterwards the index is no longer available. Solution: 1. Stop the TREX Services. 2. Delete the index directory and the entry in bartho.ini,TREXIndexServer.ini and in TREXTcpipClient.ini manually. 3. Start the TREX Services 
    2001 Multiple operation failed, detail is provided in a list This problem is mostly caused by HTTP problems when indexing or synchronizing. You must check whether the documents processed are actually available. 
    2003  The error normally occurs when you try to create a new index with the same index ID as that of an existing index. If the index is not available in SAP_RETRIEVAL_PATH \Index, you must check whether there is an entry for a corresponding index in bartho.ini, TREXIndexServer.ini 
    2004 Document to delete/update was not found The document that was prepared for deindexing could not be found, or an incorrect document ID or document name was supplied. You must check whether there is a document in the index or on the hard disk (as URL) with the corresponding ID. For non-Portal scenarios this error means: "document can not be found". If you use Content Server, please check this document for consistency.
    2007 index does not exist This error occurs when you try to delete an index that does not exist (or (de-) index some document in this index). The following points must be checked: Is the name of the index correct? Is there a subdirectory with the same name in SAP_RETRIEVAL_PATH\Index? Is there a matching entry in bartho.ini, TREXIndexserver.ini and in TREXTcpipClient.ini? 
    2011 Document language is unknown or not specified Check whether the chosen language is supported by TREX and whether the language ID has been entered correctly. (You also need to check that the language in question was selected when TREX was installed). 
    2503 Direct load, no queue server existing Check whether TREXIndexServer.ini has the proper entries for the index server, queue server and the index ID. If necessary, add these entries manually or restore them from the backup version.
    2960 The index server (for the specified indexes) cannot be determined Check whether TREXNameServer is being used and whether it is properly configured and running.
    2982 A communication error occurred with the TREX TcpIp server. Check whether TREXIndexServer is running.
    2988 Missing argument Possible reason: search query is not correct
    2990 codepage error Possible reason: Codepage is not supported. 
    2998 dll can not be loaded Possible reason: in R/3 system check if in Transaction SRMO during the creation of SSR was selected DRFUZZY as searchengine. 
    6001 QS error: index not found Possible reasons: the index was not deleted completely (the queue still exists), or the queue was created from TREXQueueClient. Solution: delete the queue through TREXQueueClient.
    6002 QS error: queue not found The queue has been deleted manually or was never created.
    6009 QS error: no connection to QueueServer The queue server did not start. Also check whether the TREXIndexServer.ini file has appropriate entries for the index server, queue server, and index ID. If necessary, add these entries manually or restore them from the backup version.
    6017 QS error: queue not deleted The queue can not be deleted. The TREX index server, preprocessor, and queue server must be stopped and the queue index must then be deleted manually. 
    6021 QS error: queue exists The queue wasn't deleted when you deleted the index. This error occurred when you tried to create an index with the same ID. You have to delete the entries from this index from TREXTcpipClient.ini. 
    6401 HTTP Status Code 401 : Unauthorized Possible reason: TREX-user has no permissions for this document
    6403 HTTP Status Code 403 : Forbidden Possible reason: TREX-user has no permissions for this document
    6404 HTTP Status Code 404 : Not Found Possible reason: the document is not available
    6806 Preprocessor: filter error check if the document has correct mime-type and if necessary test with the filter.exe
    6906 Attribute engine index not found Subfolder attribute for index was not created with the index. Solution: 1. Stop the TREXServices. 2. Delete the index directory and the entries in bartho.ini, TREXIndexServer.ini and TREXTcpipClient.ini manually. 3. Start the TREXServices 4. if necessary delete index in R/3 system (for non Portal scenarios) and clean tables. 
    6915 UNKNOWN_ATTRIBUTE (AttributeEngine) check the search query. Requested attribute is not available in the index.
    Message was edited by: Patricio Garcia

  • TNS: getServer failed for index

    Hi everyone,
       i have installed TREX 7 & have created 1 index... it works fine & i am able to search files from EP.....
    but when I created an identical index using the same CM (a different folder, of course), it doesn't get indexed... the index tries to optimize and fails... with an "Index Optimization failed" status in the queue...
    in the trace file I found "TNS: getServer failed for index xxxxx" repeatedly......
    I also noticed that if I give it a folder with a small number of files, the same index works fine, but if I give it a folder with around 100MB, then the indexing goes into Optimizing and fails...
    can anyone please help....

    It was a migration from one machine to another, where a TREX was already running. Then TREX was upgraded from
    SP 9 to SP 16.
    Portal and KM has been upgraded as well from SP 9 to SP 16.
    Couldn't it be a TREX misconfiguration due to the copy of TREX configuration files from one machine to another?
    Davide

  • How to check the verity version in our PeopleSoft Installation?

    How do I check the Verity version in our PeopleSoft installation? I am not sure whether Verity is installed, and if it is installed, what the version is.

    Yes, it says the version is 5.0.1.
    Is there any difference in installation or configuration when the app and web server are on the same machine versus when they are installed on different servers?
    ============================================
    D:\fs840\webserv\peoplesoft>mkvdk
    mkvdk - Verity, Inc. Version 5.0.1 (_nti40, Jul 23 2004)
    Usage: mkvdk [<option>...] <filespec>...
    Where <option> can be a VDK switch, or any of:
    -about Show the collection's about resources
    -autodel Delete bulk insert file when no longer needed
    -backup <dir> Specify collection backup location
    -bulk Submit bulk insert file(s)
    -charmap <name> Specify the character map to VDK
    -collection <path> Specify the collection (required)
    -create Create the collection
    -credentials <user> Specify user[:passwd][:domain][:mailbox]
    -datapath <path> Specify VDK datapath
    -datefmt <fmt> Specify date format to VDK
    -debug Enable debugging output
    -delete Delete documents
    -description <desc> Set the collection's description
    -diskcache <num> Set VDK's disk cache size (kbytes)
    -extract Extract field values from text
    -help Print this usage information
    -insert Insert documents (default)
    -locale <locale> Specify the locale to VDK
    -logfile <file> Save output in a log file
    -loglevel <num> Set the VDK output level for the log
    -mailboxes This option is depracated. Use the credentials option instead
    -maxfiles <num> Set VDK's maximum number of open files
    -maxmemory <num> Set VDK's maximum memory usage (kbytes)
    -mode <mode> Set the indexing mode
    -modify Modify fields using field/value pairs from a bulkfile
    -nohousekeep Disable housekeeping
    -noindex Disable indexing
    -nolock Turns off locking (dangerous)
    -nooptimize Disable optimizations
    -nosave Don't save collection work list
    -noservice Prevents servicing of submitted work
    -nosubmit Don't submit work to VDK
    -numdocs <num> Number of documents to insert from bulk insert file(s)
    -numpages <num> Synonym for diskcache for backward compatibility
    -offset <num> Specify offset into bulk insert file(s)
    -online Flag for online Bulk Modify
    -optimize <spec> Optimize the collection
    -outlevel <num> Set the VDK output level
    -persist Service the collection forever
    -purge Remove all documents from collection
    -purgeback Purge in the background
    -purgewait <secs> Specify delay before purge
    -quiet Suppress all non-error messages
    -repair Repair the collection
    -servlev <spec> Advanced option for overriding service level
    -sleeptime <secs> Interval between service calls for persist
    -style <dir> Specify style directory for create
    -submit Synonym for noservice for backward compatibility
    -synch Perform work synchronously
    -topicset <path> Specify VDK topic set
    -update Update documents
    -vdkhome <path> Specify VDK home
    -verbose Output more information
    -words Build word assist list
    -wordindex Build word assist index
    The <spec> for -optimize is a hyphenated string of:
    maxmerge Perform maximal merging of partitions
    squeeze Recover space from deleted documents
    vdbopt Build optimized VDB's
    spanword Create word list spanning all partitions
    ngramindex Create ngram index into spanning word list
    maxclean Really clean (not for read-write)
    readonly Make the collection read-only
    tuneup Fully optimize for read-write use
    publish Fully optimize for read-only use
    The <spec> for -servlev is a hyphenated string of:
    search Enable search and retrieval
    insert Enable adding and updating documents
    optimize Enable opportunistic collection optimization
    assist Enable building of word list
    housekeep Enable housekeeping of unneeded files
    delete Enable document deletion
    backup Enable backup
    purge Enable background purging
    repair Enable collection repair
    dataprep Same as search-index-optimize-assist-housekeep
    index Same as insert-delete
    Error: must specify collection
    mkvdk done
    D:\fs840\webserv\peoplesoft>

  • Sql tuning query running slow -2

    oracle : 10g
    os : linux
    SELECT *
      FROM (SELECT elance_paginated_data.*, ROWNUM elance_current_row_number
              FROM (SELECT elance_original_data.*
                      FROM (SELECT   /*+first_rows ordered use_nl(cont) use_nl(conthub) use_nl(sps) */
                                     cont.ID AS workorderid,
                                     cont.status_id AS statusid,
                                     cont.code AS workordercode,
                                     cont.NAME AS workordertitle,
                                     cont.description AS workorderdescription,
                                     cont.start_dt AS startdate,
                                     cont.end_dt AS enddate,
                                     cont.termination_dt AS terminationdate,
                                     cont.user_id AS hiring_manager_user_id,
                                     cont.org_id AS org_id,
                                     conthub.user_id AS contractorid,
                                --     cand.last_name AS last_name,
                                        cand.last_name
                                     || ',  '
                                     || cand.first_name AS candidatename,
                                     cand.became_userid AS became_userid,
                                     su.display_name AS hiring_manager_name,
                                     ord.cat_id AS cat_id, ord.ID AS order_id,
                                     ord.code AS order_code,
                                     ord.NAME AS order_name,
                                     ord.version_number AS version_number
                                FROM spm_contracts cont,
                                     spm_contracts_hub conthub,
                                     spm_candidates cand,
                                     spm_users su,
                                     spm_pmt_sched_hub spsh,
                                     spm_payment_schedule sps,
                                     spm_contracts ord
                               WHERE cont.owner_id = 4000 /* Change for GE env. : changed value from 100 to 4000 */
                                 AND cont.contract_type_id = 323
                                 AND cont.status_id IN (8705, 8709, 8708, 8702)
                                 AND EXISTS (
                                        SELECT 1
                                          FROM spm_user_folder_details ufd
                                         WHERE ufd.contract_id = cont.ID
                                           AND ufd.user_id IN (
                                                  SELECT ru.role_id
                                                    FROM spm_role_users ru,
                                                         spm_roles r
                                                   WHERE ru.user_id = 29 /* Change for GE env. : changed value from 257698 to 29 */
                                                     AND ru.role_id = r.ID
                                                     AND r.type_of =
                                                                 'ROLE_GROUP_TYPE'
                                                  UNION ALL
                                                  SELECT 29 /* Change for GE env. : changed value from 257698 to 29 */
                                                    FROM DUAL))
                                 AND NOT EXISTS (
                                        SELECT /*+ use_nl(pcr) */ 1
                                          FROM spm_contract_pcr pcr,
                                               spm_contracts ord2
                                         WHERE pcr.contract_id_child = ord.ID
                                           AND pcr.contract_id_parent = ord2.ID
                                           AND ord2.status_id = 2612
                                           AND pcr.type_of_relation IN
                                                  ('CONTR_NEW_VERSION',
                                                    'CONTRACTOR_REENGAGED'))
                                 AND cont.ID = conthub.contract_id
                                 AND conthub.type_of = 'CANDIDATE'
                                 AND conthub.candidate_id = cand.ID
                                 AND cont.user_id = su.ID
                                 AND cont.ID = spsh.contract_id
                                 AND spsh.pmt_sched_id = sps.ID
                                 AND sps.contract_id = ord.ID
                            ORDER BY workorderid DESC) elance_original_data) elance_paginated_data
             WHERE ROWNUM <= 31)
    WHERE elance_current_row_number >= 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.60       0.57          1        148          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        4     12.70      21.37       3459     481585          0          31
    total        6     13.30      21.94       3460     481733          0          31
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 21  (TLADM01)
    Rows     Row Source Operation
         31  VIEW  (cr=481585 pr=3459 pw=0 time=21365544 us)
         31   COUNT STOPKEY (cr=481585 pr=3459 pw=0 time=21365480 us)
         31    FILTER  (cr=481585 pr=3459 pw=0 time=21365455 us)
         31     VIEW  (cr=481491 pr=3459 pw=0 time=21366696 us)
         31      SORT ORDER BY (cr=481491 pr=3459 pw=0 time=21366310 us)
      32745       NESTED LOOPS  (cr=481491 pr=3459 pw=0 time=19266638 us)
      32745        NESTED LOOPS  (cr=415999 pr=3459 pw=0 time=18415173 us)
      32745         HASH JOIN  (cr=350507 pr=2907 pw=0 time=16695622 us)
      32745          HASH JOIN  (cr=349756 pr=2823 pw=0 time=16056095 us)
      32745           HASH JOIN  (cr=349581 pr=2780 pw=0 time=16950617 us)
      32745            TABLE ACCESS BY INDEX ROWID SPM_CONTRACTS_HUB (cr=347407 pr=610 pw=0 time=8351793 us)
      65491             NESTED LOOPS  (cr=332023 pr=338 pw=0 time=105243492 us)
      32745              NESTED LOOPS  (cr=299202 pr=298 pw=0 time=6616161 us)
    149239               VIEW  VW_SQ_2 (cr=724 pr=53 pw=0 time=1118731 us)
    149239                HASH UNIQUE (cr=724 pr=53 pw=0 time=969487 us)
    197161                 NESTED LOOPS  (cr=724 pr=53 pw=0 time=394854 us)
          9                  VIEW  VW_NSO_1 (cr=100 pr=0 pw=0 time=1288 us)
          9                   UNION-ALL  (cr=100 pr=0 pw=0 time=1269 us)
          8                    NESTED LOOPS  (cr=100 pr=0 pw=0 time=1133 us)
         97                     INDEX RANGE SCAN ROLE_TYPE_NDX (cr=1 pr=0 pw=0 time=514 us)(object id 30685)
          8                     INDEX UNIQUE SCAN ROLE_USERS_PK (cr=99 pr=0 pw=0 time=1140 us)(object id 30691)
          1                    FAST DUAL  (cr=0 pr=0 pw=0 time=4 us)
    197161                  INDEX RANGE SCAN UFD_CONTRACT_FK_I (cr=624 pr=53 pw=0 time=197643 us)(object id 30848)
      32745               TABLE ACCESS BY INDEX ROWID SPM_CONTRACTS (cr=298478 pr=245 pw=0 time=7787761 us)
    149238                INDEX UNIQUE SCAN CONTRACTS_PK (cr=149240 pr=59 pw=0 time=2627776 us)(object id 30085)
      32745              INDEX RANGE SCAN CONT_HUB_CONTRACT_FK_I (cr=32821 pr=40 pw=0 time=1129934 us)(object id 30106)
      44590            TABLE ACCESS FULL SPM_CANDIDATES (cr=2174 pr=2170 pw=0 time=695935 us)
      35864           INDEX FAST FULL SCAN SPM_USERS_SNZ1_IDX (cr=175 pr=43 pw=0 time=49486 us)(object id 30832)
      44547          INDEX FAST FULL SCAN PMT_SCHED_HUB_IDX_1 (cr=751 pr=84 pw=0 time=101490 us)(object id 30474)
      32745         TABLE ACCESS BY INDEX ROWID SPM_PAYMENT_SCHEDULE (cr=65492 pr=552 pw=0 time=3084181 us)
      32745          INDEX UNIQUE SCAN PMT_SCHED_PK (cr=32747 pr=34 pw=0 time=512373 us)(object id 30427)
      32745        TABLE ACCESS BY INDEX ROWID SPM_CONTRACTS (cr=65492 pr=0 pw=0 time=820633 us)
      32745         INDEX UNIQUE SCAN CONTRACTS_PK (cr=32747 pr=0 pw=0 time=344850 us)(object id 30085)
          0     NESTED LOOPS  (cr=94 pr=0 pw=0 time=3048 us)
          8      TABLE ACCESS BY INDEX ROWID SPM_CONTRACT_PCR (cr=70 pr=0 pw=0 time=2354 us)
          8       INDEX RANGE SCAN CONTRACT_PCR_CHILD_FK_I (cr=62 pr=0 pw=0 time=1915 us)(object id 30116)
          0      TABLE ACCESS BY INDEX ROWID SPM_CONTRACTS (cr=24 pr=0 pw=0 time=583 us)
          8       INDEX UNIQUE SCAN CONTRACTS_PK (cr=16 pr=0 pw=0 time=328 us)(object id 30085)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: HINT: FIRST_ROWS
         31   VIEW
         31    COUNT (STOPKEY)
         31     FILTER
         31      VIEW
         31       SORT (ORDER BY)
      32745        NESTED LOOPS
      32745         NESTED LOOPS
      32745          HASH JOIN
      32745           HASH JOIN
      32745            HASH JOIN
      32745             TABLE ACCESS   MODE: ANALYZED (BY INDEX
                            ROWID) OF 'SPM_CONTRACTS_HUB' (TABLE)
      65491              NESTED LOOPS
      32745               NESTED LOOPS
    149239                VIEW OF 'VW_SQ_2' (VIEW)
    149239                 HASH (UNIQUE)
    197161                  NESTED LOOPS
          9                   VIEW OF 'VW_NSO_1' (VIEW)
          9                    UNION-ALL
          8                     NESTED LOOPS
         97                      INDEX   MODE: ANALYZED
                                     (RANGE SCAN) OF 'ROLE_TYPE_NDX' (INDEX)
          8                      INDEX   MODE: ANALYZED
                                   (UNIQUE SCAN) OF 'ROLE_USERS_PK' (INDEX
                                     (UNIQUE))
          1                     FAST DUAL
    197161                   INDEX   MODE: ANALYZED (RANGE
                                  SCAN) OF 'UFD_CONTRACT_FK_I' (INDEX)
      32745                TABLE ACCESS   MODE: ANALYZED (BY
                               INDEX ROWID) OF 'SPM_CONTRACTS' (TABLE)
    149238                 INDEX   MODE: ANALYZED (UNIQUE SCAN)
                                OF 'CONTRACTS_PK' (INDEX (UNIQUE))
      32745               INDEX   MODE: ANALYZED (RANGE SCAN) OF
                              'CONT_HUB_CONTRACT_FK_I' (INDEX)
      44590             TABLE ACCESS   MODE: ANALYZED (FULL) OF
                            'SPM_CANDIDATES' (TABLE)
      35864            INDEX   MODE: ANALYZED (FAST FULL SCAN) OF
                           'SPM_USERS_SNZ1_IDX' (INDEX)
      44547           INDEX   MODE: ANALYZED (FAST FULL SCAN) OF
                          'PMT_SCHED_HUB_IDX_1' (INDEX)
      32745          TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                         'SPM_PAYMENT_SCHEDULE' (TABLE)
      32745           INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                          'PMT_SCHED_PK' (INDEX (UNIQUE))
      32745         TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                        'SPM_CONTRACTS' (TABLE)
      32745          INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                         'CONTRACTS_PK' (INDEX (UNIQUE))
          0      NESTED LOOPS
          8       TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                      'SPM_CONTRACT_PCR' (TABLE)
          8        INDEX   MODE: ANALYZED (RANGE SCAN) OF
                       'CONTRACT_PCR_CHILD_FK_I' (INDEX)
          0       TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                      'SPM_CONTRACTS' (TABLE)
          8        INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'CONTRACTS_PK'
                        (INDEX (UNIQUE))
    Output of tkprof attached.

    Hi Tom,
    just a very basic hint:
    Are the statistics computed on your schema? It definitely helps the optimizer choose the right path to access the data.
    With up-to-date statistics, I did not experience major performance problems while using the standard (Java) search method.
    Regards
    Daniel
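    A minimal sketch of that suggestion (the schema name TLADM01 is taken from the tkprof header above; adjust the options to your environment):
    begin
      dbms_stats.gather_schema_stats(ownname => 'TLADM01',
                                     cascade => true);  -- also gathers index statistics
    end;
    /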
