CF9 and SOLR indexing

We are using CF9 64-bit and setting up a SOLR collection for an HR application. The database contains several million records and includes resumes that we want to do full text searches on.
We started out by using cfindex to create the index, but it would bomb out after just a few thousand records with an error about "warming searchers" (I don't have the exact error handy but can get it later), and the indexing had to be restarted manually. That isn't workable for a multi-million-record operation.
Next, we created a custom Data Import Handler (DIH) outside of CF using the instructions in the Solr wiki. This index worked great and was very fast. However, the ColdFusion tags (cfsearch, etc.) would not work with it, even though we made sure to duplicate the required nodes (<custom1>, <custom2>, etc.) that the cfindex tag would have created. We still cannot search that index.
We'd really rather not reinvent the wheel and have to write custom search code. Obviously, we like using CF and it would be great if we can use the built-in indexing and searching capability.
Any ideas on how we can either 1) make cfindex work without stopping, or 2) go ahead and use the custom DIH and make cfsearch work properly against it?
Dana
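
A likely cause, for what it's worth: every cfindex call issues its own Solr commit, each commit opens and warms a new searcher, and once too many of them overlap, Solr refuses with the maxWarmingSearchers error. Committing once per batch instead of once per record usually avoids this. Below is a minimal sketch of batched indexing using cfindex's query attribute; the datasource, table, column names, collection name, and maxResumeId are all hypothetical:
<!--- maxResumeId assumed fetched beforehand (e.g. SELECT MAX(resume_id) FROM resumes). --->
<cfset batchSize = 500>
<cfloop from="0" to="#maxResumeId#" step="#batchSize#" index="startId">
    <!--- Fetch one batch of rows (hypothetical table/columns). --->
    <cfquery name="qResumes" datasource="hr_dsn">
        SELECT resume_id, full_name, resume_text
        FROM   resumes
        WHERE  resume_id >  <cfqueryparam value="#startId#" cfsqltype="cf_sql_integer">
        AND    resume_id <= <cfqueryparam value="#startId + batchSize#" cfsqltype="cf_sql_integer">
    </cfquery>
    <!--- One cfindex call per batch = one Solr commit per 500 rows,
          instead of one commit per record. --->
    <cfif qResumes.recordCount>
        <cfindex collection="hrcollection"
                 action="update"
                 type="custom"
                 query="qResumes"
                 key="resume_id"
                 title="full_name"
                 body="resume_text">
    </cfif>
</cfloop>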

I only have just over 500 records to index, though some of them are large documents. When I loop through them with cfindex I also get this error:
Error_opening_new_searcher_exceeded_limit_of_maxWarmingSearchers4_try_again_later
I found that if I put this in my loop
<cfscript>
    // Pause one second between cfindex calls so the commits stop overlapping.
    thread = CreateObject("java", "java.lang.Thread");
    thread.sleep(1000);
</cfscript>
then I no longer get the error, but indexing takes a very long time. I would also like a better solution.
The ColdFusion debugger shows it erroring out on the custom4 field; I don't know whether the custom fields struggle more than the main body field. Anyway, I am continuing to research my options.
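For what it's worth, two things seem worth trying here. The maxWarmingSearchers limit itself lives in the collection's solrconfig.xml (in the collection's conf directory) and can be raised, but the cleaner fix is to drop both the loop and the sleep: with type="custom", a single cfindex call indexes every row of a query in one operation, so Solr sees exactly one commit. A sketch with hypothetical datasource and column names, including a custom4 mapping since that is where yours errors out:
<!--- Hypothetical table/columns; pulls all ~500 rows at once. --->
<cfquery name="qDocs" datasource="my_dsn">
    SELECT doc_id, doc_title, doc_body, doc_category
    FROM   documents
</cfquery>
<!--- One call, one commit: no warming-searcher pile-up, no sleep needed. --->
<cfindex collection="mycollection"
         action="update"
         type="custom"
         query="qDocs"
         key="doc_id"
         title="doc_title"
         body="doc_body"
         custom4="doc_category">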

Similar Messages

  • CF9 and Verity indexing error - Linux 64

    I am running CF 9.0.1 Standard Edition on an openSUSE 11.3 64-bit server, with Apache 2.2.15.
    I am trying to create a verity collection on the server.  The service is running as the collection is created without a problem, but when I attempt to create the index, either through the administration interface or with createindex, I receive the following errors in the sysinfo.log file.
    msg(1): Error   E0-0720 (I/O Filter): Could not load filter 'flt_kv -recognize -bifmime' which is named in your style.uni file.
    Thu Mar 10 09:44:25 2011
    msg(1): Warn    E2-0527 (Document Index): Document 8899 (/export/www/htdocs/lter/googlee5473098f17d334b.html): Stream error (-2) - SKIPPING
    Thu Mar 10 09:44:25 2011
    msg(1): Error   E0-0720 (I/O Filter): Could not load filter 'flt_kv -recognize -bifmime' which is named in your style.uni file.
    Thu Mar 10 09:44:25 2011
    msg(1): Warn    E2-0527 (Document Index): Document 8900 (/export/www/htdocs/lter/data.cfm): Stream error (-2) - SKIPPING
    The log file contains an entry for every document that it attempted to index, and the index remains empty.
    Searching for this problem has not been helpful; I cannot seem to find any recent instances of it.

    Hi,
    It looks like Verity does not recognise the filter 'flt_kv -recognize -bifmime' named in your style.uni file. Check style.uni and add the MIME-type reference.
    Switch on Verity logging and check the Apache log files for possibly related errors.
    The error on ../lter/googlee*.html is probably MIME-type related as well, likely the same issue as above, which would also explain why ../lter/data.cfm is not being processed.
    As the service is running, it would be worth creating a manual test spider script to crawl and index the target files to get a confirmatory error message. The format is straightforward and examples can be found in and amongst the Verity K2 docs on Adobe Live Docs for CF 9 [http://help.adobe.com/en_US/ColdFusion/9.0/Admin/WSc3ff6d0ea77859461172e0811cbf364104-7fb2.html].
    If possible, consider changing the search to Solr, as Verity K2 is no longer supported and Apache Solr is the replacement. Again, check the CF 9 docs; you can migrate from Verity to Solr through the CFIDE administrator.
    Hope this helps.

  • CF 9 and Solr indexing

    I am having a problem doing a looped cfindex to a Solr collection on a large group of documents (HTML, PDF, TXT, etc.). If I run the looped cfindex on my XP dev machine, indexing, for example, 100 records of the same type (HTML), the routine runs flawlessly. If I run the same on my Windows 2003 production server, I get the following error after 4 or so documents:
    Error opening new searcher: exceeded limit of maxWarmingSearchers=4, try again later. Request: http://localhost:8983/solr/casecollection/update?commit=true&waitFlush=false&waitSearcher=false&wt=javabin&version=1
    If I index 4 records at a time, there are no errors. If I increase it to 5 records, the errors start.
    I need to index 60k+ files, so this is a bit of a concern. Any suggestions?

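    No answer was recorded for this one, so a hedged suggestion: the URL in the error shows ColdFusion appending commit=true to every update, so each looped cfindex call triggers its own commit, and once five of them overlap, the maxWarmingSearchers=4 ceiling is hit. For a 60k-file job, one option is to let a single cfindex call walk the directory tree itself, which produces one commit for the whole pass; the paths and extensions below are placeholders:
    <!--- One call, one commit: index an entire directory tree (hypothetical paths). --->
    <cfindex collection="casecollection"
             action="update"
             type="path"
             key="D:\case_files\"
             extensions=".html, .pdf, .txt"
             recurse="true"
             urlpath="http://example.com/case_files/">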

  • ColdFusion 11 and Solr

    I just installed ColdFusion 11. I am pretty sure I selected the option to install the add-ons like Solr, but in the ColdFusion Administrator under Data & Services, when I click ColdFusion Collections I get nothing; it won't go to the page at all. If I click on Solr Services, a page comes up. If I click on ColdFusion Collections and then restart the ColdFusion add-ons, I get a page saying:
    "Unable to retrieve collections from the Search Services. Ensure that you have installed ColdFusion Search Service and it is running."
    I am assuming that means it isn't installed.
    So I went to Adobe - ColdFusion Support Center : More Downloads and downloaded/installed the Windows Add-on Services Standalone Installer. I didn't change any of the settings or folders. I restarted the server, logged back into the ColdFusion Administrator, and I see the same thing; nothing changed. When I look at the file system I have C:\ColdFusion11 and C:\ColdFusionAdd-onServices. Should the Add-on Services folder have been within the ColdFusion11 folder?
    I read that you can create a collection either through the Administrator or by coding a page, so I thought I would try it that way. I created a page to create the collection and it did not work either.
    What am I missing? Did I miss a step to make this work?
    I would appreciate any help I can get.
    I have a Windows 2008 server.

    Here are just a few of the solr files for you to look at. They all appear to be SUCCESSFUL.
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\abc
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\abo
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\backup
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\backupcleaner
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\commit
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\optimize
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\readercycle
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\rsyncd-disable
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\rsyncd-enable
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\rsyncd-start
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\rsyncd-stop
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\scripts-util
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\snapcleaner
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\snapinstaller
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\snappuller
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\snappuller-disable
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\snappuller-enable
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\bin\snapshooter
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\conf\admin-extra.html
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\conf\elevate.xml
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\conf\mapping-ISOLatin1Accent.txt
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\conf\protwords.txt
                              Status: SUCCESSFUL
    Install File:             C:\ColdFusion11\cfusion\jetty\solr\conf\schema.xml
                              Status: SUCCESSFUL
    In the coldfusion-out.log file it all appears OK as well.
    I can see things like this that shows solr is starting:
    Apr 6, 2015 15:16:54 PM Information [localhost-startStop-1] - Starting jaxrs...
    Apr 6, 2015 15:16:54 PM Information [localhost-startStop-1] - Starting graphing...
    Apr 6, 2015 15:16:55 PM Information [localhost-startStop-1] - Starting solr...
    Apr 6, 2015 15:16:55 PM Information [localhost-startStop-1] - Starting archive...
    Apr 6, 2015 15:16:55 PM Information [localhost-startStop-1] - Starting document...
    Apr 6, 2015 15:16:55 PM Information [localhost-startStop-1] - Starting eventgateway...
    Apr 6, 2015 15:16:55 PM Information [localhost-startStop-1] - Event Gateway Disabled.
    In the same log, when I click on ColdFusion Collections in the ColdFusion Administrator, I see this:
    Apr 24, 2015 10:12:21 AM Error [ajp-bio-8014-exec-6] - The request has exceeded the allowable time limit Tag: cfoutput The specific sequence of files included or processed is: C:\ColdFusion11\cfusion\wwwroot\CFIDE\administrator\solr\index.cfm, line: 331
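    Since the Administrator page itself is timing out, a quick smoke test outside the admin UI can show whether the Search Service is reachable at all. A minimal sketch, run from any test .cfm page:
    <!--- List the collections registered with the Solr service. --->
    <cfcollection action="list" name="qCollections" engine="solr">
    <cfdump var="#qCollections#">
    If this hangs or errors as well, the add-on service itself is not reachable: check that the "ColdFusion 11 Add-on Services" Windows service is running and that the Solr host and port configured under Data & Services > Solr Server in the Administrator match the standalone installation. As far as I know, a separate C:\ColdFusionAdd-onServices folder is the standalone installer's default, so the folder location by itself should not be the problem.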

  • What is the difference between sy-tabix and sy-index

    hi
    can anyone tell me:
    what is the difference between sy-tabix and sy-index?
    Thanks & Regards
    kalyan.

    Hi Kalyan,
    This question has been answered many times on SCN. Please search before posting a thread.
    Read the Rules of Engagement.
    Happy Posting.
    Regards,
    Chandra Sekhar

  • ABAP -- difference between sy-tabix and sy-index

    Hi Guru's,
    Please can anybody explain to me the difference between sy-tabix and sy-index (the loop index)?
    In one case I am modifying the internal table inside a DO loop using sy-index (MODIFY scarr_tab INDEX sy-index FROM scarr_wa TRANSPORTING currcode.), and in the other case, inside a LOOP statement, I am modifying the same record using sy-tabix (MODIFY scarr_tab INDEX sy-tabix FROM scarr_wa TRANSPORTING currcode.).
    Both cases work fine, but I do not understand which one I should use where when modifying the internal table.
    regards
    SATYA

    Hi Henry,
    SY-INDEX is the value of the current iteration. It is applicable for the following programming constructs in ABAP -
    DO...ENDDO.
    WHILE...ENDWHILE.
    SY-TABIX (TABle IndeX) is applicable to internal tables. If you scroll down in the link which Eddie has given, you will find a more detailed explanation for sy-tabix and which statements affect its value.
    Regards,
    Anand Mandalika.

  • Difference between sy-tabix and sy-index?

    Tell me about sy-tabix and sy-index. What is the difference between sy-tabix and sy-index?
    Moderator Message: Please search before posting. Read the [Forum Rules Of Engagement |https://wiki.sdn.sap.com/wiki/display/HOME/RulesofEngagement] for further details.
    Edited by: Suhas Saha on Jun 18, 2011 5:33 PM

    HI,
    Here is a brief description of the difference between SY-TABIX and SY-INDEX and of the statements that affect them.
    SY-TABIX
    Current line of an internal table. SY-TABIX is set by the statements below, but only for index tables. The field is either not set or is set to 0 for hashed tables.
    APPEND sets SY-TABIX to the index of the last line of the table, that is, it contains the overall number of entries in the table.
    COLLECT sets SY-TABIX to the index of the existing or inserted line in the table. If the table has the type HASHED TABLE, SY-TABIX is set to 0.
    LOOP AT sets SY-TABIX to the index of the current line at the beginning of each loop pass. At the end of the loop, SY-TABIX is reset to the value that it had before entering the loop. It is set to 0 if the table has the type HASHED TABLE.
    READ TABLE sets SY-TABIX to the index of the table line read. If you use a binary search and the system does not find a line, SY-TABIX contains the total number of lines, or one more than the total number of lines. SY-TABIX is undefined if a linear search fails to return an entry.
    SEARCH <itab> FOR sets SY-TABIX to the index of the table line in which the search string is found.
    SY-INDEX
    In a DO or WHILE loop, SY-INDEX contains the number of loop passes including the current pass.
    Hope this helps.
    Thank you,
    Pavan.
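    A minimal ABAP sketch contrasting the two fields (table and values are arbitrary):
    DATA: itab TYPE STANDARD TABLE OF i,
          wa   TYPE i.

    APPEND 10 TO itab.
    APPEND 20 TO itab.
    APPEND 30 TO itab.

    " SY-INDEX counts the passes of DO/WHILE loops.
    DO 2 TIMES.
      WRITE: / 'sy-index =', sy-index.      " prints 1, then 2
    ENDDO.

    " SY-TABIX is the row number of the current internal-table line.
    LOOP AT itab INTO wa.
      WRITE: / 'sy-tabix =', sy-tabix, wa.  " prints 1/10, 2/20, 3/30
    ENDLOOP.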

  • Diff b/w btree and bitmap index ?

    What is the difference between a B-tree and a bitmap index?
    Which one should be used, and when?
    How do they differ from each other?

    you'd love to see
    http://www.oracle.com/technology/pub/articles/sharma_indexes.html
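    The linked article goes into depth; the short version is that a B-tree index (the default) suits high-cardinality columns and OLTP-style lookups, while a bitmap index suits low-cardinality columns in read-mostly, warehouse-style workloads and is expensive to maintain under concurrent DML. A quick syntax sketch on a hypothetical employees table:
    -- Default B-tree index: good for a high-cardinality column.
    CREATE INDEX emp_id_ix ON employees (employee_id);

    -- Bitmap index: good for a column with few distinct values,
    -- but costly under heavy concurrent DML.
    CREATE BITMAP INDEX emp_gender_bix ON employees (gender);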

  • What is the difference between "Invisible" (11g) and "virtual" index?

    Hi
    What is the difference between the "Invisible" index and "virtual" index?
    Thanks
    Balaji

    Indexes can be visible or invisible. An invisible index is maintained by DML operations but cannot be used by the optimizer. It actually takes up space, but is not considered as part of a potential access path.
    AFAIK, a virtual index is created by the tools used in SQL-statement access-path tuning to provide an alternative for the optimizer to test. It does not take any real space, as it is a purely in-memory definition.
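    A short sketch of both on a hypothetical employees table (NOSEGMENT being, as far as I know, the syntax the tuning tools use under the covers for virtual indexes):
    -- Invisible index (11g): physically built and maintained by DML,
    -- but ignored by the optimizer unless the session opts in.
    CREATE INDEX emp_lname_ix ON employees (last_name) INVISIBLE;
    ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

    -- Virtual (NOSEGMENT) index: a dictionary definition only, no segment,
    -- used to test whether the optimizer would pick such an index.
    CREATE INDEX emp_lname_vix ON employees (last_name) NOSEGMENT;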

  • Access path difference between Primary Key and Unique Index

    Hi All,
    Is there any specific way the oracle optimizer treats Primary key and Unique index differently?
    Oracle Version
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL> Sample test data for Normal Index
    SQL> create table t_test_tab(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
    Sequence created.
    SQL>  insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
            259  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> Sample test data when using Primary Key
    SQL> create table t_test_tab1(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab1 start with 1 increment by 1 ;
    Sequence created.
    SQL> insert into t_test_tab1 select seq_t_test_tab1.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1727568366
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB1 | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> alter table t_test_tab1 add constraint pk_t_test_tab1 primary key (col1);
    Table altered.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 2995826579
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    59   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| PK_T_TEST_TAB1 | 99999 |   488K|    59   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6867  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>
    As you can see, even though statistics were gathered in both cases:
         * the first table, T_TEST_TAB, still uses FULL table access after the index was created;
         * the second table, T_TEST_TAB1, uses the PRIMARY KEY index as expected.
    Any comments ??
    Regards,
    BPat

    Thanks.
    Yes, I had ignored the NOT NULL part: a plain B-tree index does not include rows where the indexed column is NULL, so the optimizer can substitute an index fast full scan for a full table scan only when the column is guaranteed NOT NULL (which a primary key enforces). I did a test and now it is working as expected.
    SQL>  create table t_test_tab(col1 number not null, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
    Sequence created.
    SQL> insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  set autotrace traceonly
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6912  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>  create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 4115006285
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    63   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| IDX_T_TEST_TAB | 99999 |   488K|    63   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6881  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>

  • Difference between primary key and primary index

    Dear All,
    Hi... Could you please tell me the difference between a primary key and a primary index?
    Thanks...

    Hi,
    Primary Key: it is what makes an entry of the field unique. No two distinct rows in a table can have the same value (or combination of values) in those columns.
    Eg: if the first entry is 111 and you enter the value 111 again, it does not allow 111 a second time; similarly for strings, characters, NUMC, etc. Remember that for CHAR, NUMC or STRING, 'NAME' is not equal to 'name'.
    Primary Index: this is related to performance. A database index is a data structure that improves the speed of operations on a table. Indices can be created using one or more columns, providing the basis for both rapid random lookups and efficient ordering of access to records. The disk space required to store the index is typically less than that required for the table (since indices usually contain only the key fields according to which the table is to be arranged, and exclude all the other details in the table), making it possible to hold in memory indices for tables that would not fit there themselves. In a relational database an index is a copy of part of a table. Some databases extend the power of indexing by allowing indices to be created on functions or expressions. For example, an index could be created on upper(last_name), which would store only the uppercase versions of the last_name field in the index.
    In a database we may have a large number of records, and retrieving data based on a condition is a burden on the DB server; so whenever we create a primary key, a primary index is automatically created by the system.
    If you want to maintain indices on other fields which are frequently used in where condition then you can create secondary indices.
    Reward points if helpful.
    Thanks,
    Sirisha..

  • Difference between primary index and secondary index?

    Hi experts,
    please answer me:
    what is the difference between a primary index and a secondary index?
    Rewards apply.
    Thanks,
    Naresh.

    hi,
    check this link.
    http://help.sap.com/saphelp_47x200/helpdata/en/cf/21eb2d446011d189700000e8322d00/frameset.htm
    A distinction is made between primary and secondary indexes on a table. The primary index consists of the key fields of the table plus a pointer to the non-key fields of the table. The primary index is generated automatically when a table is created and is created in the database at the same time as the table. It is also possible to define further indexes on a table in the ABAP/4 Dictionary; these are referred to as secondary indexes.
    An index does not always have to contain all the key fields of a table. To see the indexes of a table,
    go to SE11, specify the table name, and click the Indexes... button on the application toolbar.
    Based on your requirement you can use any of those index fields in the WHERE clause of your query. It is always better practice to use the index fields in the order specified. When selecting records from a table, it is also better to select the fields in the same order as specified in the table.

  • Difference between unique constraint and unique index

    1. What is the difference between a unique constraint and a unique index, given that a unique constraint is always indexed? Which one is better for performance?
    2. Is a composite index on the 3 columns x, y, z better,
    or are independent/separate indexes on the 3 columns x, y, z better for performance?
    3. It has been very confusing for me to decide which columns to index. I have indexed most foreign-key columns; is that a good idea? We do a lot of selects and DML on most of our tables. Is there any query I can run to find out whether the indexes are really being used and whether they improve performance? I have analyzed and computed my indexes using ANALYZE INDEX index_name VALIDATE STRUCTURE and COMPUTE STATISTICS.

    1. A unique index is part of a unique constraint. Of course you can create a standalone unique index, but there is no point in skipping the logical business view when the same effort achieves both. When you create a unique constraint, Oracle creates the unique index for you, and you may specify the index characteristics in the unique constraint.
    2. It depends. You cannot use a composite index if the search condition does not cover the whole indexing key or its front part; for example, with a composite index on (x, y, z), you cannot use the index if you query the table for y = 2 alone.
    3. As the old saying goes in the database arena, an index may be good or bad for a table depending on the size of the table, the number of columns in the table, and so on. It is very environment-dependent; in fact, it is part of database normalization. Statistics are what Oracle uses to determine the execution plan.
    Steve
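    To illustrate point 2, a small sketch with a hypothetical table t:
    -- Composite index on (x, y, z): usable when the predicates cover
    -- a leading prefix of the key.
    CREATE INDEX t_xyz_ix ON t (x, y, z);

    SELECT * FROM t WHERE x = 1;            -- leading column: index usable
    SELECT * FROM t WHERE x = 1 AND y = 2;  -- leading prefix: index usable
    SELECT * FROM t WHERE y = 2;            -- no leading column: a normal
                                            -- range scan cannot use the index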

  • What's difference between ASC and DESC index

    1 select count(*)
    2* from big_emp e where hiredate >= to_date('1980-01-01', 'YYYY-MM-DD') and hiredate <= to_date('1983-12-31', 'YYYY-MM-DD')
    COUNT(*)
    11971
    SQL> create index i_big_emp_hiredate on big_emp(hiredate);
    Index created.
    SQL> set autot trace
    SQL> select empno, ename, hiredate
    2 from big_emp e where hiredate >= to_date('1980-01-01', 'YYYY-MM-DD') and hiredate <= to_date('1983-12-31', 'YYYY-MM-DD') ;
    11971 rows selected.
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 11766 | 218K| 19 |
    |* 1 | TABLE ACCESS FULL| BIG_EMP | 11766 | 218K| 19 |
    SQL> drop index i_big_emp_hiredate;
    Index dropped.
    SQL> create index i_big_emp_hiredate on big_emp(hiredate desc);
    Index created.
    SQL> select empno, ename, hiredate
    2 from big_emp e where hiredate >= to_date('1980-01-01', 'YYYY-MM-DD') and hiredate <= to_date('1983-12-31', 'YYYY-MM-DD') ;
    11971 rows selected.
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 29 | 551 | 4 |
    | 1 | TABLE ACCESS BY INDEX ROWID| BIG_EMP | 29 | 551 | 4 |
    |* 2 | INDEX RANGE SCAN | I_BIG_EMP_HIREDATE | 53 | | 2 |
    I have two questions:
    1. In "Expert One-on-One Oracle", Tom said there is no difference between an ASC and a DESC index in the single-column case, because Oracle can simply read the index in reverse order. But my test confused me: why did Oracle do a full table scan only with the ASC index?
    2. Using the "set autot trace" command, I believed the "Rows" column showed the rows that Oracle accesses. Can you explain why the rows are 29 (DESC) and 11766 (ASC) even though the query returns 11971 rows? What exactly does the "Rows" column in an execution plan mean?

    I think what you're seeing is a bug in the optimizer. If you had printed the predicate section of the execution plan, this would be more obvious. I have the query:
    select *
    from   t1
    where  d1 between to_date('01-jan-2001')
              and     to_date('31-dec-2003');
    This returns one row per day for 3 years, and when a normal index is created on the column, the optimizer calculates the correct cardinality and uses a sensible set of predicates. But when I use a descending index, this is what I get:
    Execution Plan
    Plan hash value: 1429545322
    | Id  | Operation                   | Name  | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT            |       |  1097 | 21940 |     2 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1    |  1097 | 21940 |     2 |
    |*  2 |   INDEX RANGE SCAN          | T1_I1 |     5 |       |     2 |
    Predicate Information (identified by operation id):
       2 - access(SYS_OP_DESCEND("D1")>=HEXTORAW('8798F3E0FEF8FEFAFF')  AND
                  SYS_OP_DESCEND("D1")<=HEXTORAW('879AFEF8FEF8FEFAFF') )
           filter(SYS_OP_UNDESCEND(SYS_OP_DESCEND("D1"))>=TO_DATE('2001-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
                  SYS_OP_UNDESCEND(SYS_OP_DESCEND("D1"))<=TO_DATE('2003-12-31 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
    Note the introduction of the strange sys_op_descend() function, which is related to the descending-index implementation, and the extra FILTER predicates, which introduce a significant extra selectivity effect. The optimizer is double-counting selectivity effects, introducing extra factors of 1% and 5% (I haven't checked the exact details) due to the functions applied to the columns and the range-based predicates.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • What is the difference between the drop and create the index and rebuild index ?

    Hi All,
    What is the difference between dropping and re-creating an index versus rebuilding it? I think both are the same... Please clarify whether they are the same or there is some difference...
    Thanks in Advance,
    rup

    They are essentially the same: rebuilding an index drops and re-creates it.
    Ref:
    SSMS - https://technet.microsoft.com/en-us/library/ms187874(v=sql.105).aspx
    TSQL - https://msdn.microsoft.com/en-us/library/ms188388.aspx
    I would suggest you to also refer one of the best index maintenance script as below:
    https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
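    For illustration, the two approaches in T-SQL (object names hypothetical). They end in the same place, though ALTER INDEX ... REBUILD avoids the window in which no index exists and, in Enterprise Edition, can run with ONLINE = ON:
    -- Drop and re-create (the index is missing between the two statements):
    DROP INDEX ix_orders_custid ON dbo.Orders;
    CREATE INDEX ix_orders_custid ON dbo.Orders (CustomerID);

    -- Rebuild in one statement:
    ALTER INDEX ix_orders_custid ON dbo.Orders REBUILD;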
