Pre-loading a table, populating data

Hopefully someone has already crossed this bridge.
Does anyone know how to pre-load a table and then populate it with the data returned from a query? I have seen websites like ingrammicro.com that display the results from a search where you can see the table and some of the data, and then slowly you start seeing pricing populated within the table. Maybe this is more of an HTML question, not sure. Thanks for any information you can give.
Thanks, Tim

Yes, that will work. Thanks.

Similar Messages

  • ORA-00054 error when loading Oracle table using Data Services

    Hello,
    we are facing ORA-00054 error when loading Oracle table using BO Data services
    (Oracle 10g database, BODS Xi 3.2 SP3)
    Test Job performs
    1- truncate table
    2- load table (tested in standard and bulk load modes)
    Scenario when issue happens is:
    1- Run loading Job
    2- Job ends in error due to some Oracle database error
    3- When re-running the same Job, the Job fails with the following error:
         ORA-00054: resource busy and acquire with NOWAIT specified
    It seems that after the first failure, the Oracle session used for loading the table stays active and locks the table.
    To be able to rerun the Job, we are forced to kill the Oracle session manually.
    The expected behaviour would be: on error, roll back the modifications made on the table and have BODS close the Oracle session cleanly.
    Can somebody tell me about, or point me to, any BODS best practice on Oracle error handling to prevent such a case?
    Thanks in advance
    Paul-Marie

    The ORA-00054 can occur depending on how the job failed before. If this occurs, you will need the DBA to release the lock on the table in question.
    Or:
    AL_engine.exe on the job server creates the lock; you need to kill it or stop it.
    This problem occurs when we select the bulk-loading option in Oracle. We also faced the same issue; our admin killed the session, and then everything was alright.
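    In plain SQL, the manual clean-up described above usually looks something like the sketch below (the table name and the sid/serial# values are placeholders, not details from this thread):
    -- Find the session still holding a lock on the load target
    SELECT s.sid, s.serial#, s.program
    FROM   v$locked_object lo
           JOIN dba_objects o ON o.object_id = lo.object_id
           JOIN v$session   s ON s.sid = lo.session_id
    WHERE  o.object_name = 'MY_TARGET_TABLE';
    -- Then kill it using the sid and serial# returned above
    ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;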

  • Dimension Table populating data

    Hi
    I am in the process of creating a data mart with a star schema.
    The star schema has been defined with the fact and dimension tables and the primary and foreign keys.
    I have written the script for one of the dimensions and would like to know: when the job runs on a daily basis, should it truncate the table every day and rebuild the dimension table, or should it only add new records to the table? If it should add only
    new records to the table, how is this done?
    I assume that the fact table job is run once a day and only new data is added to it?
    Thanks

    It will depend on the volume of your dimensions. In most of our projects we do not truncate: we update only the changed rows, detected with a fingerprint (to make the comparison faster than column by column), and insert the new rows (SCD1). For SCD2 we apply a
    similar approach for updates and inserts, and do the expirations in batch (one UPDATE for all applicable rows at the end of the package/ETL).
    If your dimension is very large, you can consider truncating all data or deleting only the affected/modified rows (based on the business key) and reloading them later, but you have to be careful to maintain the same surrogate keys referenced by your
    existing facts.
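    The SCD1 path described above can be sketched as a single MERGE (written here in Oracle syntax to match the rest of this page; every table, column, and fingerprint name is an assumption, not something from the original post):
    MERGE INTO dim_customer d
    USING stg_customer s
    ON (d.customer_bk = s.customer_bk)
    WHEN MATCHED THEN
      UPDATE SET d.name            = s.name,
                 d.city            = s.city,
                 d.row_fingerprint = s.row_fingerprint
      WHERE  d.row_fingerprint <> s.row_fingerprint   -- the fingerprint spares a column-by-column compare
    WHEN NOT MATCHED THEN
      INSERT (customer_sk, customer_bk, name, city, row_fingerprint)
      VALUES (dim_customer_seq.NEXTVAL, s.customer_bk, s.name, s.city, s.row_fingerprint);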
    HTH,
    Please, mark this post as Answer if this helps you to solve your question/problem.
    Alan Koo | "Microsoft Business Intelligence and more..."
    http://www.alankoo.com

  • Java Procedure  to load Oracle Table from flat file

    Hi,
    I am trying to load an Oracle table from data in a flat file.
    Can anybody help me out?
    I am using the following code, but it is giving an 'invalid SQL statement' error:
    try {
        // Create the statement
        Statement stmt = conn.createStatement();
        // Load the data
        String filename = "c:\\temp\\infile.txt";
        String tablename = "TEST";
        // If the file is comma-separated, use this statement
        stmt.executeUpdate("LOAD DATA INFILE '" + "c:\\temp\\infile.txt" + "' INTO TABLE "
            + "TEST" + " FIELDS TERMINATED BY ','");
        stmt.close();
        conn.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    I will appreciate your help.
    Thanks.

    I tried the following too, but I am getting the same error:
    try {
        // Create the statement
        Statement stmt = conn.createStatement();
        // Load the data
        String filename = "c:\\temp\\infile.txt";
        String tablename = "TEST";
        // If the file is comma-separated, use this statement
        stmt.executeUpdate("LOAD DATA INFILE \"" + filename + "\" INTO TABLE "
            + tablename + " FIELDS TERMINATED BY ','");
        // If the file is terminated by \r\n, use this statement
        // stmt.executeUpdate("LOAD DATA INFILE \"" + filename + "\" INTO TABLE "
        //     + tablename + " LINES TERMINATED BY '\\r\\n'");
        stmt.close();
        conn.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    Any idea?
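    For what it is worth, LOAD DATA INFILE is MySQL syntax, which is why Oracle rejects it as an invalid SQL statement. A rough Oracle-side alternative is an external table over the same file; the directory, staging table, and column names below are assumptions:
    CREATE OR REPLACE DIRECTORY load_dir AS 'c:\temp';
    CREATE TABLE test_ext (
      col1 VARCHAR2(100),
      col2 VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('infile.txt')
    );
    -- then fill the real table from it
    INSERT INTO test SELECT * FROM test_ext;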

  • Pre-loading the cache

    I'm attempting to pre-load the cache with data and have implemented controllable caches as per this document (http://wiki.tangosol.com/display/COH35UG/Sample+CacheStores). My cache stores are configured as write-behind with a 2s delay:
    <cache-config>
         <caching-scheme-mapping>
         <cache-mapping>
              <cache-name>PARTY_CACHE</cache-name>
              <scheme-name>party_cache</scheme-name>
         </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <distributed-scheme>
                <scheme-name>party_cache</scheme-name>
                <service-name>partyCacheService</service-name>
                <thread-count>5</thread-count>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                         <write-delay>2s</write-delay>
                        <internal-cache-scheme>
                            <local-scheme/>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>spring-bean:partyCacheStore</class-name>
                            </class-scheme>
                        </cachestore-scheme>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
         </caching-schemes>
    </cache-config>
    public static void enable(String storeName) {
        CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.TRUE);
    }
    public static void disable(String storeName) {
        CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.FALSE);
    }
    public static boolean isEnabled(String storeName) {
        return ((Boolean) CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).get(storeName)).booleanValue();
    }
    public void store(Object key, Object value) {
        if (isEnabled(getStoreName())) {
            throw new UnsupportedOperationException("Store method not currently supported");
        }
    }
    The problem I have is that what seems to be happening is:
    1) bulk loading process calls disable() on the cache store
    2) cache is loaded with data
    3) bulk loading process calls enable() on the cache store ready for normal operation
    4) the service thread starts to attempt to store the data as the check to see if the store is enabled returns true because we set it to true in step 3
    So is there a way of temporarily disabling the write-delay, or changing it programmatically, so step 4 doesn't happen?

    Adding
    Thread.sleep(10000);
    after loading the data seems to solve the problem, but this seems dirty. Any better solutions?

  • How to synchronize with the database after pre-loading the data

    Hi,
    I have pre-loaded the data from the database table into the cache.
    If the key is not found in the cache, I want it to connect to the database and get the value from the table. How can I achieve this?

    Hi JK,
    I have pasted my cache loader code, config file and the main class in the other post, but I am not sure what the issue with it is. It's not working. Please can you tell me what might be the issue with that piece of code? I am not getting any exception either, but the load() or loadAll() method is not getting triggered at all when invoking the cache.get() or cache.getAll() method. What might be the cause of this issue?
    Can you give me the coherence-cache-config.xml contents?
    I am not sure whether it is an issue with the config file, because I have read somewhere that a refresh-ahead factor is required to trigger the loadAll() method.

  • Loading MS Access Table and Data into Oracle

    Hi,
    I have a few tables in MS Access. I want to create the same layout of tables in Oracle and populate the data from the MS Access tables into the Oracle tables.
    Please let me know if there is a way by which I can create the tables and load the data automatically (through some option or script).
    I have Oracle 10g database and its clients.
    Thanks in advance,
    Rajeev.

    You can use the Oracle Migration Workbench:
    Loading MS Access Table and Data into Oracle
    It's very easy to use and good for imports.
    regards,
    Felipe

  • F4 help for custom table field - to be used when populating data thru SM30

    Hi,
    I have a custom table with 5 fields - say A, B, C, D and E. While populating data into the table through SM30, I need to create an F4 help for field C. A custom function module needs to be used.
    I have created a module for the same in the event PROCESS ON VALUE-REQUEST of the function group of the table.
    But the F4 for field C depends on the values put in fields A and B.
    I am not able to get the values of fields A and B from within the module PROCESS ON VALUE-REQUEST.
    Please help me to create the F4 help.

    Hi,
    This is the piece of code I have used in one of my SM30 dialogs to get the F4 help. Modify it according to your need and use it.
    Revert back for further help.
    w_dynpread-fieldname = 'ZSITEDCDATA-SITE'.
      APPEND w_dynpread TO i_dynpread.
      CALL FUNCTION 'DYNP_VALUES_READ'
        EXPORTING
          dyname               = 'SAPLZSITEDCDATA'
          dynumb               = sy-dynnr
          translate_to_upper   = 'X'
        TABLES
          dynpfields           = i_dynpread
        EXCEPTIONS
          invalid_abapworkarea = 1
          invalid_dynprofield  = 2
          invalid_dynproname   = 3
          invalid_dynpronummer = 4
          invalid_request      = 5
          no_fielddescription  = 6
          invalid_parameter    = 7
          undefind_error       = 8
          double_conversion    = 9
          stepl_not_found      = 10
          OTHERS               = 11.
      IF sy-subrc <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
      READ TABLE i_dynpread INTO w_dynpread INDEX 1.
      IF sy-subrc IS INITIAL.
        SELECT land1 FROM t001w
          INTO TABLE i_site
          WHERE werks EQ w_dynpread-fieldvalue.
        IF i_site[] IS NOT INITIAL.
          DATA: lv_line TYPE i.
          CLEAR lv_line.
          DESCRIBE TABLE i_site LINES lv_line.
          IF lv_line GT 1.
            CALL FUNCTION 'F4IF_INT_TABLE_VALUE_REQUEST'
              EXPORTING
                retfield        = 'ZSITEDCDATA-SITE_COUNTRY'
                dynpprog        = 'SAPLZSITEDCDATA'
                dynpnr          = sy-dynnr
                window_title    = 'Site Country'
                value_org       = 'S'
              TABLES
                value_tab       = i_site[]
                return_tab      = i_return
              EXCEPTIONS
                parameter_error = 1
                no_values_found = 2
                OTHERS          = 3.
            IF sy-subrc <> 0.
              MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                      WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
            ELSE.
              READ TABLE i_return INTO w_return INDEX 1.
              IF sy-subrc IS INITIAL.
                zsitedcdata-site_country = w_return-fieldval.
              ENDIF.
            ENDIF.
          ENDIF.   " IF lv_line GT 1
        ENDIF.     " IF i_site[] IS NOT INITIAL
      ENDIF.       " IF sy-subrc IS INITIAL (DYNP_VALUES_READ)
    Thanks ,
    Gaurav

  • SQL LOADER, EXTERNAL TABLE and ODBC DATA SOURCE

    hello
    Can anybody help with loading data from a dBase file (.dbt) into an Oracle 10g table?
    I tried yesterday with SQL*Loader, an external table, and an ODBC data source.
    Why are all of these utilities still failing to solve my problem?
    Is there an efficient way to reach this goal?
    Thanks in advance

    Export the dBase data file to a text file;
    then you have the choice of using either SQL*Loader or the external table option.
    regards

  • Extract Data from XML and Load into table using SQL*Loader

    Hi All,
    We have an XML file (sample.xml) which contains credit card transaction information. We have a standard SQL*Loader control file which loads the data from a flat file; the control file is written using the position-based method. Our requirement is to use this control file to load the data into the table from our XML file, but we need help either converting the XML to a flat file, or extracting the data from the XML tags and passing the information to the control file so that it in turn loads the table.
    Your suggestion is highly appreciated.
    Thanks in advance
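    For reference, one way to skip the flat-file conversion entirely is to shred the XML with XMLTABLE and insert directly; the staging table, target columns, and XPath expressions below are assumptions about the file's structure, not details from this thread:
    INSERT INTO cc_transactions (card_no, txn_amount)
    SELECT x.card_no, x.txn_amount
    FROM   xml_staging s,
           XMLTABLE('/transactions/txn'
                    PASSING s.xml_doc
                    COLUMNS card_no    VARCHAR2(20) PATH 'cardNumber',
                            txn_amount NUMBER       PATH 'amount') x;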

    Hi,
    First of all go to PSA maintenance (where you will see the PSA records).
    Go to List -> Save -> File -> Spreadsheet (choose the radio button), give the proper file name where you want to download it, and then Generate.
    You will get your PSA data in Excel format.
    Thanks
    Mayank

  • How to load one partition of a table's data into an OWB target table

    Hey ,
    I want to load only one partition of data from the source (terabytes in size) into OWB tables. I used a filter condition matching the partitioning on a key value, but it takes a lot of time to load the data into the target table.
    I feel I should configure the source table as well as the target table and give some HINT in the operator settings. Is that the right way to improve the loading performance?
    Can I use the PEL option on the source as well as the target?
    Please suggest.
    thanks
    murari
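    For reference, the single-partition, hinted load being asked about corresponds roughly to the plain SQL below (the partition name, table names, and parallel degree are assumptions):
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(tgt, 4) */ INTO target_table tgt
    SELECT /*+ PARALLEL(src, 4) */ *
    FROM   source_table PARTITION (p_2011_q1) src;
    COMMIT;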

    Hi Andreas,
    This error has not been documented yet, but I got the information from other forums.
    Error Name: LinBusErrorTxSyncTimeout
    Error Code (hex): 1040
    Description: The LIN interface master task attempted to send a sync byte and did not itself receive the sync byte within the timeout period.
    This is the description. The problem is solved: the baud rate I was setting was higher than the slave unit's baud rate. Now it seems to be functioning correctly.
    Thank you.

  • Load multiple rows of data in table

    Hi, I want to load multiple rows of data into a table from a flat file.
    I have Oracle 10g Enterprise Edition and cannot find and invoke
    sqlldr. The edit command only does one line at a time.
    Help please.

    Thank you, I got sqlldr.
    New problem:
    I have created a control file, customers.ctl:
    LOAD DATA
    INFILE 'C:\ADD_CUST.TXT'
    INTO TABLE CUSTOMERS
    (CUSTOMER_NUMBER,LAST,FIRST,STREET,CITY,STATE,ZIP_CODE,BALANCE,CREDIT_LIMIT,SLSREP_NUMBER)
    and I have the data to be loaded into the table in the ADD_CUST.txt file.
    This is one line of data; there are 9 similar lines:
    '124','ADAMS','SALLY','481 OAK','LANSING','MI','49224',818.75,1000,'03'
    Now, when I run sqlldr from the command line, below is the message I get:
    C:\>SQLLDR hr/hr control=CUSTOMERS.CTL log=customers.log
    SQL*Loader: Release 10.2.0.1.0 - Production on Wed Feb 13 22:10:19 2008
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Commit point reached - logical record count 9
    Commit point reached - logical record count 10
    C:\>
    please help
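    The log shows that SQL*Loader did read the records; one thing that stands out is that the data values are wrapped in single quotes, which the control file never strips. A control file along these lines (a sketch only, assuming the same table and file) would handle that:
    LOAD DATA
    INFILE 'C:\ADD_CUST.TXT'
    INTO TABLE CUSTOMERS
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY "'"
    (CUSTOMER_NUMBER,LAST,FIRST,STREET,CITY,STATE,ZIP_CODE,BALANCE,CREDIT_LIMIT,SLSREP_NUMBER)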

  • Data Pre Load

    Hi,
    I have two servers, Server 1 and Server 2 (I mean two different machines), and both servers are configured with a distributed cache. For example, Server 1 has a cache named "ABC" and Server 2 has a cache named "XYZ".
    I want to pre-load the same data to both servers with the different cache names, and my JOB Java class is pointing to the tangosol override file for Server 1.
    How can I load the data to Server 2 with Server 1's tangosol override file?
    I don't want to use the push replication pattern.
    Thanks,
    Raj.

    Hi Raj
    You cannot get the storage-enabled members from an Extend client, as the cache service will be a RemoteCacheService, or more likely a SafeCacheService wrapping a RemoteCacheService.
    I am not sure I follow exactly what you are trying to achieve from your posts above.
    Why do you want to get the storage-enabled members - is it so that you can make the same call invocationService.execute(task, Collections.singleton(members.toArray()[0]), null); against cluster two from cluster one?
    If you want to do this then you need two invocables: one is the LoaderInvocable class you mention above and the other looks like this:
    public class RemoteLoaderInvocable extends AbstractInvocable {
        private String cacheName;
        private String springBeanId;

        public RemoteLoaderInvocable() {
        }

        public RemoteLoaderInvocable(String cacheName, String springBeanId) {
            this.cacheName = cacheName;
            this.springBeanId = springBeanId;
        }

        @Override
        public void run() {
            NamedCache cache = CacheFactory.getCache(cacheName);
            // query() takes a Set of members, so pass the ownership-enabled member set directly
            Set<Member> memberSet = ((PartitionedService) cache.getCacheService()).getOwnershipEnabledMembers();
            LoaderInvocable task = new LoaderInvocable(cacheName, springBeanId);
            getService().query(task, memberSet);
        }
    }
    You would need to make the above class properly implement POF or whatever you are using for serialization.
    You then declare a remote invocation scheme like you have for the remote cache scheme in cluster 1
    <remote-invocation-scheme>
        <scheme-name>tier-2-proxy-invocation-scheme</scheme-name>
        <service-name>ExtendTcpProxyInvocationService</service-name>
        <initiator-config>
            <tcp-initiator>
                <remote-addresses>
                    <socket-address>
                        <address>localhost</address>
                        <port>6005</port>
                    </socket-address>
                </remote-addresses>
                <connect-timeout>20s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
                <request-timeout>20s</request-timeout>
            </outgoing-message-handler>
        </initiator-config>
    </remote-invocation-scheme>
    Then in cluster 1 you can do...
    InvocationService cluster2Service = (InvocationService) CacheFactory.getService("ExtendTcpProxyInvocationService");
    cluster2Service.query(new RemoteLoaderInvocable(cacheName, springBeanId), null);
    That will allow you to execute your LoaderInvocable from cluster 1 against storage-enabled members in cluster 2, but as I said, I don't see what you are really trying to do.
    JK

  • Populating a load-status table from SQL*Loader

    Hello,
    I am using SQL*Loader to load a table, and once I'm done with this load I am supposed to populate a status table which will capture the SYSDATE and the total number of rows I loaded into the other table.
    Can anybody help me?
    Thanks

    BTW, the load-status table would take the error-record count as well as the load count?
    Sorry, missed that earlier!
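    A simple way to capture these after sqlldr finishes is a small follow-up SQL step; the status-table layout below is only a guess, and the error-record count would have to come from the sqlldr log or the .bad file, since it is not visible from inside the database:
    -- run after the sqlldr step completes (table and column names are assumptions)
    INSERT INTO load_status (load_date, table_name, loaded_count)
    SELECT SYSDATE, 'TARGET_TABLE', COUNT(*) FROM target_table;
    COMMIT;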

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (which is stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory then performance can fall sharply).
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size) and, by applying the technique from the white paper, I can pin the DR$I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD') the size becomes much bigger and I can't pin the index in memory. Not sure if it is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it will be loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (S table) and an IOT on top of it. This is not documented in the white paper (the white paper was written for Oracle 10g). In my case this DR$ S table is much used, and the IOT also, but putting it in the keep cache is not as important as the token_info column of the DR I table. A final note: doing SEPARATE_OFFSETS = 'YES' was very bad in my case, the combined size of the two columns is much bigger than having only the TOKEN_INFO column and both columns are read.
    Here is an example of how to reproduce the problem with the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
    XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to the following:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);   -- buffer must be RAW when reading a BLOB with dbms_lob.read
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on that issue recently, I can give some more info.
    First I totally agree with you, I don't like to use the keep_pool and I would love to avoid it. On the other hand, we have a specific use case : 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer facing application that uses the text index to search the database : performance is critical for them.
    What kind of performance do you have with your application ?
    In my case, I have learned the hard way that having the index in memory (the DR$I table in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. With MongoDB they explicitly say that the index must be in memory. With Elasticsearch, they use JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is continuously done.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" functionality, the in-memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the guys that developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    And for that the track I have been following is to put the index in a 16K tablespace : in this case the space used by the index remains more or less flat (increases but much more reasonably). The difficulty here is to pin the index in memory because the trick of R. Ford was not working anymore.
    What worked:
    First set the keep_pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16k.
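    What that storage change might look like, reusing the TEST_STO attributes shown earlier in the thread (the tablespace name and sizing are assumptions):
    CREATE TABLESPACE ts_ctx_16k DATAFILE 'ctx16k01.dbf' SIZE 1G BLOCKSIZE 16K;
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace TS_CTX_16K storage (initial 64K)');
    exec ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace TS_CTX_16K storage (initial 64K) compress 2');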
    Then comes the tricky part: the pre-loading of the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA!!! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which, sorry for the credit).
    I ended up doing the following. The event 10949 is to avoid the direct path read issue:
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is very much exactly described in MetaLink note 1645634.1, but in the case of a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (this enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a workaround, but I did not find it on MetaLink.
    Other points of attention with the text index creation (stuff that surprised me at first!):
    - if you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - this, in combination with the fact that if you are on a RAC you won't see any activity on the box, can be very frightening: it is because Oracle can choose to start the workers on the other node.
    I understand much better how the text indexing works. I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre
