RBO + history data index/no index, table(cast())

Hi guys,
I have a couple of questions and maybe you can help me with some advice.
I have the misfortune to work with a 10g system that runs on the RBO.
1. I have a lot of tables containing activity per id.
For example:
I have a table like moved_history.
Let's say I have 100k ids in the system, and each month some ids are moved (let's say 20k rows; an id can be moved multiple times).
So each month we add an extra 20k rows to this table. Most of my queries access this type of table with a predicate like entry_date between sysdate - 2 months and sysdate (or 1.5 months, or 1 month).
I have a lot of tables like this: change_history, pay_history, etc.
The question: is it OK to create an index on entry_date? At the beginning it will not be useful, because the query would visit the entire table through the index, but after a lot of months the index will start to be usable?! (I am using the RBO, so Oracle will use it no matter what.)
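To be concrete, this is roughly what I have in mind (a sketch using the example names from above):
    create index moved_history_ed_i on moved_history (entry_date);
    select *
      from moved_history
     where entry_date between sysdate - 60 and sysdate;  -- last ~2 months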
2. (second question)
Let's say I have a table called dummy: ~100k records.
A part of it, ~80k rows, is used multiple times in selects in a PL/SQL process.
Initially I select with bulk collect and put those 80k (or in other cases 10k or 1k) rows into nested tables.
After that I use the nested table in SQL instead of the real table (with table(cast(...))), as in the sketch below.
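A minimal sketch of the pattern I mean (the type, table and column names are made up; the collection type has to be a SQL-level type for table(cast(...)) to work):
    create type num_tab_t as table of number;
    /
    declare
      l_ids num_tab_t;
    begin
      -- load the working set once
      select id bulk collect into l_ids
        from dummy
       where status = 'ACTIVE';   -- ~80k rows
      -- later, reuse the collection in SQL instead of re-reading dummy
      for r in (select t.column_value as id
                  from table(cast(l_ids as num_tab_t)) t)
      loop
        null; -- process each id
      end loop;
    end;
    /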
Is this a correct approach?
What are the advantages and the disadvantages?
When using the nested table I don't have any index on it (although I want to access all rows from that nested table, so a full scan is best anyway), no statistics on it (in case at some point I move to the CBO :) ), and I use memory from the PGA.
When using the table directly, Oracle knows the data, I am using the SGA, and maybe the data from the table is cached, so the access speed might be the same as with the nested tables.
Please correct me if I am wrong.
How can I test/benchmark this? What should I look at? The crudest thing I could come up with is the timing sketch below.
There is another option: the solution depends on how I will join, access and select from the tables? And again, how can I test this in a good way?
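A timing harness sketch (it reuses the num_tab_t type and the dummy table from the sketch above; dbms_utility.get_time returns centiseconds). Besides raw timings, I suppose I should also compare consistent gets / physical reads with autotrace or sql_trace:
    declare
      l_ids num_tab_t;
      t0    pls_integer;
      n     pls_integer := 0;
    begin
      select id bulk collect into l_ids from dummy;
      t0 := dbms_utility.get_time;
      for r in (select id from dummy) loop n := n + 1; end loop;
      dbms_output.put_line('real table:   ' || (dbms_utility.get_time - t0) || ' cs');
      t0 := dbms_utility.get_time;
      for r in (select column_value from table(cast(l_ids as num_tab_t))) loop n := n + 1; end loop;
      dbms_output.put_line('nested table: ' || (dbms_utility.get_time - t0) || ' cs');
    end;
    /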
thanks guys for your time

Well, it is a bit hard to say. Someone has decided to go with a desupported optimizer:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/glossary.htm#sthref1601
>
Note:
This feature has been desupported.
>
And since the optimizer is responsible for the efficient execution of queries
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/sqlplsql.htm#sthref3528
>
All SQL statements use the optimizer, a part of Oracle that determines the most efficient means of accessing the specified data.
>
I can only deduce that they do not care about optimization or query efficiency and that unfortunately your hands are pretty much tied.

Similar Messages

  • Clear the History Data Automatically in Production Planning Table MF50

    Dear Friends,
I have a problem: I need to set the Production Planning Table (Txn MF50) to
clear the history data automatically. Currently, all the history data
in the planning table has to be cleared manually.
    Appreciate your help.
    Rgds
    Zahari Yusof


  • Dividing payment history data (KNB4) under sales area of a customer..

    Hello Everybody,
I need to divide the payment history data available in the KNB4 table according to the divisions which are available under one customer. But the payment history data is updated automatically when we post payments (if the "payment history record" checkbox is checked in the company code details of a customer).
Is it possible to customize this filtering?
Please answer this question as soon as possible.
              Thanks in Advance..
    Regards
    Swathi

Hi Swathi,
A custom report needs to be developed to achieve this.
For a particular customer, you need to pick up the billing documents for the month, and the divisions need to be considered at billing item level.
Regs,
Sai

  • Goldengate Extracts reads slow during Table Data Archiving and Index Rebuilding Operations.

We have configured OGG on a near-DR server. The extracts are configured to work in ALO mode.
During the day, the extracts work as expected and stay in sync. But during any daily maintenance task, the extracts start lagging and read the same archives very slowly.
    This usually happens during Table Data Archiving (DELETE from prod tables, INSERT into history tables) and during Index Rebuilding on those tables.
    Points to be noted:
    1) The Tables on which Archiving is done and whose Indexes are rebuilt are not captured by GoldenGate Extract.
2) The extracts are configured to capture DML operations. Only INSERT and UPDATE operations are captured; DELETEs are ignored by the extracts. DDL extraction is not configured either.
    3) There is no connection to PROD or DR Database
    4) System functions normally all the time, but just during table data archiving and index rebuild it starts lagging.
Q 1. As mentioned above, even though the tables are not part of the capture, the extracts lag? What are the possible reasons for the lag?
Q 2. I understand that an index rebuild is a DDL operation, yet it still induces a lag into the system. How?
Q 3. We have been trying to find a way to overcome the lag, which ideally shouldn't have arisen. Is there any extract parameter or some workaround for this situation?

    Hi Nick.W,
The amount of redo generated is huge: approximately 200-250 GB in 45-60 minutes.
I agree that the extract has to parse the extra object ids. During the day there is a redo switch every 2-3 minutes. The source is a 3-node RAC, so approximately 80-90 archives are generated in an hour.
The reason for mentioning this is that while reading these archives the extract would also be parsing extra object ids, as we are capturing data for only 3 tables. The effect of parsing extra object ids should have been seen during the day as well: the archive size is the same, the amount of data is the same, and the number of records to be scanned is the same.
The extract slows down and reads at half the speed. If it normally takes 45-50 secs to read an archive log from normal day-to-day functioning, it takes approx 90-100 secs to read the archives from the mentioned activities.
    Regarding the 3rd point,
    a. The extract is a classic extract, the archived logs are on local file system. No ASM, NO SAN/NAS.
b. We have added the "TRANLOGOPTIONS BUFSIZE" parameter to our extract. We'll update as soon as we see any kind of improvement.

  • Multiple language data in the same table and indexes

    We have 8.1.5 database with
NLS_LANGUAGE=ENGLISH and NLS_TERRITORY=UNITED KINGDOM set at the database level. We also set NLS_LANG appropriately when we index the data, and we use the preference setting BASE_LETTER=YES when indexing.
Our tables contain book titles, which is what we index using interMedia. Being an online book store, this table may contain titles in English (most of the titles are in English, being UK) and some titles in French. Please note the French language has accents and diacritics. Normally, in a French database with the NLS parameters set correctly and BASE_LETTER=YES, if we search for a title with an accent / diacritic it will return the record correctly and match the rows with or without the accents (e.g. searching for video will return both vidéo (note the accent on e) and video). However, with the UK settings the accents are not ignored during indexing and it only returns the records that exactly match the search word (e.g. searching for video returns only video and searching for vidéo returns only vidéo).
Question: Is there any way to appropriately index language-specific titles in multiple languages in the same database? If yes, how? Is it possible at all? Suggestions and ideas are welcome.

Similar request: if there are several different languages used in a database, but a search word can be typed in by anyone and not necessarily in the default language, I still want to be able to pick up all the records (including text in blobs) in any language where my search word = the equivalent word in the other languages; e.g. searching on "tree", the search would bring back a document that contained "l'arbre". I then want to set the default language, and allow the user to select a "search in these languages" option to expand the search from the default "tree" to all languages in the database.
I know about the MULTI_LEXER but does it extend functionality to this level?
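For the original question (accent-insensitive indexing of mixed English/French titles), the usual route is a MULTI_LEXER with a per-row language column, if your Text version supports it. A hedged sketch (the preference, table and column names are made up):
    begin
      ctx_ddl.create_preference('english_lexer', 'BASIC_LEXER');
      ctx_ddl.set_attribute('english_lexer', 'BASE_LETTER', 'YES');
      ctx_ddl.create_preference('french_lexer', 'BASIC_LEXER');
      ctx_ddl.set_attribute('french_lexer', 'BASE_LETTER', 'YES');
      ctx_ddl.create_preference('global_lexer', 'MULTI_LEXER');
      ctx_ddl.add_sub_lexer('global_lexer', 'default', 'english_lexer');
      ctx_ddl.add_sub_lexer('global_lexer', 'french', 'french_lexer');
    end;
    /
    create index titles_ctx on titles (title)
      indextype is ctxsys.context
      parameters ('lexer global_lexer language column lang');
Cross-language synonym search ("tree" finding "l'arbre") is a different problem; the MULTI_LEXER only picks the right lexer per row, it does not translate the query term.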

  • How to make use of Index of a table in report to fetch data?

    Hi,
I need sample code for a select statement which makes use of an INDEX of a table
to fetch data.
Doubt:
Can I fetch all the fields in the table by passing certain key fields of the INDEX in the where condition?

Hi Raja,
1) Mention only the fields that you need from the database table (in case you don't need all the fields).
2) Don't use the INTO CORRESPONDING FIELDS OF TABLE ztable clause.
3) Instead use INTO TABLE ztable (but take care that the fields of ztable are declared in the same order as in the database table, so that the records are fetched in sequence).
Please find the syntax and code below.
SELECT * FROM <TABLE>
  WHERE <WHERE>
    %_HINTS ORACLE 'INDEX("<TABLE>~<INDEX ID>")'.
SELECT carrid
INTO TABLE t_spfli
FROM spfli
WHERE carrid IN s_carrid AND
connid IN s_connid
%_HINTS ORACLE 'INDEX("&SPFLI&" "SPFLI~XXX")'.
Hope this is helpful.
    Thanks
    kalyan

  • Why should we create index on  the table after inserting data ?

Please tell me the reason why we should create an index on a table after inserting data,
when we could also create the index on the table before inserting the data.

The choice depends on a number of factors, the main one being how many rows are going to be inserted into the table as a percentage of the existing rows, or the percentage growth.
Creating the index after the table has been populated works better when the tables are large or the inserts are large, for the following reasons (see the sketch after this list):
1. The sort and creation of the index is more efficient when done in batch and written in bulk, so it works faster.
2. While an index is being written, blocks are acquired as more data gets written. So when a large number of rows are inserted into a table that already has an index, the index blocks start splitting / chaining. This increases the "depth" of the b-tree and makes the index less efficient on I/O. Creating the index after the data has been inserted allows Oracle to create an optimal block distribution and reduce splitting / chaining.
3. If an index exists, then it too is routed through the undo / redo processes. That's an overhead which is avoided when you create the index after populating the table.
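A minimal sketch of the two orderings (the table, column and index names are made up):
    -- index first, then load: the index is maintained row by row during the insert
    create table t1 (col1 number, col2 varchar2(30));
    create index t1_col1_i on t1 (col1);
    insert into t1 select object_id, object_name from all_objects;
    -- load first, then index: one sorted bulk build, usually faster for large loads
    create table t2 (col1 number, col2 varchar2(30));
    insert into t2 select object_id, object_name from all_objects;
    create index t2_col1_i on t2 (col1);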
    Regards

  • Importing data tables into data tablespace and indexes into tablespaces

Hi
I want to import data into a new schema, and I want to store the tables in a data tablespace and the indexes in an index tablespace. Can anyone tell me how this is possible?

imp userid=user/passwd show=y indexfile=import.sql indexes=n full=y
imp userid=user/passwd show=y indexfile=import2.sql full=y
Edit import.sql and import2.sql to modify the tables' tablespace and the indexes' tablespace.
Execute the import.sql script in the database; this will create the tables in their respective tablespaces.
imp userid=user/passwd full=y ignore=y indexes=n constraints=y - to import just the data, since the tables have already been created.
imp userid=user/passwd full=y ignore=y rows=n - to import just the indexes, since the tables and data have already been imported.

  • Data Pump -Importing one table index

Is it possible to import just one table's index (for any table, e.g. emp)? If it can be done, what should the parameters look like?
    Thanks and Regards
    harris

    I can't think of anything that would prevent this from working. You just need to make sure that the large table does not have any ref constraints, or other associations with the other tables that may get screwed up while the other users are using the database.
    Dean
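If this is Data Pump (as the title says), one hedged sketch is a table-mode import filtered down to index objects; the directory, dumpfile and names below are made up, but INCLUDE is a documented impdp parameter:
impdp user/passwd directory=dp_dir dumpfile=exp.dmp tables=EMP include=INDEX
To name a single index, an INCLUDE name filter such as include=INDEX:"IN ('EMP_IDX')" can be used; putting it in a parfile avoids the shell quoting problems.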

  • Can I select the data of the indexes?

    Hi everybody,
Oracle stores indexes in index-type segments, as far as I know.
Is it possible to see the data of these index segments like a table or view?
So I would like to see not only the structure of it with:
select * from dba_segments sg where sg.segment_name = 'index_name'
    I think something like this:
    For example if I have a table named Table1.
(please don't mind the format of the rowid):
    Rowid....field1
    1_1.......AA
    1_2.......AB
    1_3.......BB
    and I have an index on field1 named ndx_t1_f1.
    Create index ndx_t1_f1 on Table1(field1);
    I would like to see something like this
    (if it is a simple Btree index like the ndx_t1_f1):
    Select * from some_schema.???ndx_t1_f1???
    Rowid..........ParentRowid..........ValuePrefix..........SolutionRowid
    2_1
    2_11............2_1.......................A
    2_12............2_1.......................B
    2_111..........2_11.....................AA......................1_1
    2_112..........2_11.....................AB......................1_2
    2_121..........2_12.....................BB......................1_3
    And if i run:
    select field1 from table1 where filed1 like 'B%'
    oracle can use ndx_t1_f1 index like this:
    2_1 -> 2_12 -> 2_121 -> BB
Yes, I know there are some problems with this picture, but is it possible, or is it a silly question?
    Thanx: lados.

Hi,
You asked: "Why do you want to do that?"
Well, I have a table T with fields F1, F2, F3 ...
If I run:
select F2, T.* from T where F1=12345;
the result is one record and the value of F2 is 'A'.
But if I run:
select F2, F1 from T where F1=12345;
the result is one record and the value of F2 is 'B'.
And this is the same record. Oppps!!!!!
I think the cause of the problem is this:
I have an index on F1, F2 named ndx_T_F1_F2.
In the second case, Oracle uses only the index blocks and no table blocks, because all the requested data exists in the index blocks.
In the first case, Oracle uses the table blocks too, because I asked for all the fields.
And I think there is corrupted data in the index.
I created a new index on (F1, F2, F3), and I suggest queries use this one:
select /*+index(T, ndx_T_F1_F2_F3)*/ T.F2 from T where F1=12345;
the result is one record and the value of F2 is 'A'.
But what is strange is that Oracle didn't notice this inconsistent situation.
(It could be an earlier crash, I don't know, I'm new at this workplace.)
I think the problem can be solved by recreating the index ndx_T_F1_F2.
But it's a very special situation for me, and I would like to see the corrupted data concretely, with my own eyes.
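In the meantime, at least I can check whether the index is really corrupt (a sketch; ANALYZE ... VALIDATE STRUCTURE verifies the index and populates the INDEX_STATS view):
analyze index ndx_T_F1_F2 validate structure;
select height, lf_rows, del_lf_rows, distinct_keys from index_stats;
And I have read that the treedump event writes the b-tree layout to a trace file, which is the closest thing I know to "seeing" the index (<object_id> is the index's object_id from dba_objects):
alter session set events 'immediate trace name treedump level <object_id>';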
I think that
Select * from some_schema.???ndx_t1_f1???
would look to me like a view, but it would be based on procedures and not tables.
It would be enough for me, because I only want to see the data in the index.
But it is impossible, as you wrote, isn't it?
    Thanx: lados.

  • Loop using index read ( internal table)

    Hi,
I thought of implementing a loop over an internal table using READ statements. Here it goes:
REPORT Z_LOOP_IMPROVE.
DATA : T_OUTPUT TYPE STANDARD TABLE OF MARC WITH HEADER LINE,
       L_TABIX TYPE SY-TABIX.
DEFINE ILOOP.
  DO.
    IF SY-INDEX EQ '1'.
      READ TABLE &1 WITH KEY &2 = &3 BINARY SEARCH.
      IF SY-SUBRC <> 0.
        EXIT.
      ENDIF.
    ELSE.
      L_TABIX = SY-TABIX + 1.
      READ TABLE &1 INDEX L_TABIX.
      IF SY-SUBRC NE 0 OR &1-&2 NE &3.
        EXIT.
      ENDIF.
    ENDIF.
END-OF-DEFINITION.
DEFINE IENDLOOP.
  ENDDO.
END-OF-DEFINITION.
DATA : T1 TYPE I, T2 TYPE I.
SELECT * FROM MARC INTO TABLE T_OUTPUT.
SORT T_OUTPUT BY WERKS.
GET RUN TIME FIELD T1.
ILOOP T_OUTPUT WERKS '0001'.
  WRITE : / T_OUTPUT-MATNR, T_OUTPUT-WERKS.
IENDLOOP.
GET RUN TIME FIELD T2.
T2 = T2 - T1.
WRITE : / T2.
    *C----
But sadly it takes more time than a normal LOOP with a
WHERE condition.
Can anyone suggest some ideas to improve the execution speed?

You can use a binary READ to get the first record, then read each record sequentially until you have processed all the records that meet your criteria. The internal table must be sorted so that the binary search and the subsequent reads work:
      READ TABLE itab WITH KEY
        field = whatever
        BINARY SEARCH.
      IF sy-subrc = 0.
        itab_index = sy-tabix.
        DO.
          IF sy-subrc = 0.
            IF itab-field = whatever.
              " process the current row here, then step to the next one
              itab_index = itab_index + 1.
              READ TABLE itab INDEX itab_index.
            ELSE.
              EXIT.
            ENDIF.
          ELSE.
            EXIT.
          ENDIF.
        ENDDO.
      ENDIF.
    Rob

  • Regarding Secondary Index in a Table

    hi
If I create a secondary index on a table, is it obligatory or optional to have the first field as MANDT (the client field) if the table is client-dependent? And how many secondary indexes (maximum) can be created for a table?
    Regards

    Hi,
    Check the below Link
    How to transport a secondary index on P master data table?
    Hope this helps you.
    Regards,
    Anki Reddy

  • Local index vs global index in partitioned tables

    Hi,
    I want to know the differences between a global and a local index.
I'm working with partitioned tables of about 10 million rows and 40 partitions.
I know that when your table is partitioned and your index is non-partitioned, it is possible that
some database operations make your index unusable and you have to rebuild it; for example,
when you truncate a partition your global index becomes unusable. Is there any other operation
that makes a global index unusable?
I think that the advantage of a global index is that it takes less space than a local one and is easier to rebuild,
and the advantage of a local index is that it is more effective at resolving a query, isn't it?
Any advice and help about local vs global indexes in partitioned tables will be greatly appreciated.
    Thanks in advance

    here is the documentation -> http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm#sthref2570
    In general, you should use global indexes for OLTP applications and local indexes for data warehousing or DSS applications. Also, whenever possible, you should try to use local indexes because they are easier to manage. When deciding what kind of partitioned index to use, you should consider the following guidelines in order:
    1. If the table partitioning column is a subset of the index keys, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 2.
    2. If the index is unique, use a global index. If this is the case, you are finished. If this is not the case, continue to guideline 3.
    3. If your priority is manageability, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 4.
    4. If the application is an OLTP one and users need quick response times, use a global index. If the application is a DSS one and users are more interested in throughput, use a local index.
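To make the difference concrete, a minimal sketch (the table, column and partition names are made up):
    -- local: one index partition per table partition; partition maintenance
    -- (drop / truncate partition) keeps it usable automatically
    create index sales_cust_li on sales (customer_id) local;
    -- global (here non-partitioned): one structure over all partitions;
    -- partition maintenance marks it UNUSABLE unless you ask Oracle to maintain it
    create index sales_code_gi on sales (order_code);
    alter table sales truncate partition p_2006_q1 update global indexes;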
    Kind regards,
    Tonguç

  • How to shift index in a table

    Hey,
My program collects 5 pieces of information per object from the user. Some of the input data is in String format, the rest are Integers. I create an object from this data and put it into a table. My table contains a maximum of 10 entries.
I have editing options for this data, which are: erase one or all entries, or modify an entry. I erase data by simply setting the index to null. Erasing one entry is incomplete atm, here's why:
Say I have 5 entries in the table, and I erase the one with index 0. I can't print my information because the table starts with a null value:
int index = 0;
if (table[index] != null) { System.out.println(data); } ..
My guess is that I can solve this problem by decreasing the index of entries greater than the modified one by 1. I imagine I'll be using a for loop here. This is where I run out of ideas: is it possible to just modify the index, or do I have to rewrite the data again?
    Any help is appreciated,
    br,
    nomi

> I think using for loop would be the easiest way (?).
> I'm filling the table as simply as:
> items[index] = new Table(value, quality, ...);
> Using LinkedList seems a bit far fetched for me. But again, if I am to use for loop, I think I would have to rewrite the data to appropriate index?
Why is a List far fetched? The type of List you pick depends on how you use the list (whether you want RandomAccess, etc.).
List items = new ArrayList(); // or LinkedList
items.remove(index); // Removes the item at index and shifts the rest left.
To use an array:
Table[] items = new Table[20]; // whatever size
items[index] = new Table(value, quality, ...);
// To remove, shift everything after index one slot to the left:
System.arraycopy(items, index + 1, items, index, items.length - (index + 1));
items[items.length - 1] = null;

  • How to get selected  row index  of a Table ?

hi gurus, I'm new to Web Dynpro for ABAP.
I'm displaying just flight details in a table, so
how do I get the selected row index of the table and display it in the message manager?

    Hi,
    For getting the row index use the following code.
    DATA lo_nd_node TYPE REF TO if_wd_context_node.
      DATA lo_el_node TYPE REF TO if_wd_context_element.
      DATA index TYPE i.
    * navigate from <CONTEXT> to <NODE> via lead selection
      lo_nd_node = wd_context->get_child_node( name = wd_this->wdctx_node ).
      lo_el_node = lo_nd_node->get_lead_selection(  ).
      index = lo_el_node->get_index( ).
node is the name of the node which is bound to the table.
For printing the message you can use the code wizard:
press Ctrl+F7, then select "Generate message".
In this, select the method REPORT_SUCCESS.
In the code you can now pass index to the message text EXPORTING parameter; comment out the RECEIVING parameter.
Write the whole code in the onLeadSelect event handler of the table.
    Regards,
    Pankaj Aggarwal
