Subpartitioning on index organized table in Oracle 10g

We have an index-organized table with a primary key on 3 columns, one of which is a DATE column. We need to partition the table on the DATE column by month and subpartition it further by week.
Please let me know if it is possible and if yes, how?

Composite Partitioning
Composite partitioning partitions data using the range method, and within each
partition, subpartitions it using the hash method. Composite partitions are ideal
for both historical data and striping. Composite Partitioning also provides
improved manageability of range partitioning and data placement, as well as the
parallelism advantages of hash partitioning.
When creating composite partitions, you specify the following:
- Partitioning method: range
- Partitioning column(s)
- Partition descriptions identifying partition bounds
- Subpartitioning method: hash
- Subpartitioning column(s)
- Number of subpartitions per partition, or descriptions of the subpartitions
Example:
CREATE TABLE scubagear (equipno NUMBER, equipname VARCHAR2(32), price NUMBER)
  PARTITION BY RANGE (equipno) SUBPARTITION BY HASH (equipname)
  SUBPARTITIONS 8 STORE IN (ts1, ts3, ts5, ts7)
  (PARTITION p1 VALUES LESS THAN (1000),
   PARTITION p2 VALUES LESS THAN (2000),
   PARTITION p3 VALUES LESS THAN (MAXVALUE));
Each subpartition of a composite-partitioned table is stored in its own segment.
The partitions of a composite-partitioned table are logical structures only as
their data is stored in the segments of their subpartitions. As with partitions,
these subpartitions share the same logical attributes. Unlike range partitions
in a range partitioned table, the subpartitions cannot have different physical
attributes from the owning partition except for the Tablespace attribute.
Composite Partition Indexes
The composite partitioning method supports both local and global indexes.
The following statement creates a local index on the EMP table where the index
segments will be spread across tablespaces TS7, TS8, and TS9.
CREATE INDEX emp_ix ON emp(deptno)
LOCAL STORE IN (ts7, ts8, ts9);
This local index will be equipartitioned with the base table as follows:
It will consist of as many partitions as the base partitioned table.
Each index partition will consist of as many subpartitions as the corresponding
base table partition.
Index entries for rows in a given subpartition of the base table will be stored
in the corresponding subpartition of the index.
Refer to MetaLink Note 125314.1.
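The excerpt above describes composite (range-hash) partitioning in general. As a point of reference for the original question, a plain monthly range-partitioned IOT on the DATE column is straightforward; whether weekly subpartitioning is available for index-organized tables on your exact 10g release is something to verify against the documentation. This is only a minimal sketch with invented table and column names (the partitioning column must be part of the primary key):

CREATE TABLE sales_iot (
  sale_date   DATE        NOT NULL,
  customer_id NUMBER(10)  NOT NULL,
  item_id     NUMBER(10)  NOT NULL,
  qty         NUMBER(6),
  CONSTRAINT pk_sales_iot PRIMARY KEY (sale_date, customer_id, item_id)
)
ORGANIZATION INDEX
PARTITION BY RANGE (sale_date)
( PARTITION p_2008_01 VALUES LESS THAN (TO_DATE('2008-02-01', 'YYYY-MM-DD')),
  PARTITION p_2008_02 VALUES LESS THAN (TO_DATE('2008-03-01', 'YYYY-MM-DD')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);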

Similar Messages

  • Finding Parent Table of Index Organized Table

    I am using Oracle 10g Release 2 and faced an error message while granting select privileges: ORA-25191: cannot reference overflow table of an index-organized table.
    I searched for a solution and found: issue the statement against the parent index-organized table containing the specified overflow table.
    The question is: how can I find the parent index-organized table containing the specified overflow table?
    Further, I faced another error while granting select privileges: ORA-22812: cannot reference nested table column's storage table.
    The solution is the same. I found the parent table, first granted select privileges on that parent table, and then tried to grant select privileges on the required table to public, but it did not work.
    If somebody could help me!

    And just what does this problem have to do with either the SQL and/or PL/SQL languages?
    Try the Community Discussion Forums » Database » Database - General forum: http://forums.oracle.com/forums/forum.jspa?forumID=61&start=0
    Also suggest that you read up on just what an IOT is, and where and when overflows apply. See the Oracle® Database Concepts guide: http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref1060
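    As a hedged pointer for the actual lookup: the *_TABLES dictionary views expose IOT_TYPE and IOT_NAME columns, so the parent IOT of an overflow segment can usually be found with a query along these lines (the overflow table name below is just an illustration, substitute the one from your error):

    SELECT owner, table_name, iot_type, iot_name
    FROM   dba_tables
    WHERE  table_name = 'SYS_IOT_OVER_12345';  -- placeholder overflow table name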

  • Creation of context index on index-organized table

    I encountered a problem when creating a domain index (an interMedia Text context index) on an index-organized table in Oracle 8i.
    The description of the error is stated below:
    "ORA-29866: cannot create domain index on a column of index-organized table"
    I have configured interMedia Text properly and it even worked for tables which are not index-organized (ordinary tables).
    This problem occurred only when I made the tables index-organized.
    Please provide us a solution to this problem as early as possible.
    In case you require any more details, I shall provide them.

    Please ask questions about Oracle Text (formerly interMedia text) in the Oracle Text forum. You will get a quicker, more expert answer there.

  • Creation of context index on index-organized tables

    (Same question as the previous thread.)

    Creation of domain indexes (such as context indexes) on IOTs
    is not currently supported in Oracle.

  • Index Organized Tables

    What is a logical rowid in an IOT? Are they stored somewhere physically, just like physical rowids?
    What are secondary indexes?
    What is meant by leaf block splits? When and how do they happen?
    Also, the primary key constraint for an index-organized table cannot be dropped, deferred, or disabled. Is that true? If yes, then why?
    How does overflow work? How are the two clauses PCTTHRESHOLD and INCLUDING implemented, and how do they work?
    Edited by: Juhi on Oct 22, 2008 1:09 PM

    I'm sort-of tempted to just point you in the direction of the official documentation (the concepts guide would be a start. See http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#sthref759)
    But I would say one or two other things.
    First, physical rowids are not stored physically. I don't know why you'd think they were. The ROWID data type can certainly be used to store a rowid if you choose to do so, but if you do something like 'select rowid from scott.emp', for example, you'll see rowids that are generated on-the-fly. ROWID is a pseudo-column, not physically stored anywhere, but computed whenever needed.
    The difference between a physical rowid and a logical one used with IOTs comes down to a bit of relational database theory. It is a cast-iron rule of relational databases that a row, once inserted into a table, must never move. That is, the rowid it is assigned at the moment of its first insertion, must be the rowid it 'holds onto' for ever and ever. If you ever want to change the rowids assigned to rows in an ordinary table, you have to export them, truncate the table and then re-insert them: fresh insert, fresh rowid. (Oracle bends this rule for various maintenance and management purposes, whereby 'enable row movement' permits rows to move within a table, but the general case still applies mostly).
    That rule is obviously hopeless for index structures. Were it true, an index entry for 'Bob' who gets updated to 'Robert' would find itself next to entries for 'Adam' and 'Charlie', even though it now has an 'R' value. Effectively, a 'b' "row" in an index must be allowed to "move" to an 'r' sort of block if that's the sort of update that takes place. (In practice, an update to an index entry consists of performing a delete followed by a re-insert, but the physicalities don't change the principle: "rows" in an index must be allowed to move if their value changes; rows in a table don't move, whatever happens to their values)
    An IOT is, at the end of the day, simply an index with a lot more columns in it than a "normal" index would have -so it, too, has to allow its entries (its 'rows', if you like) to move. Therefore, an IOT cannot use a standard ROWID, which is assigned once and forever. Instead, it has to use something which takes account of the fact that its rows might wander. That is the logical rowid. It's no more "physical" than a physical rowid -neither are physically stored anywhere. But a 'physical' rowid is invariant; a logical one is not. The logical one is actually constructed in part from the primary key of the IOT -and that's the main reason why you cannot ever get rid of the primary key constraint on the IOT. Being allowed to do so would equate to allowing you to destroy the one organising principle for its contents that an IOT possesses.
    (See the section entitled "The ROWID Pseudocolumn" and following on this page: http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1845)
    So IOTs have their data stored in them in primary key order. But they don't just contain the primary key, but every other column in the 'table definition' too. Therefore, just like with an ordinary table, you might want sometimes to search for data on columns which are NOT part of the primary key -and in that case, you might well want these non-primary key columns to be indexed. Therefore, you will create ordinary indexes on these columns -at this point, you're creating an index on an index, really, but that's a side issue, too! These extra indexes are called 'secondary indexes', simply because they are 'subsidiary indexes' to the main one, which is the "table" itself arranged in primary key order.
    Finally, a leaf block split is simply what happens when you have to make room for new data in an index block which is already packed to the rafters with existing data. Imagine an index block can only contain four entries, for example. You fill it with entries for Adam, Bob, Charlie, David. You now insert a new record for 'Brian'. If this was a table, you could throw Brian into any new block you like: data in a table has no positional significance. But entries in an index MUST have positional significance: you can't just throw Brian in amongst the middle of a lot of Roberts, Susans and Tanyas. Brian HAS to go in between the existing entries for Bob and Charlie. Yet you can't just put him in the middle of those two, because then you'd have five entries in a block, not four, which we imagined for the moment to be the maximum allowed. So what to do? What you do is: obtain a new, empty block. Move Charlie and David's entries into the new block. Now you have two blocks: Adam-Bob and Charlie-David. Each only has two entries, so each has two 'spaces' to accept new entries. Now you have room to add in the entry for Brian... and so you end up with Adam-Bob-Brian and Charlie-David.
    The process of moving some index entries out of one block into a new one so that there's room to allow new entries to be inserted in the middle of existing ones is called a block split. They happen for other reasons, too, so this is just a gloss treatment of them, but they give you the basic idea. It's because of block splits that indexes (and hence IOTs) see their "rows" move: Charlie and David started in one block and ended up in a completely different block because of a new (and completely unrelated to them) insert.
    Very finally, overflow is simply a way of splitting off data into a separate table segment that wouldn't sensibly be stored in the main IOT segment itself. Suppose you create an IOT containing four columns: one, a numeric sequence number; two, a varchar2(10); three, a varchar2(15); and four, a blob. Column 1 is the primary key.
    The first three columns are small and relatively compact. The fourth column is a blob data type -so it could be storing entire DVD movies, multi-gigabyte-sized monsters. Do you really want your index segment (for that is what an IOT really is) to balloon to huge sizes every time you add a new row? Probably not. You probably want columns 1 to 3 stored in the IOT, but column 4 can be bumped off over to some segment on its own (the overflow segment, in fact), and a link (actually, a physical rowid pointer) can link from the one to the other. Left to its own devices, an IOT will chop off every column after the primary key one when a record which threatens to consume more than 50% of a block gets inserted. However, to keep the main IOT small and compact and yet still contain non-primary key data, you can alter these default settings. INCLUDING, for example, allows you to specify which last non-primary key column should be the point at which a record is divided between 'keep in IOT' and 'move out to overflow segment'. You might say 'INCLUDING COL3' in the earlier example, so that COL1, COL2 and COL3 stay in the IOT and only COL4 overflows. And PCTTHRESHOLD can be set to, say, 5 or 10 so that you try to ensure an IOT block always contains 10 to 20 records -instead of the 2 you'd end up with if the default 50% kicked in.
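    To make the overflow discussion concrete, here is a minimal sketch of an IOT using OVERFLOW, PCTTHRESHOLD and INCLUDING, with made-up table and column names mirroring the four-column example above:

    CREATE TABLE doc_iot (
      doc_id  NUMBER        NOT NULL,  -- column 1: the primary key
      title   VARCHAR2(10),            -- column 2
      author  VARCHAR2(15),            -- column 3
      body    BLOB,                    -- column 4: large, pushed to overflow
      CONSTRAINT pk_doc_iot PRIMARY KEY (doc_id)
    )
    ORGANIZATION INDEX
      PCTTHRESHOLD 10       -- a row may use at most 10% of an index block
      INCLUDING author      -- keep doc_id..author in the IOT; body overflows
      OVERFLOW TABLESPACE users;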

  • Problem with a 2 columns Range Partitioning for a indexed organized table

    We have an index-organized table with a 2-column PK. The first field (datum) is a DATE field; the second field (installatieid) is a NUMBER(2) field.
    Every minute 7 records are inserted (installatieid 0-6).
    I would like to partition this table with one partition per year per installatieid.
    I tried to do it with:
    partition by range(datum, installatieid)
    (partition P_2004_0 values less than (to_date('2004-01','yyyy-mm'), 1)
    ,partition P_2004_6 values less than (to_date('2004-01','yyyy-mm'), 7)
    ,partition P_2005_0 values less than (to_date('2005-01','yyyy-mm'), 1)
    ,partition P_2005_6 values less than (to_date('2005-01','yyyy-mm'), 7)
    )
    but now only P_2004_0 and P_2005_0 are filled.
    I thought about combining a range partition on datum with a list subpartition on installatieid, but I read this is not allowed with an index-organized table.
    How can I solve this problem?

    partition by range(datum, installatieid)
    (partition P_2004_0 values less than
    (to_date('2004-01','yyyy-mm'))
    ,partition P_2004_6 values less than
    (to_date('2004-07','yyyy-mm'))
    ,partition P_2005_0 values less than
    (to_date('2005-01','yyyy-mm'))
    ,partition P_2005_6 values less than
    (to_date('2005-07','yyyy-mm'))
    )
    ? Sorry, haven't got time to test it this morning ;0)
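    A hedged side note on why the original attempt filled only two partitions: with multi-column range partitioning, the second key column is only compared when the first column value is exactly equal to the partition bound, so installatieid almost never influences the mapping here. A plain yearly range partition on datum alone behaves predictably; this is only a sketch using the poster's column names (waarde is a placeholder payload column):

    CREATE TABLE metingen_iot (
      datum          DATE      NOT NULL,
      installatieid  NUMBER(2) NOT NULL,
      waarde         NUMBER,
      CONSTRAINT pk_metingen PRIMARY KEY (datum, installatieid)
    )
    ORGANIZATION INDEX
    PARTITION BY RANGE (datum)
    ( PARTITION p_2004 VALUES LESS THAN (TO_DATE('2005-01-01', 'YYYY-MM-DD')),
      PARTITION p_2005 VALUES LESS THAN (TO_DATE('2006-01-01', 'YYYY-MM-DD')),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );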

  • Heap tables and index organized tables

    I am performing a migration from MS SQL Server to Oracle 10gR2 RDBMS. In MS SQL Server, all tables have a clustered PK index. Is it necessary to use index-organized tables for that migration, or are ordinary heap-organized tables enough? What are the differences between those tables and the MS SQL Server tables?
    Thanx

    In Oracle, the typical table is a standard 'heap' table. Stuff goes into the heap table randomly, and randomly comes out.
    An Index Organized Table (IOT) is somewhat similar to a Clustered Index in SS. It can have some performance advantages over heap tables - when the heap table has an associated index on the primary key.
    The IOT can also have some disadvantages, such as the need for an Overflow table to handle the extra data when a row doesn't conveniently fit in a block (implying multiple I/Os), and an extra translation table if bitmap indexes are required (implying extra I/Os).
    An unintelligent developer will generally believe that Oracle and SQL Server are the same - after all they both run SQL - and will attempt to port by a simple translation of syntax.
    An intelligent developer will test both styles of tables, during a port. Such a developer will also be quick to learn about the changes in internals (such as locking mechanisms) and will realize that different styles of coding are required for many application situations.
    I recommend reading Tom Kyte's books to get handle on pros and cons as well as testing techniques to help a developer become intelligent.
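    For testing the two styles side by side, as the reply suggests, the DDL differs only in the ORGANIZATION clause; a minimal sketch with invented names:

    -- Heap table plus a conventional primary key index
    CREATE TABLE orders_heap (
      order_id NUMBER       NOT NULL,
      status   VARCHAR2(10),
      CONSTRAINT pk_orders_heap PRIMARY KEY (order_id)
    );

    -- The same logical table as an IOT: the primary key index *is* the table
    CREATE TABLE orders_iot (
      order_id NUMBER       NOT NULL,
      status   VARCHAR2(10),
      CONSTRAINT pk_orders_iot PRIMARY KEY (order_id)
    )
    ORGANIZATION INDEX;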

  • How to handle Index Organized Tables in schema capture?

    I'm trying to set up schema-level Streams between 2 databases, source 10.2.0.4 and target 11g; one-way Streams.
    The source has index-organized tables with a ROWID column; Oracle Streams doesn't support the ROWID data type.
    Is there a workaround?
    Thanks

    At the moment, you can't do it declaratively. You have to do it in an event handler. Assuming you have the userid setup as a query parameter in the view object, something like this should get you started:
    public EventResult handleEvent(
        BajaContext context, Page page, PageEvent event) throws Throwable
    {
      HttpSession session = context.getServletRequest().getSession(true);
      ViewObject view = ServletBindingUtils.getViewObject(context);
      String userid = (String) session.getAttribute("userid");  // getAttribute returns Object
      view.setWhereClauseParam(0, userid);
      view.executeQuery();
      // ... build and return the EventResult
    }

  • Indexes on Index Organized Table

    I remember seeing a note regarding problems in using indexes on index-organized tables. Of course, the index would be something other than the one implied in the table definition. I can't seem to remember what the problem was.
    Has anyone used indexes on index organized tables successfully? (unsuccessfully?)
    Oracle 8i on a Tru64 platform
    Thanks
    -dave


  • Primary key constraint for index-organized tables or sorted hash cluster

    We had a few tables dropped without using CASCADE CONSTRAINTS. Now when we try to recreate a table we get an error message stating "name already used by an existing constraint". We cannot drop the constraint because it gives us the error "ORA-25188: cannot drop/disable/defer the primary key constraint for index-organized tables or sorted hash cluster". Is there some way around this? What can be done to correct this problem?

    What version of Oracle are you on?
    And have you searched for the constraint to see what it's currently attached to?
    select * from all_constraints where constraint_name = :NAME;
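    Following on from that query, if the conflicting constraint turns out to belong to a leftover index-organized table, the constraint cannot be removed on its own (that is what ORA-25188 is saying); the owning table itself has to be dropped or renamed. A hedged sketch, with placeholder object names, and the DROP only appropriate if that owning table really is obsolete:

    -- Identify the object that still owns the conflicting name
    SELECT owner, table_name, constraint_type, index_name
    FROM   all_constraints
    WHERE  constraint_name = 'PK_MY_IOT';   -- placeholder constraint name

    -- The PK of an IOT goes away with its table; drop the leftover table instead
    DROP TABLE some_owner.leftover_iot PURGE;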

  • Why Index-organized Table (IOT) is so slow during bulk/initial insert?

    Tested in 11.1.0.7.0 RAC on RHEL 5 with ASM and 16KB block size.
    Table is not wide: PK contains 4 columns and the leading 2 are compressed because they have relatively low cardinality; 2 other columns are included; the table contains another 4 audit columns; an overflow tablespace is defined.
    Created 2 tables, one is IOT, the other is a normal heap-organized table with "COMPRESS FOR ALL OPERATIONS". Both tables have been range partitioned by the first column into 8 partitions, and DOP is set to 8.
    Initial load volume is about 160M rows. Direct Path insert is used with parallel degree 8.
    After initial load, create PK for the 4 columns with the leading 2 compressed on the normal table. The IOT occupied about 7GB storage; the normal table occupied 9GB storage (avg_row_len = 80 bytes) and the PK occupied 5.8GB storage.
    The storage saving of IOT is significant, but it took about 60 minutes to load the IOT, while it only took 10 minutes to load the heap-organized table and then 6 minutes to create the PK. Overall, the bulk insert for IOT is about 4 times slower than the equivalent heap-organized table.
    I have ordered the 4 columns in the PK for the best compression ratio (lower cardinality comes first) and only compress the most repetitive leading columns (this matches Oracle's recommendation in index_stats after validate structure), partitioning is used to reduce contention, parallel degree is ample, /*+ append */ is used for the insert, and the ASM system is backed with a high-end SAN with a lot of I/O bandwidth.
    So it seems that such table is good candidate for IOT and I've tried a few tricks to get the best out of IOT, but the insert performance is quite disappointing. Please advise me if I missed anything, or you have some tips to share.
    Thanks a lot.
    CREATE TABLE IOT_IS_SLOW
    (
      GROUP_ID          NUMBER(2)   NOT NULL,
      BATCH_ID          NUMBER(4)   NOT NULL,
      KEY1              NUMBER(10)  NOT NULL,
      KEY2              NUMBER(10)  NOT NULL,
      STATUS_ID         NUMBER(2)   NOT NULL,
      VERSION           NUMBER(10),
      SRC_LAST_UPDATED  DATE,
      SRC_CREATION_DATE DATE,
      DW_LAST_UPDATED   DATE,
      DW_CREATION_DATE  DATE,
      CONSTRAINT PK_IOT_IS_SLOW
        PRIMARY KEY (GROUP_ID, BATCH_ID, KEY1, KEY2)
    )
    ORGANIZATION INDEX COMPRESS 2
    INCLUDING VERSION
    NOLOGGING
    PCTFREE 20
    OVERFLOW
    PARALLEL ( DEGREE 8 )
    PARTITION BY RANGE(GROUP_ID)
    (
      PARTITION P01 VALUES LESS THAN (2),
      PARTITION P02 VALUES LESS THAN (3),
      PARTITION P03 VALUES LESS THAN (4),
      PARTITION P04 VALUES LESS THAN (5),
      PARTITION P05 VALUES LESS THAN (6),
      PARTITION P06 VALUES LESS THAN (7),
      PARTITION P07 VALUES LESS THAN (8),
      PARTITION P08 VALUES LESS THAN (MAXVALUE)
    );
    Even if /*+ APPEND */ is ignored for an IOT, it is too slow, isn't it?

    David_Aldridge wrote:
    oftengo wrote:
    >
    Direct-path INSERT into a single partition of an index-organized table (IOT), or into a partitioned IOT with only one partition, will be done serially, even if the IOT was created in parallel mode or you specify the APPEND or APPEND_VALUES hint. However, direct-path INSERT operations into a partitioned IOT will honor parallel mode as long as the partition-extended name is not used and the IOT has more than one partition.
    >
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_9014.htm
    Hmmm, that's very interesting. I'm still a bit cynical though -- in order for direct path to work on an index organized table by appending blocks I would think that some extra conditions would have to be satisfied:
    * the table would have to be empty, or the lowest-sorting row of the new data would have to be higher than the highest-sorting row of the existing data
    * the data would have to be sorted
    ... that sort of thing. Maybe I'm suffering a failure of imagination though.
    Could be. From a Tanel Poder post:
    >
    The “direct path loader” (KCBL) module is used for performing direct path IO in Oracle, such as direct path segment scans and reading/writing spilled over workareas in temporary tablespace. Direct path IO is used whenever you see “direct path read/write*” wait events reported in your session. This means that IOs aren’t done from/to buffer cache, but from/to PGA directly, bypassing the buffer cache.
    This KCBL module tries to dynamically scale up the number of asynch IO descriptors (AIO descriptors are the OS kernel structures, which keep track of asynch IO requests) to match the number of direct path IO slots a process uses. In other words, if the PGA workarea and/or spilled-over hash area in temp tablespace gets larger, Oracle also scales up the number of direct IO slots. Direct IO slots are PGA memory structures helping to do direct IO between files and PGA.
    >
    So I'm reading into this that somehow these temp segments handle it, perhaps because with parallelism you have to be able to deal anyway. I speculate the data is inserted past the high water mark, then any ordering issues left can be resolved before moving the high water mark(s). Maybe examining where segments wind up in the data files can show how this works.
    >
    I can't find anything in the documentation that speaks to this, so I wonder whether the docs are really talking about a form of conventional path parallel insert into an IOT and not true direct path inserts.
    One way to check, I think, would be to get the wait events for the insert and see whether the writes are direct.
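    One way to do that check, sketched below; the SID bind is a placeholder for the session running the insert:

    -- Run the INSERT /*+ APPEND */ in one session, then from another session
    -- look at its wait-event totals: 'direct path write' activity suggests a
    -- true direct-path load, while 'db file ...' waits suggest conventional path.
    SELECT event, total_waits, time_waited
    FROM   v$session_event
    WHERE  sid = :insert_session_sid          -- placeholder bind variable
    AND   (event LIKE 'direct path%' OR event LIKE 'db file%')
    ORDER  BY time_waited DESC;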

  • How to hide the data in particular table in oracle 10g

    How can I hide the data in a particular table in Oracle 10g?
    I want the steps.

    If it's on a report you can always hide the column (key figure or selection: Display - Hide). Why do you want to have it on the report if it is to be hidden in the first place?

  • Retrieve data from a large table from ORACLE 10g

    I am working with a Microsoft Visual Studio project that requires retrieving data from a large table in an Oracle 10g database and exporting the data to the hard drive.
    The problem here is that I am not able to connect to the database directly because of a license issue, but I can use a third-party API to retrieve data from the database. This API has sufficient privilege/license permission on the database to perform retrieval of data. So I am not able to use DTS/SSIS or another tool to import data by connecting to the database directly.
    My approach here is: first retrieve the data using the API into a .NET DataTable and then dump the records from it to the hard drive in a specific format (perhaps an Excel file or another SQL Server database).
    When I try to retrieve the data from a large table having over 13 lakh (1.3 million) records (3-4 GB) into a DataTable using the Visual Studio project, I get an out-of-memory exception.
    Is there any better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
    Any help on this problem will be highly appreciated.
    Thanks in advance...
    -Jahedur Rahman
    Edited by: Jahedur on May 16, 2010 11:42 PM

    Girish...Thanks for your reply...But I am sorry for the confusions. Let me explain that...
    1."export the data into another media into the hard drive."
    What does it mean by this line i.e. another media into hard drive???
    ANS: Sorry...I just want to write the data in a file or in a table in SQL server database.
    2."I am not able to connect to the database directly because of license issue"
    huh?? I never heard this question that a user is not able to connect the db because of license. What error / message you are getting?
    ANS: My company uses a 3rd party application that uses Oracle 10g. My company is licensed to use the 3rd party application (app + database is a package) and did not purchase an Oracle license to use the database directly. So I cannot connect to the database directly.
    3. I am not sure which API you are talking about, but I am running an application with a Visual Studio data grid or similar kind of controls, in which I can select (select query) as many rows as I need; no issue.
    ANS: This API is provided by the 3rd party application vendor. I can pass a query to it and it returns a datatable.
    4."better way to retrieve the records chunk by chunk and do the export without loosing the state of the data in the table?"
    ANS: As I get a system error (out of memory) when I select all rows in a datatable at a time, I wanted to retrieve the data in multiple phases.
    E.g: 1 to 20,000 records in 1st phase
    20,001 to 40,000 records in 2nd phase
    40,001 to ...... records in 3rd phase
    and so on...
    Please let me know if this does not clarify your confusions... :)
    Thanks...
    -Jahedur Rahman
    Edited by: user13114507 on May 12, 2010 11:28 PM
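    Since the API simply accepts a query, one common way to fetch in chunks is ROWNUM-based pagination over a stable ordering; a sketch, assuming a hypothetical table BIG_TABLE with primary key ID:

    -- Fetch rows 20,001 - 40,000 of BIG_TABLE, ordered by its primary key
    SELECT *
    FROM  (SELECT t.*, ROWNUM rn
           FROM  (SELECT * FROM big_table ORDER BY id) t
           WHERE ROWNUM <= 40000)
    WHERE rn > 20000;

    Note that unless every chunk is read as of the same point in time (for example via a flashback query or within a single read-only/serializable transaction), concurrent changes can still shift rows between chunks.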

  • How to provide access to  v$tables in oracle 10g to user

    Hi,
    Can anyone suggest how to provide access to the V$ tables in Oracle 10g to a user?
    It's required for an auditor.
    Please help me.
    regards

    user12009184 wrote:
    Hi, I have to provide access to all V$ tables; it is required for the configuration of a new tool. Thanks.
    You can grant the SELECT_CATALOG_ROLE to the user. It should provide all the required access to the general V$* and DBA_* views.
    GRANT SELECT_CATALOG_ROLE TO USER;
    Let me know if this helps.
    Regards,
    Rizwan
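    If the auditor only needs a handful of views rather than the whole catalog role, object grants can be made on the underlying V_$ views (the V$ names are public synonyms and cannot be granted on directly); a sketch with a placeholder user name:

    GRANT SELECT ON v_$session  TO audit_user;
    GRANT SELECT ON v_$instance TO audit_user;
    -- audit_user then queries them as V$SESSION and V$INSTANCE as usual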

  • Replication of table from Oracle 10g to sql server 2000

    Could I replicate a table from Oracle 10g to SQL Server online? We have tables with the same configuration, and if any change happens to that table in Oracle 10g or SQL Server we need to replicate that change to the other database.
    What is the solution for this two-way replication between SQL Server and Oracle 10g?

    But the tutorial is saying that I will have to install Oracle database software on the server already running SQL Server. Is that the client or the whole database? If it is the whole database, it will consume a lot of resources.
    I want to find out whether, for Heterogeneous Services ODBC, we need third-party software for a SQL Server ODBC driver on Linux, and secondly, if we use Transparent Gateway, what the steps are for its configuration.
    I could not find steps for configuring Transparent Gateway. When I try to install Transparent Gateway from the Universal Installer, it is not there. Where do I find it? Do I need to purchase it too?
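    Once either Heterogeneous Services (ODBC) or the Transparent Gateway is configured and reachable through a TNS alias, the Oracle side talks to SQL Server through an ordinary database link; a sketch with placeholder credentials and alias names:

    -- 'mssql_hs' is a placeholder TNS alias pointing at the HS/gateway listener entry
    CREATE DATABASE LINK mssql_link
      CONNECT TO "sqlserver_user" IDENTIFIED BY "sqlserver_password"
      USING 'mssql_hs';

    -- Quoted identifiers are usually needed because SQL Server names are case-sensitive through HS
    SELECT * FROM "MyTable"@mssql_link;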
