To Partition a 4NF table or convert to 2NF?

Hi,
In one of our applications we have a fairly simple table that holds details about users. To keep things simple, it is created as a simple triple of user id, property name, property value.
So...
4th normal form
CREATE TABLE user_metadata (
  user_id         VARCHAR2(20 CHAR) NOT NULL,
  parameter_name  VARCHAR2(5 CHAR)  NOT NULL,
  parameter_value VARCHAR2(20 CHAR) NOT NULL
);
This has worked well for the typically small number of users and properties.
We now have a larger proposition with 20-50 million users and 15-50 properties; not all properties are known yet; and for those properties that are known we do not know the distribution of values within a key.
So the question we have is: do we change to 2NF and make use of (lots of) bitmapped indexes? e.g.
CREATE TABLE user_metadata (
  user_id                VARCHAR2(20 CHAR) NOT NULL,
  zone_name              VARCHAR2(50 CHAR),
  age                    NUMBER(3,0),
  gender                 CHAR(1),
  profession_description ...,
  town_name              ...,
  etc.
);
Or do we stick with 4NF and make use of partitioning based on the value of the parameter_name column?
From a product viewpoint we would prefer to keep with 4NF as it means no application changes for us to make, but we need to advise the client as to the best way to structure the table for performance.
I am not a practicing DBA, but I am the person who will be put on a plane to the client if this solution does not perform ;-) so I have some, possibly rtfm questions:
* Can we index within a partition?
** For example, if one parameter_name is "City" then we would have a partition that contains all rows which represent City. Can I then index on the parameter_value column of the City partition, or does finding all users in NY involve scanning the whole City partition? (A sketch of what I mean follows after these questions.)
* Is the additional storage required for 4NF likely to have a significant impact on performance? There is considerable duplication of storage with the 4NF table.
* Which will perform better?
* Which will be more complex to manage as the users grow from 20M to 50M?
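To make the 4NF option concrete, here is a sketch of what I mean by partitioning on parameter_name - LIST partitioning with a LOCAL index; the partition and index names are purely illustrative:
CREATE TABLE user_metadata (
  user_id         VARCHAR2(20 CHAR) NOT NULL,
  parameter_name  VARCHAR2(5 CHAR)  NOT NULL,
  parameter_value VARCHAR2(20 CHAR) NOT NULL
)
PARTITION BY LIST (parameter_name)
( PARTITION p_city VALUES ('City'),
  PARTITION p_age  VALUES ('Age'),
  PARTITION p_rest VALUES (DEFAULT)
);
-- a LOCAL index is equipartitioned with the table, so each partition
-- gets its own index segment on (parameter_name, parameter_value)
CREATE INDEX um_value_ix ON user_metadata (parameter_name, parameter_value) LOCAL;
If that is roughly right, then a predicate such as parameter_name = 'City' AND parameter_value = 'NY' should prune to the City partition and probe its local index segment rather than scan the whole partition.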
Thanks in advance,
Karl.

Thinking about this some more, I think you really have two separate requirements. One is an operational requirement to store this user metadata (attributes) and let it be edited, or added for new users, easily. The other is for analytical queries that combine these attributes in different combinations:
* send a message to all users in NY over 40 whose favorite sport is Football
* offer a discount to a user whose birthday is this week.
The exact list of requirements is not detailed by the client yet, but will be along the lines of "use any of the parameters to create a set of users". You seem to have a working solution to the operational side using the 4NF triples - any set of attributes of a user can be easily represented and stored. What you do not have is a solution to the analytical query side. As already mentioned, querying this metadata table directly involves some horrendous SQL and will probably perform poorly due to lack of indexes etc.
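For illustration (my own sketch, with hypothetical parameter names), even a simple "all users in NY over 40" needs one self-join of the triples table per attribute:
-- sketch only: one self-join per attribute condition
SELECT c.user_id
FROM   user_metadata c
JOIN   user_metadata a ON a.user_id = c.user_id
WHERE  c.parameter_name = 'City' AND c.parameter_value = 'NY'
AND    a.parameter_name = 'Age'  AND TO_NUMBER(a.parameter_value) > 40;
Every additional condition adds another self-join, which is why this style of query gets horrendous quickly.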
I think the best solution is to have two separate sets of tables in the database. I cannot see why the analytical queries have to run against the current, up-to-date data. Surely these analytical queries would be just as valid against yesterday's data, or even the day before that or the week before. A change in a very small percentage of the metadata should not materially affect the kind of queries you are talking about.
If I am right, then you can also have your flattened-out 2NF table, with each attribute as a named data column of the correct data type. And as you say, you can have bitmap indexes on each of the columns, which will work well for the analytical type of queries you have described - lots of ANDs of separate conditions.
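Something along these lines (a sketch; the 2NF table name and index names are my own):
-- sketch: bitmap indexes suit low-cardinality columns combined with ANDs/ORs
CREATE BITMAP INDEX um2_gender_bix ON user_metadata_2nf (gender);
CREATE BITMAP INDEX um2_zone_bix   ON user_metadata_2nf (zone_name);
CREATE BITMAP INDEX um2_age_bix    ON user_metadata_2nf (age);
The optimizer can then combine the relevant bitmaps for a query such as zone_name = 'NY' AND age > 40.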
What you need is a way of propagating the data from the 4NF table to the 2NF table once a day, or at whatever frequency you want. The crudest option is to always recreate the data, copying it all across. Of course performance will be slow, and will get worse as the data set grows. So you really want a way of tracking the changes to the 4NF metadata, and then only propagating the changes over to the 2NF copy. There are several ways you could achieve this.
Materialized Views would be worth exploring, as they seem to do most of this already. You could also create your own staging table for the changes, and use a trigger on the 4NF table to populate the staging table on any change. Then use the staging table to update the 2NF data set at your desired frequency, and clear the staging table (delete its records). Of course you would have to write your own SQL for this, but your table structures are relatively simple and straightforward.
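As a sketch of the trigger route (the staging table and trigger names are my own invention):
-- sketch: record which users changed so only their rows are re-flattened
CREATE TABLE user_metadata_stage (
  user_id VARCHAR2(20 CHAR) NOT NULL
);

CREATE OR REPLACE TRIGGER user_metadata_track
AFTER INSERT OR UPDATE OR DELETE ON user_metadata
FOR EACH ROW
BEGIN
  INSERT INTO user_metadata_stage (user_id)
  VALUES (COALESCE(:NEW.user_id, :OLD.user_id));
END;
/
The refresh job then rebuilds just the 2NF rows for the user_ids present in the staging table, and deletes those staging records afterwards.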
I think there are several benefits to this approach:
* Separate data structures optimized for each type of requirement
* Good indexing on the 2NF data for good query performance - bitmap indexes
* Updates to the 2NF data done separately from OLTP updates to the 4NF data - no contention on locking of bitmap indexes
* No performance impact on the 4NF data sets and the OLTP updates to these
* Minimal if any changes to your existing code on the 4NF tables
I think you might have some data consistency and integrity issues though, which you need to be careful of. Do you have any logic that enforces the rule that all metadata attributes must be present for each user? Do you somehow enforce that each user has an 'age' or 'city' or 'birth date'? The reason I ask is that when you take the separate rows of triples and merge them together to form one wide row of many columns, you might end up missing some of the data attributes for some of the users. This would result in a NULL being stored in the 2NF table. Is this what you want?
Unfortunately this lack of data integrity is really inherent in the 4NF form of separate triples. You cannot have a data integrity constraint on individual rows that will enforce that each user must have a value for every parameter name. And you can only check that a user has values for all parameters after the data rows have been stored for the other parameters. Normal data integrity constraints can be checked and enforced before a data row is stored, i.e. a single new data row can have all its columns checked for NULLs, and all foreign keys can be checked, during the INSERT before Oracle physically adds the row to the table. Your type of data integrity is across a set of data rows, which you would have to handle some other way.
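The best you can probably do is a periodic check rather than a constraint, something like this sketch (the expected_parameters list is hypothetical):
-- sketch: list users missing one or more of the expected parameters
SELECT user_id
FROM   user_metadata
GROUP  BY user_id
HAVING COUNT(DISTINCT parameter_name) <
       (SELECT COUNT(*) FROM expected_parameters);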
Just some more thoughts,
John

Similar Messages

  • Problem in truncate/drop partitions in a table having nested table columns.

    Hi,
I have a table that has 2 columns of type nested table. Now in the purge process, when I try to truncate or drop a partition from this table, I get an error that I can't do this (because the table has nested tables). Can anybody tell me how I will be able to truncate/drop a partition from this table? If I change the column types from nested table to varray type, will it help?
    Also, is there any short method of moving existing data from a nested table column to a varray column (having the same fields as the nested table)?
    Thanks in advance.

    >
I have a table that has 2 columns of type nested table. Now in the purge process, when I try to truncate or drop a partition from this table, I get an error that I can't do this (because the table has nested tables). Can anybody tell me how I will be able to truncate/drop a partition from this table?
    >
    Unfortunately you can't do those operations when a table has a nested table column. No truncate, no drop, no exchange partition at the partition level.
A nested table column is stored as a separate table and acts like a 'child' table with foreign keys to the 'parent' table. It is these 'foreign keys' that prevent the truncation (just like normal foreign keys prevent truncating partitions and must be disabled first), but there is no mechanism to 'disable' them.
    Just one excellent example (there are many others) of why you should NOT use object columns at all.
    >
    IF I change column types from nested table to varray type, will it help?
    >
    Yes but I STRONGLY suggest you take this opportunity to change your data model to a standard relational one and put the 'child' (nested table) data into its own table with a foreign key to the parent. You can create a view on the two tables that can make data appear as if you have a nested table type if you want.
Assuming that you are going to ignore the above advice, just create a new VARRAY type and a table with that type as a column. Remember VARRAYs are defined with a maximum size, so the number of nested table records needs to be within the capacity of the VARRAY type for the data to fit.
    >
    Also, is there any short method of moving existing data from a nested table column to a varray column (having same fields as nested table)?
    >
    Sure - just CAST the nested table to the VARRAY type. Here is code for a VARRAY type and a new table that shows how to do it.
-- new array type
    CREATE OR REPLACE TYPE ARRAY_T AS VARRAY(10) OF VARCHAR2(64);
    /
    -- new table using the new array type - NOTE there is no nested table
    -- storage clause; arrays are stored inline
    CREATE TABLE partitioned_table_array
         ( id_      INT,
           arra_col ARRAY_T )
         PARTITION BY RANGE (id_)
         ( PARTITION p1 VALUES LESS THAN (40)
         , PARTITION p2 VALUES LESS THAN (80)
         , PARTITION p3 VALUES LESS THAN (100)
         );
    -- insert the data from the original table, converting the nested table data to the varray type
    INSERT INTO partitioned_table_array
    SELECT id_, CAST(nested_col AS ARRAY_T) FROM partitioned_table;
    Naturally, since there is no more nested table storage, you can truncate or drop partitions in the above table:
    ALTER TABLE partitioned_table_array TRUNCATE PARTITION p1;
    ALTER TABLE partitioned_table_array DROP PARTITION p1;

  • Partitioning (range) a table values less than 'A'

I am referring to
    http://docs.oracle.com/cd/B10501_01/server.920/a96524/c12parti.htm
    http://docs.oracle.com/cd/B19306_01/server.102/b14220/partconc.htm
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    "CORE     10.2.0.1.0     Production"
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
create table drop_it as select * from mv_prod_search_det2;
    CREATE TABLE DROP_IT_P (
      PROD_DETAILS VARCHAR2(1000 BYTE),
      SIGN         VARCHAR2(42 BYTE) )
    PARTITION BY RANGE (PROD_DETAILS)
    ( PARTITION MAX_VALUE VALUES LESS THAN (MAXVALUE) );
    update drop_it set prod_details = upper(prod_details);
    72000 rows updated
    ALTER TABLE drop_it_p EXCHANGE PARTITION MAX_VALUE WITH TABLE drop_it WITH VALIDATION;
    select * from mv_prod_search_det2;
    72000 rows selected
    exec dbms_stats.gather_database_stats;
    select * from drop_it_p partition (max_value);
    ALTER TABLE DROP_IT_P
      SPLIT PARTITION MAX_VALUE AT ('B%')
      INTO (PARTITION p_a,
            PARTITION MAX_VALUE);
    select * from drop_it_p partition (p_a);
    6785 rows selected
    select * from drop_it_p partition (p_a) where prod_details not like 'A%';
    696 rows selected
    It even shows me values that start with W, V, I, 1, 2, 3, 4, 24, 5 etc.
    Although the number is small (696 out of 6785), this is undesired.
    Please help me eliminate these rows.
    thank you
    this thread is related to tuning regexp_like by author 946207
    please refer
    tuning regexp_like
    and partitioning a table by 946207

    First, when you post related threads you should cross-link them so people have access to all of the information about the problem you are trying to work with.
    partitioning a table
    >
    it even shows me values that start with W,V,I,1,2,3,4,24,5 etc
    although the number is less(696out of 6785) this is undesired
    >
    Yes - that is what it should be doing.
These are the steps you took to populate the table:
    1. You originally inserted ALL data into table 'drop_it' with no restriction on the PROD_DETAILS values.
    create table drop_it as select * from mv_prod_search_det2;
    2. Then you converted the PROD_DETAILS values to upper case. That has no effect on numbers or other non-alphabetic characters.
    update drop_it set prod_details = upper(prod_details);
    3. Then you created a new table with only one partition using MAXVALUE.
    CREATE TABLE DROP_IT_P (
      PROD_DETAILS VARCHAR2(1000 BYTE),
      SIGN         VARCHAR2(42 BYTE) )
    PARTITION BY RANGE (PROD_DETAILS)
    ( PARTITION MAX_VALUE VALUES LESS THAN (MAXVALUE) );
    4. Then you populated the partitioned table by exchange. It now has the same data, including the numeric data.
    ALTER TABLE drop_it_p EXCHANGE PARTITION MAX_VALUE WITH TABLE drop_it WITH VALIDATION;
    5. Then you split the one MAXVALUE partition into two partitions: one with data < 'B%' and one with the remaining data that sorts higher based on your character set.
    ALTER TABLE DROP_IT_P
      SPLIT PARTITION MAX_VALUE AT ('B%')
      INTO (PARTITION p_a,
            PARTITION MAX_VALUE);
    The split on 'B%' when creating partition p_a is equivalent to 'WITH VALUES < 'B%''. Since PROD_DETAILS is a VARCHAR2 datatype, that 'LESS THAN' comparison uses the character order of your database character set, and most, if not all, character sets have characters that sort lower than the uppercase alphabetic characters.
For example, in the ASCII character set an uppercase 'A' is decimal 65, so 64 other characters (including the digits 0-9) sort lower than 'A'.
    http://www.asciitable.com/
    As the doc you cited shows
    >
    •All partitions, except the first, have an implicit lower bound specified by the VALUES LESS THAN clause on the previous partition.
    >
    That 'first' partition has no lower bound so ALL data, including digits, that sort less than 'B%' will be in that partition.
    >
    please help me eliminate these rows
    >
Either don't select the data to begin with, or remove it using a simple DELETE. You can also do the case conversion when you select the data:
    create table drop_it as
    select upper(prod_details) prod_details, sign
    from mv_prod_search_det2
    where upper(prod_details) >= 'A';
    Before you do that you should make sure you define the actual business rule for the data you really want to keep and exclude.
    Because most, if not all, character sets also have characters that sort HIGHER than the alphabetic characters. That ASCII table shows five of them. If you don't filter them out you will get data where the values start with those characters.
    Even if you do filter them out there is nothing in what you posted that would prevent a user from inserting that data back into the table.
    And, of course, there are characters that sort BETWEEN the lower and upper case alphabetics.
You need to determine what the allowable characters are in the PROD_DETAILS column and add code (e.g. a check constraint or trigger) to make sure users can't enter data that includes anything else.

  • Partitioning a fact table

I am curious to hear techniques for partitioning a fact table with OWB. I know HOW to set up the partitioning for the table; what I am curious about is what type of partitioning everyone suggests. Take the following example... Let's say we have a sales transaction fact table. It has dimensions of Date, Product, and Store. An immediate partitioning idea is to partition the table by month. But my curiosity arises in the method used to partition the fact table. There is no longer a true date field in the fact table to do range partitioning on. And hash partitioning will not distribute the records by month.
    One example I found was to "code" the surrogate key in the date dimension so that it was created in the following manner "YYYYMMDD". Then you could use the range partitioning based on values of the key in the fact table less than 20040200 for Jan. 2004, less than 20040300 for Feb. 2004, and so on.
    Is this a good idea?

    Jason,
In general, obviously, query performance and scalability benefit from partitioning. Rather than hitting the entire table when retrieving data, you would only hit a part of the table. There are two main strategies for choosing a partitioning scheme:
    1) Users always query specific parts of the data (e.g. data from a particular month), in which case it makes sense to size the partitions to match those parts. If your end users often query by month or compare data on a month-by-month basis, then partitioning by month may well be the right strategy.
    2) Improve data loading speed by creating partitions. The database supports partition exchange loading, which Warehouse Builder supports as well, and which enables you to swap a temporary table and a partition in one operation. In general, your load frequency then decides your partitioning strategy: if you load on a daily basis, perhaps you want daily partitions. Beware that for Warehouse Builder to use the partition exchange loading feature you will have to have a date field in the fact table, so you would change the time dimension.
    In general, your suggestion for the generated surrogate key would work.
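As a sketch of that approach (table, column, and partition names are illustrative, using the key values from your example):
    -- sketch: range partitioning on a YYYYMMDD-coded surrogate date key
    CREATE TABLE sales_fact (
      date_key    NUMBER(8) NOT NULL,
      product_key NUMBER    NOT NULL,
      store_key   NUMBER    NOT NULL,
      amount      NUMBER
    )
    PARTITION BY RANGE (date_key)
    ( PARTITION p_2004_01 VALUES LESS THAN (20040200),
      PARTITION p_2004_02 VALUES LESS THAN (20040300)
    );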
    Thanks,
    Mark.

  • Maximum No. of Partitions in a Table?

    Hi,
    What is the Maximum no.of partitions in a table?
    Best Regards,
    Naresh Kumar C.

It all comes down to what "1K" means.
    First option (more popular): 1K = 1024, so the documented limit of 1024K - 1 partitions is 1024*1024 - 1 = 1048575.
    Second option (less popular): 1K = 1000, so 1024K - 1 = 1024*1000 - 1.
    http://en.wikipedia.org/wiki/Byte
    Do you really need 1024K - 1 partitions? (Either way, that is already a huge number of partitions.) Good luck with the maintenance tasks...
    Nicolas.

  • Diagnostic pack, Tuning pack are not in OEM 10g, Add partition to a table

    Hi All,
    I have 2 questions:
Q.1: In the Oracle 9i Oracle Enterprise Manager java console, we had the "Diagnostic Pack" and "Tuning Pack", which helped us see performance tuning info (session statistics such as CPU time, PGA memory, physical disk reads, etc.) and provided help for SQL tuning and performance improvements. But in 10g, the same product (Oracle Enterprise Manager java console) does not include these 2 packs, due to which we are unable to monitor and control performance tuning information/statistics. Now my question is, do we need to install these 2 packs separately in 10g? If yes, where can we get them? (I am sure in 9i these packs came with the OEM console and we didn't need to get them separately.)
    Q.2: I have a partitioned table having 5 partitions based on range partitioning. Now our requirements have changed and we need to insert values beyond the 5th partition, so we need a 6th partition. Can someone tell me the command to add a new partition to the table? I tried "Alter table xxx add partition yyy ....", but it didn't work. If anyone can give me the correct syntax, it will be great.
    Thanks in advance.

The OP is talking about the java-based EM, not the web-based DBConsole. In fact he/she has to change to DBConsole, because the 10g java EM no longer supports these tuning features.
    The ALTER TABLE ... partition syntax depends on the kind of partitioning; see the documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#i2131048
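For a range-partitioned table the general form is along these lines (a sketch; the partition name and bound are illustrative):
    -- sketch: add a 6th range partition above the current highest bound
    ALTER TABLE xxx ADD PARTITION p6
      VALUES LESS THAN (TO_DATE('2011-01-01', 'YYYY-MM-DD'));
    Note that ADD PARTITION only works above the current highest bound; if the table's top partition is bound by MAXVALUE you must use SPLIT PARTITION instead, which is a common reason a plain ADD "didn't work".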
    Werner

  • Table partitioning (intervel partitioning) on existing tables in oracle 11g

Hi, I'm a newbie to table partitioning. I'm using 11g. I have a table of 32 GB (22 million records) and I want to apply interval partitioning to it. I created an empty interval-partitioned table with the same columns as the source table, took a dump of the source table, and want to import it into the new partitioned table. Can you please suggest how to import the table dump into the new table? Also, is there any better way to do the same?

    Hi,
    imp user/password file=exp.dmp ignore=y
    The ignore=y causes the import to skip the table creation and continues to load all rows.
On the other hand, you can insert the data into the partitioned table with a subquery from the non-partitioned table, such as:
    insert into partitioned_table
    select * from original_table;
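If you have not created the target table yet, an interval-partitioned table (new in 11g) can be declared along these lines (a sketch; all names are illustrative):
    -- sketch: monthly interval partitioning; Oracle creates new
    -- partitions automatically as rows arrive
    CREATE TABLE big_table_part (
      id         NUMBER,
      created_dt DATE
    )
    PARTITION BY RANGE (created_dt)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    ( PARTITION p0 VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')) );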
    Hope it helps,

  • How to create partition from one table to another?

    Hi,
    Can any one help me in this :
I have copied a table from one schema to another using the following command. The table is around 300 GB in size and the partitions were not copied. Now I have a request to create partitions in the destination table similar to the source table.
    CREATE TABLE user2.table_name AS SELECT * FROM  user1.table_name;
I am using an Oracle 9i database. This is very urgent; could anyone please suggest how I can create the partitions in the user2 table as in the user1 table.

If you have TOAD / SQL Developer, get the partition DDL from the source table and execute it for the newly created table.
    or you can use DBMS_METADATA:
    SET LONG 10000
    SELECT dbms_metadata.get_ddl('TABLE', 'TEST')
    FROM DUAL;
    Thanks,

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on 11gR2 oracle 3node RAC database. below are the db details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
in my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
1. CREATE TABLE TRANSACTION_N
       PARTITION BY RANGE ("CREATED_DT")
       ( PARTITION DATA1 VALUES LESS THAN (TO_DATE('2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART1,
         PARTITION DATA2 VALUES LESS THAN (TO_DATE('2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART2,
         PARTITION DATA3 VALUES LESS THAN (TO_DATE('2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART3 )
       AS (SELECT * FROM TRANSACTION WHERE 1=2);
2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
3. Create the required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the tables and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much more than 8 hrs. But the problem is we get only 2 to 3 hrs of downtime in production to implement these changes for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    Thanks,
    Hari

    >
in my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
       PARTITION BY RANGE ("CREATED_DT")
       ( PARTITION DATA1 VALUES LESS THAN (TO_DATE('2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART1,
         PARTITION DATA2 VALUES LESS THAN (TO_DATE('2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART2,
         PARTITION DATA3 VALUES LESS THAN (TO_DATE('2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART3 )
       AS (SELECT * FROM TRANSACTION WHERE 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. Create the required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the tables and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much more than 8 hrs. But the problem is we get only 2 to 3 hrs of downtime in production to implement these changes for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    >
Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE#2 - You likely cannot move that much data in the 2 to 3 hours window that you have available for down time even if all you had to do was copy the existing datafiles.
    ISSUE#3 - Even if you can avoid issue #2 you likely cannot rebuild ALL of the required indexes in whatever remains of the outage windows after moving the data itself.
    ISSUE#4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production you are taking on a tremendous amount of risk.
    ISSUE#5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system you will have great difficulty overcoming issue #4 since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you the equivalent, or better, performance.
    ISSUE#6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
1. use DBMS_REDEFINITION to do the partitioning online (a minimal sketch follows at the end of this reply). See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can you could use online redefinition that ignores older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
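The DBMS_REDEFINITION route, reduced to its bare bones, looks like this sketch (schema and table names are illustrative; error handling and ABORT_REDEF_TABLE are omitted):
    -- sketch: redefine TRANSACTION into the partitioned interim table TRANSACTION_N
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TRANSACTION');
      DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_N');
      -- copies indexes, constraints, triggers and grants to the interim table
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'TRANSACTION', 'TRANSACTION_N',
                                              num_errors => l_errors);
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_N');
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TRANSACTION', 'TRANSACTION_N');
    END;
    /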

  • Use multiple partitions on a table in query

    Hi All,
    Overview:-
I have a table - TRACK - which is partitioned on a weekly basis. I'm using this table in one of my SQL queries in which I need to find a monthly count of some column data. The query looks like:
Select count(*)
    from Barcode B
    inner join Track partition (P99) T
        on B.item_barcode = T.item_barcode
    where B.create_date between 20120202 and 20120209;
    In the above query I am fetching the count for one week using the partition created on that table during that week.
    Desired output:-
I want to fetch data between 01-Feb and 01-Mar, using the rest of that table's partitions for the duration in the above query. The weekly partitions currently present for the Track table are -
    P(99) - 20120202
    P(100) - 20120209
    P(101) - 20120216
    P(102) - 20120223
    P(103) - 20120301
My question is: above I've used one partition successfully; now how can I use the other 4 partitions in the same query to find the count for one month (i.e. from 20120201 to 20120301)?
    Environment:-
    Oracle version - Oracle 10g R2 (10.2.0.4)
    Operating System - AIX version 5
    Thanks.

    I'm with damorgan on this one, though I was lazy and only read it twice.
    You've got a mix of everything in this one and none of it is correct.
    1. If B.create_date is VARCHAR2 this is the wrong way to store dates.
    2. All Track partitions are needed for one month if you only have the 5 partitions you listed so there is no point in mentioning any of them by name. So the answer to 'how can I use the other 4 partitions' is - don't; Oracle will use them anyway.
3. BETWEEN 01-Feb and 01-Mar - be aware that the BETWEEN operator includes both endpoints, so if you actually used it in your query the data would include March 1.
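In other words (a sketch, assuming create_date keeps its numeric YYYYMMDD form), just supply the full date range and let partition pruning pick the partitions; the half-open range also avoids the BETWEEN endpoint problem:
    SELECT COUNT(*)
    FROM   barcode b
    JOIN   track t ON b.item_barcode = t.item_barcode
    WHERE  b.create_date >= 20120201
    AND    b.create_date <  20120301;
    Whether Oracle can prune Track's partitions depends on Track's own partition key, but either way there is no need to name partitions explicitly.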

  • Partitioning the fact table

    Hi Gurus,
    I have a question regarding partitioning the cube. When you partition the cube from the Extras menu, will it partition the F table or is it the E table or both.
    Second question: After partitioning, how will i know the newly created table names.
    Thanks,
    ANU

    Hi Anu,
Partitioning: need and definition
    An InfoCube contains a huge amount of data, and the table size of the cube increases regularly, so when a query is executed on the cube it has to check the entire table to get the records, for example Sales in Jan 08.
    Advantage of partitioning
    We can partition the cube so that smaller tables are formed, and report performance increases because the query hits only the relevant partition.
    Steps for partitioning
    1. To partition a cube, it must not contain data.
    2. Partitioning can be done using the time characteristics 0CALMONTH and fiscal period.
    Steps:
    1. In change mode of the cube, select the Extras menu and the Partitioning option.
    2. Select the 0CALMONTH time characteristic.
    3. It will ask for the time period to partition on and the number of partitions; provide them.
    4. Activate the cube.
    In BI 7 we can partition the cube even if it contains data:
    select the cube, right-click, and choose Repartitioning. There you can
    1. delete existing partitions
    2. create new ones
    3. merge partitions.
    Partitioning of the Cube
    http://help.sap.com/saphelp_nw04s/helpdata/en/0a/cd6e3a30aac013e10000000a114084/frameset.htm
    Partitioning Fields and Values
    partition of Infocube
    Partitioning of cube using Fiscal period
    Infocube Partition
    After Partition
You can find the partitions in the following tables in SE11:
    E tables: /BIC/E* or /BIC/E(cube name)
    Please also go through the following links
    Partitioning of Cube
    partitioning
    Partitioning of ODS object
    /thread/733456 [original link is broken]
Hope I have answered your question.
    Assign points if helpful,
    Thanks and regards
    Bala

  • How to create a partition on existing table?

    Hey
Could someone please tell me how to create a partition on an existing table?

Could someone please tell me how to create a partition on an existing table?
    You can't - that isn't possible. Unless a table is already partitioned you can NOT create another partition on it.
    You must either redefine the table as a partitioned table (using the DBMS_REDEFINITION package) or create a new partitioned table and move the data to its new partitions.
    The choice will depend on how much data the existing table has and whether you can do it offline.

  • Partitioning on exisitng table

    All,
I want to partition an existing table which does not have any partitions.
    My concern: it is a crucial table holding millions of records, and records are being inserted on a regular basis.
    So if I create a new partition on this table, will it cause any data loss or have any other impact?
    This table has foreign key references too.
    Oracle 10g on Solaris 9.
    Your suggestions are highly appreciated.
    Thanks in advance

What version? 10g is a marketing label, not a version:
    SELECT * FROM v$version;
    What kind of partitioning? RANGE? HASH? LIST? COMPOSITE?
    And you cannot turn a normal heap table into a partitioned table. You need to create a partitioned table and then do an "exchange."
    I would suggest you read the docs and then ask your question again after you provide appropriate detail.
    But, as a blanket statement, if you do it properly you will not lose data.

  • Partition on MTL_SYSTEM_ITEMS_B table?

    Hi,
The MTL_SYSTEM_ITEMS_B table has 20 crore (200 million) rows. Many of the concurrent programs accessing MTL_SYSTEM_ITEMS_B are taking a very long time to run.
    I want to partition MTL_SYSTEM_ITEMS_B. Does Oracle have any recommendation on what key to use and the type of partitioning?
    Are there any open issues with partitioning this table?
    Thanks

    Hi,
    See this thread.
    Table partitioning on MTL_SYSTEM_ITEMS_B table?
    Re: Table partitioning on MTL_SYSTEM_ITEMS_B table?
    Thanks,
    Hussein

  • Possible to swap multiple partitions into a table?

    Hi,
We are using partition exchange to swap individual partitions into a table which is then backed up.
    This is being done one partition at a time.
    Is it possible to swap several partitions of a table in one go?
    Using Oracle 11.2.0.3,
    partitioned by date, one partition for each day.
    Is it possible, say, to move the last 7 days' partitions into the other table for backup using partition exchange?
    Thanks

    >
We are using partition exchange to swap individual partitions into a table which is then backed up.
    This is being done one partition at a time.
    Is it possible to swap several partitions of a table in one go?
    >
    No.
    If the goal is to back up the data why not just use expdp to export the data for all seven partitions at once? Then drop the partitions.
    If you only use one regular table for the exchange you would have to start with an empty table, swap one partition, backup the table, truncate the table, swap the next partition and so on.
    Or you could create a table with seven empty partitions and swap the 7 partitions one at a time and then backup the new partitioned table.
    Or you could create seven tables and swap each one with a partition and then backup all seven tables.
    Too many choices.
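For reference, exporting several partitions in one expdp run looks along these lines (a sketch; all names are illustrative):
    expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=last7.dmp LOGFILE=last7.log \
      TABLES=scott.sales:p_20120401,scott.sales:p_20120402,scott.sales:p_20120403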
