Two 'blunt' questions on Table Partitioning?

Version: 10.2, 11.2
I wasn't fortunate enough to work with partitioning in my DBA career, so I have 2 basic questions.
Question 1.
What is the most common form of partitioning you have come across?
Question 2.
Generally, would you prefer creating a global index or a local index for a partitioned table?
I mean: is a global index better suited to a particular type of partitioning or a particular column data type, etc.?

>
I wasn't fortunate enough to work with partitioning in my DBA career, so I have 2 basic questions.
Question 1.
What is the most common form of partitioning you have come across?
Question 2.
Generally, would you prefer creating a global index or a local index for a partitioned table?
I mean: is a global index better suited to a particular type of partitioning or a particular column data type, etc.?
>
Sounds like interview questions to me. What answers do you give to those questions?
If you are new to partitioning, I suggest you review the entire VLDB and Partitioning Guide:
http://docs.oracle.com/cd/B28359_01/server.111/b32024/toc.htm
That doc covers all aspects of partitioning including the different types of partitioning and types of indexes and when to use each of them.
Partitioning should only be used when it solves a demonstrated problem that can't be solved using traditional techniques.
Common forms are RANGE, LIST, HASH, and composites of those. REF partitioning is also used as needed.
The type of index to use depends on the data and the reason partitioning was used to begin with. Global indexes can degrade the performance of partition maintenance.
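For illustration, here is a minimal sketch of one very common case - RANGE partitioning by a date column - with one LOCAL index and one global (non-partitioned) index. The table and column names are made up:

CREATE TABLE sales_fact (
  sale_id    NUMBER       NOT NULL,
  sale_date  DATE         NOT NULL,
  cust_id    NUMBER,
  amount     NUMBER(12,2)
)
PARTITION BY RANGE (sale_date)
( PARTITION p_2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- LOCAL: one index segment per partition; it follows partition maintenance
-- (DROP/TRUNCATE PARTITION) without extra work.
CREATE INDEX sales_fact_cust_ix ON sales_fact (cust_id) LOCAL;

-- A regular (non-partitioned) index on a partitioned table is a global index;
-- partition maintenance on the table can leave it unusable unless it is
-- maintained (e.g. with UPDATE GLOBAL INDEXES) or rebuilt.
CREATE UNIQUE INDEX sales_fact_id_ux ON sales_fact (sale_id);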

Similar Messages

  • Question about table partitioning...

    Hello, all.  I'm using SQL 2012 Enterprise.  I have 8 very large tables (the largest two having 227M and 118M records, and the others between 11M-44M records).  For performance reasons, I'm considering partitioning the tables across multiple
    files/filegroups.  For my largest table (227M records), the data is spread across years 2011, 2012, and 2013 with 2013 having 104M records.  So naturally I'm considering partitioning on a Date column.  My question is should I go with four partitions
    (2011, 2012, 2013 and 2014 for new data) and still end up with a very large aggregation of data on the 2013 partition (104M), or should I further break down the 2013 partition into months, ending up with 12 partitions for 2013 alone, and then I'm OK with all
    of 2011 and 2012 on their own partitions.  Again, this is for one table.  I'd still like to partition the other 7 large tables.  In the end, I could end up with many, many partitions and hence many, many filegroups.  I'm interested in how
    others partition MULTIPLE large tables.  Can you share partition functions/schemes across tables?
    Any thoughts, your own personal experiences, etc. would be greatly appreciated. Also, can someone recommend a good book, article, blog, etc. on partitioning large databases?
    Thanks much in advance.
    Roz

    If you query against more than one partition, I doubt you will gain performance...
    -- Create partition functions
    CREATE PARTITION FUNCTION PF1(INT) AS RANGE RIGHT FOR VALUES (1, 2, 3);
    CREATE PARTITION FUNCTION PF2(INT) AS RANGE RIGHT FOR VALUES (1, 2);
    -- Create filegroups
    ALTER DATABASE testdb ADD FILEGROUP FG7;
    ALTER DATABASE testdb ADD FILEGROUP FG6;
    ALTER DATABASE testdb ADD FILEGROUP FG5;
    ALTER DATABASE testdb ADD FILEGROUP FG4;
    ALTER DATABASE testdb ADD FILEGROUP FG3;
    ALTER DATABASE testdb ADD FILEGROUP FG2;
    ALTER DATABASE testdb ADD FILEGROUP FG1;
    -- Create partition schemes
    CREATE PARTITION SCHEME PS1 AS PARTITION PF1
    TO (FG1, FG2, FG3, FG4);
    CREATE PARTITION SCHEME PS2 AS PARTITION PF2
    TO (FG5, FG6, FG7);
    GO
    CREATE VIEW [dbo].[partition_info]
    AS
    SELECT
    DB_NAME() AS 'DatabaseName'
    ,OBJECT_NAME(p.OBJECT_ID) AS 'TableName'
    ,p.index_id AS 'IndexId'
    ,CASE
    WHEN p.index_id = 0 THEN 'HEAP'
    ELSE i.name
    END AS 'IndexName'
    ,p.partition_number AS 'PartitionNumber'
    ,prv_left.value AS 'LowerBoundary'
    ,prv_right.value AS 'UpperBoundary'
    ,ps.name as PartitionScheme
    ,pf.name as PartitionFunction
    ,CASE
    WHEN fg.name IS NULL THEN ds.name
    ELSE fg.name
    END AS 'FileGroupName'
    ,CAST(p.used_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'UsedPages_MB'
    ,CAST(p.in_row_data_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'DataPages_MB'
    ,CAST(p.reserved_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'ReservedPages_MB'
    ,CASE
    WHEN p.index_id IN (0,1) THEN p.row_count
    ELSE 0
    END AS 'RowCount'
    ,CASE
    WHEN p.index_id IN (0,1) THEN 'data'
    ELSE 'index'
    END 'Type'
    FROM sys.dm_db_partition_stats p
    INNER JOIN sys.indexes i
    ON i.OBJECT_ID = p.OBJECT_ID AND i.index_id = p.index_id
    INNER JOIN sys.data_spaces ds
    ON ds.data_space_id = i.data_space_id
    LEFT OUTER JOIN sys.partition_schemes ps
    ON ps.data_space_id = i.data_space_id
    LEFT OUTER JOIN sys.partition_functions pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.destination_data_spaces dds
    ON dds.partition_scheme_id = ps.data_space_id
    AND dds.destination_id = p.partition_number
    LEFT OUTER JOIN sys.filegroups fg
    ON fg.data_space_id = dds.data_space_id
    LEFT OUTER JOIN sys.partition_range_values prv_right
    ON prv_right.function_id = ps.function_id
    AND prv_right.boundary_id = p.partition_number
    LEFT OUTER JOIN sys.partition_range_values prv_left
    ON prv_left.function_id = ps.function_id
    AND prv_left.boundary_id = p.partition_number - 1
    WHERE
    OBJECTPROPERTY(p.OBJECT_ID, 'IsMSShipped') = 0
    AND p.index_id IN (0,1)
    GO
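    Once the view exists, it can be queried per table (replace 'YourLargeTable' with one of your own tables), for example:
    SELECT * FROM dbo.partition_info WHERE TableName = 'YourLargeTable' ORDER BY PartitionNumber;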
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Question on table partition

    Hi Again OTN Peeps,
    I have a question about copying from one table to another table.
    The table that i need to copy is partitioned.
    I've been searching the internet for how to copy a table's data per partition, but I haven't found a clear explanation.
    Can anyone help me and guide me on how to copy table data per partition?
    Thanks Again!

    Found a way :)
    Thanks all
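    For reference, one common way to copy a partitioned table's data one partition at a time is the PARTITION extension clause on the source table. A minimal sketch with made-up table and partition names:

    -- copy a single partition's rows into another table
    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM source_table PARTITION (p_2013_01);
    COMMIT;

    -- alternatively, swap an entire partition segment with a standalone table
    -- (the table structures and indexes must match):
    -- ALTER TABLE source_table EXCHANGE PARTITION p_2013_01 WITH TABLE staging_table;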

  • Index Vs table partition

    I have a table that grows by 1 million rows per month, and the growth may increase in future. I currently have an index on a column which is frequently used in the WHERE clause. There is another column which contains the month, so it may be possible to create 12 partitions on it. I want to know what is suitable: is there any connection between an index and a table partition?
    Message was edited by:
    user459835

    I think the question is more about what type of queries are answered by this table.
    Is it that most of the time the results returned span several months?
    Is there any relation between the column you use in the WHERE clause and the data belonging to a particular month (or a range thereof)?
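    For illustration only, a sketch of how the two ideas can work together (hypothetical names): range-partition by the month column so single-month queries prune to one partition, and keep the frequently filtered column in a LOCAL index so each month has its own small index segment:

    CREATE TABLE txn_history (
      txn_id     NUMBER NOT NULL,
      txn_month  DATE   NOT NULL,   -- first day of the month
      account_id NUMBER NOT NULL,   -- the column used in most WHERE clauses
      amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (txn_month)
    ( PARTITION p_jan VALUES LESS THAN (TO_DATE('2008-02-01','YYYY-MM-DD')),
      PARTITION p_feb VALUES LESS THAN (TO_DATE('2008-03-01','YYYY-MM-DD')),
      PARTITION p_max VALUES LESS THAN (MAXVALUE)
    );

    CREATE INDEX txn_history_acct_ix ON txn_history (account_id) LOCAL;

    -- A query filtering on both columns touches one partition and only that
    -- partition's index segment:
    SELECT SUM(amount) FROM txn_history
    WHERE  txn_month  = TO_DATE('2008-01-01','YYYY-MM-DD')
    AND    account_id = :acct;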

  • Long running table partitioning job

    Dear HANA grus,
    I've just finished a table partitioning job for CDPOS (change document items), with 4 partitions by hash on 3 columns.
    The total data volume is around 340 GB and the table size was 32 GB!
    (The migration job was done without disabling change documents, so I am currently deleting data from the table with RSCDOK99.)
    Before partitioning, the data volume of the table was around 32 GB.
    After partitioning, the size has changed to 25 GB.
    It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
    (It is the QA DB, so there are fewer complaints.)
    I suspect I might not be able to do this in the production DB.
    Does anyone have any idea for accelerating this task? (This is the fastest DBMS, HANA!)
    Or do you have any plan for online table partitioning functionality? (To the HANA development team)
    Any comments would be appreciated.
    Cheers,
    - Jason

    Jason,
    looks like we're cross talking here...
    What was your rationale to partition the table in the first place?
           => To reduce the deletion time for CDPOS. (As I mentioned, it was almost 10% of the whole data volume, so I wanted to reduce the deletion time via the benefits of partitioning, such as partition pruning.)
    Ok, I see where you're coming from, but did you ever try out if your idea would actually work?
    As deletion of data is heavily related to locating the records to be deleted, creating an index would probably have been the better choice.
    Thinking about it... you want to get rid of 10% of your data and in order to speed the overall process up, you decide to move 100% of the data into sets of 25% of the data - equally holding their 25% share of the 10% records to be deleted.
    The deletion then should run along these 4 sets of 25% of data.
    It's surely me, but where is the speedup potential here?
    How many unloads happened during the re-partitioning?
           => It was fully loaded into memory before I partitioned the table myself (from HANA Studio).
    I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
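    A minimal sketch of that check - the column names are taken from the standard M_CS_UNLOADS monitoring view and may differ by revision:

    SELECT unload_time, schema_name, table_name, reason
    FROM   m_cs_unloads
    WHERE  table_name = 'CDPOS'
    AND    unload_time >= ADD_DAYS(CURRENT_TIMESTAMP, -1)
    ORDER  BY unload_time;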
    What do the now longer-running SQL statements look like?
           => As I mentioned, select/delete times increased almost twofold.
    That's not what I asked.
    Post the SQL statement text that was taking longer.
    What are the three columns you picked for partitioning?
           => MANDANT, OBJECTCLAS, TABNAME (QA has 2 clients and each of them has nearly the same number of rows in the table).
    Why those? Because these are the primary key?
    I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
    In that case the partition pruning cannot work and all partitions have to be searched.
    How did you come up with 4 partitions? Why not 13, 72 or 213?
           => I thought each partition's size would be 8 GB (32 GB / 4) if they were divided equally (just a simple thought), and 8 GB is almost the same size as the other largest top-20 tables in the HANA DB.
    Alright, so basically that was arbitrary.
    Regarding the last comment of your reply: most people partition their existing large tables to get the benefits of partitioning (just like me). I think your comment applies to newly inserted data.
    Well, not sure what "most people" would do.
    HASH partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
    - Lars

  • How to write two triggers on same table how it works?

    Hello sir..
    I have to write two triggers on the same table for auditing different columns from different pages (maybe different modules).
    I will have an audit table into which I will insert data such as (user_id, module_id, column_name, old_col_val, new_col_val, timestamp).
    Now different users from different pages will update the data in the same table, maybe even the same columns!
    If we write the triggers directly, we will not be able to know which column was updated from which page.
    My question is: how can we make the triggers fire based on the page?

    A trigger is executed whenever the table is inserted into / updated / deleted from (depending on the trigger definition). It won't know what 'page' caused the operation, so you cannot prepare a trigger per page.
    In order to fulfill your need, you need some way to tell the trigger where you are. There are many ways to accomplish this. Some possible methods are (please check the documentation for details):
    DBMS_SESSION.SET_IDENTIFIER
    DBMS_APPLICATION_INFO.SET_MODULE
    For example, you can call DBMS_SESSION.SET_IDENTIFIER to set an ID from your page, and then call SYS_CONTEXT in the trigger to read the ID back.
    In the page:
    exec dbms_session.set_identifier('Page1');
    In the trigger:
    pageid := sys_context('USERENV', 'CLIENT_IDENTIFIER');
    Note that if you use a connection pool, you may need to properly reset the session information before returning the connection, in order to avoid messing up the session information when the connection is used next time.
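    Putting the pieces together, a minimal sketch - the table, column, and page names are made up, and the audit columns follow the layout described in the question:

    CREATE OR REPLACE TRIGGER trg_audit_my_table
    AFTER UPDATE OF col1 ON my_table
    FOR EACH ROW
    DECLARE
      v_page VARCHAR2(64);
    BEGIN
      -- read back whatever the page set via DBMS_SESSION.SET_IDENTIFIER
      v_page := SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER');
      IF v_page = 'Page1' THEN
        INSERT INTO audit_table
          (user_id, module_id, column_name, old_col_val, new_col_val, audit_ts)
        VALUES
          (USER, v_page, 'COL1', :old.col1, :new.col1, SYSTIMESTAMP);
      END IF;
    END;
    /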

  • Two triggers on same table

    Hi Friend.
    I have to write two triggers on the same table for auditing different columns for different users (maybe different modules).
    I will have an audit table into which I will insert data such as (user_id, module_id, column_name, old_col_val, new_col_val, timestamp).
    Now different users from different modules will update the data in the same table, maybe even the same columns, from different front-end forms!
    If we write the triggers directly, we will not be able to know which column was updated by which user.
    My question is: in this case, how can we control the triggers so they fire differently?

    You can use a WHEN clause to fire a trigger only when some condition is true - you can check the user as well.
    Look at a simple example:
    - suppose we have two users, US1 and US2:
    C:\>sqlplus sys as sysdba
    SQL*Plus: Release 10.2.0.1.0 - Production on Pn Gru 6 13:14:22 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Enter password:
    Connected to:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL> create user us1 identified by us1;
    User created.
    SQL> create user us2 identified by us2;
    User created.
    SQL> grant connect, resource to us1, us2;
    Grant succeeded.
    SQL> grant create public synonym to us1, us2;
    Grant succeeded.
    And suppose we have a table with three columns plus an audit table:
    SQL> connect us1
    Enter password:
    Connected.
    SQL> create table tab123(
      2  col1 number,
      3  col2 number,
      4  col3 number);
    Table created.
    SQL> create table audit_tab123(
      2  username varchar2(100),
      3  col1 number,
      4  col2 number,
      5  col3 number );
    Table created.
    SQL>  grant select, update, insert on tab123 to us2;
    Grant succeeded.
    SQL>  grant select, update, insert on audit_tab123 to us2;
    Grant succeeded.
    SQL> create public synonym tab123 for tab123;
    Synonym created.
    SQL> insert into tab123 values( 1, 1, 1 );
    1 row created.
    SQL> commit;
    Commit complete.
    We want a trigger that is fired only by user US1 and only after an update of COL1 and COL2
    (COL3 is ignored):
    SQL> connect us1/us1
    Connected.
    SQL> CREATE OR REPLACE TRIGGER Trig_123_US1
      2  AFTER UPDATE OF col1, col2 ON tab123
      3  FOR EACH ROW
      4  WHEN ( user = 'US1' )
      5  BEGIN
      6    INSERT INTO audit_tab123( username, col1, col2 )
      7    VALUES( user, :new.col1, :new.col2 );
      8 END;
    SQL> /
    Trigger created.
    And we want a second trigger that is fired only by user US2 and only after an update of COL2 and COL3
    (COL1 is ignored):
    SQL> connect us1/us1
    Connected.
    SQL> CREATE OR REPLACE TRIGGER Trig_123_US2
      2  AFTER UPDATE OF col2, col3 ON tab123
      3  FOR EACH ROW
      4  WHEN ( user = 'US2' )
      5  BEGIN
      6    INSERT INTO audit_tab123( username, col2, col3 )
      7    VALUES( user, :new.col2, :new.col3 );
      8  END;
      9  /
    Trigger created.
    And now let's test our triggers:
    SQL> connect us1/us1
    Connected.
    SQL> update tab123 set col1 = 22;
    1 row updated.
    SQL> update tab123 set col2 = 22;
    1 row updated.
    SQL> update tab123 set col3 = 22;
    1 row updated.
    SQL> commit;
    Commit complete.
    SQL> select * from audit_tab123;
    USERNAME         COL1       COL2       COL3
    US1                22          1
    US1                22         22
    SQL> connect us2/us2
    Connected.
    SQL> update tab123 set col1 = 333;
    1 row updated.
    SQL> update tab123 set col2 = 333;
    1 row updated.
    SQL> update tab123 set col3 = 333;
    1 row updated.
    SQL> commit
      2  ;
    Commit complete.
    SQL> select * from us1.audit_tab123;
    USERNAME         COL1       COL2       COL3
    US1                22          1
    US1                22         22
    US2                          333         22
    US2                          333        333
    As you see, only the intended trigger fires in each case: the first trigger only for user US1 and columns COL1 and COL2,
    and the second trigger only for user US2 and only after an update of COL2 and COL3.
    I hope this will help.

  • Table Partition on daily basis in oracle 10g

    I want to create partitions based on SYSDATE on a daily basis.
    There will be 8 partitions. Every day data will be loaded into this table, and every day the data that is 8 days old will be truncated.
    CREATE TABLE CUST_WALLET_BALANCE_7DAYS
    ( ID  VARCHAR2(250),
       A_DATE  VARCHAR2(11),
       LAST_PROCESS_DATE DATE,
      DD_OF_PROCESS_DATE  NUMBER(2),
      CONSTRAINT CUST_WALLET_BALANCE_7DAYS_PK PRIMARY KEY (ID,A_DATE))
      PARTITION BY RANGE (DD_OF_PROCESS_DATE)
      ( PARTITION DAY1 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE)),'DD')),
        PARTITION DAY2 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE-1)),'DD')),
        PARTITION DAY3 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE-2)),'DD')),
        PARTITION DAY4 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE-3)),'DD')),
        PARTITION DAY5 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE-4)),'DD')),
        PARTITION DAY6 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE-5)),'DD')),
        PARTITION DAY7 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE-6)),'DD')),
        PARTITION DAY8 VALUES LESS THAN (TO_NUBER(TO_CHAR(TRUNC(SYSDATE-7)),'DD'))
    This won't work out, so please suggest a better solution.

    Original thread here: Table Partition on daily basis in oracle 10g
    Please do not start duplicate questions for the same topic.
    Locking this thread
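    For reference, one 10g-compatible pattern for a rolling 8-day window is to pre-compute a day bucket (Julian day modulo 8) in the loader and LIST-partition on it; the names and the bucket column below are assumptions, not part of the original design:

    CREATE TABLE cust_wallet_balance_7days (
      id                VARCHAR2(250),
      a_date            VARCHAR2(11),
      last_process_date DATE,
      day_bucket        NUMBER(1),  -- MOD(TO_NUMBER(TO_CHAR(last_process_date,'J')), 8)
      CONSTRAINT cust_wallet_balance_7days_pk PRIMARY KEY (id, a_date)
    )
    PARTITION BY LIST (day_bucket)
    ( PARTITION p0 VALUES (0), PARTITION p1 VALUES (1),
      PARTITION p2 VALUES (2), PARTITION p3 VALUES (3),
      PARTITION p4 VALUES (4), PARTITION p5 VALUES (5),
      PARTITION p6 VALUES (6), PARTITION p7 VALUES (7)
    );

    -- Before each daily load, empty the bucket that is about to be reused
    -- (it holds the 8-day-old data); the global primary key index must be maintained:
    -- ALTER TABLE cust_wallet_balance_7days TRUNCATE PARTITION p3 UPDATE GLOBAL INDEXES;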

  • Table Partitioning in Oracle 9i

    Hi all,
    I have a question on partitioning in Oracle 9i.
    I have a parent table with primary key A1 and attribute A2. A2 is not a primary key, but I would like to partition the table based on this attribute. I have a child table with attribute B1 being a foreign key to A1.
    I wish to perform a data purging on the parent and child table. I'll purge the parent table based on A2, but for the child table, it will be inefficient if I delete all records in child table where parent.A1 = child.B1. Should I add a new attribute A2 to the child table, partition the child table based on this attribute or is there a better way to do it?
    Thanks in advance for all replies.
    Cheers,
    Bernard

    Bernard
    Right 100K in the parent...but how many in the child ?
    I guess it comes back to what I said earlier...you can either take the hit on the cascaded delete to get out the records on the child table or you can denormalise the column down onto the child table in order to partition by it.
    I'm building a Data Warehouse currently and we're using the denormalise approach on a couple of tables in order to allow them to be equipartitioned and enable easier partition management and DML operations as you've indicated....but our tables have 100's of millions of rows in them so we really need to do that for manageability.
    100K records in the parent - provided the ratio to the child is not such that on average each deleted parent has hundreds of children - is probably not too onerous, especially for a monthly batch process. The question there would be: how much time do you have to do this at the end of the month? I'd probably suggest you set up a quick test and benchmark it with, say, 10K records as a representative sample (you can do all 100K if you have time/space), then assess that load/time against your month-end window. If it's reasonably quick, there is no need to compromise your design.
    You should also consider whether the 100K is going to remain consistent over time or whether it is going to grow rapidly, in which case that would sway you towards adding the denormalisation-for-partitioning approach at the outset.
    HTH
    Jeff
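    As a sketch of the denormalise-and-equipartition approach described above (illustrative names only; 9i has no reference partitioning, so the partition key A2 is copied down to the child):

    CREATE TABLE parent_t (
      a1 NUMBER PRIMARY KEY,
      a2 DATE   NOT NULL
    )
    PARTITION BY RANGE (a2)
    ( PARTITION p_2002 VALUES LESS THAN (TO_DATE('2003-01-01','YYYY-MM-DD')),
      PARTITION p_2003 VALUES LESS THAN (TO_DATE('2004-01-01','YYYY-MM-DD')),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );

    CREATE TABLE child_t (
      b_id NUMBER PRIMARY KEY,
      b1   NUMBER NOT NULL REFERENCES parent_t (a1),
      a2   DATE   NOT NULL   -- denormalised copy of parent_t.a2
    )
    PARTITION BY RANGE (a2)
    ( PARTITION p_2002 VALUES LESS THAN (TO_DATE('2003-01-01','YYYY-MM-DD')),
      PARTITION p_2003 VALUES LESS THAN (TO_DATE('2004-01-01','YYYY-MM-DD')),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );

    -- Purging a period then becomes partition-level DDL, child first:
    -- ALTER TABLE child_t  DROP PARTITION p_2002;
    -- ALTER TABLE parent_t DROP PARTITION p_2002;
    -- (depending on the FK definition, the constraint may need to be
    --  disabled around the DROP on the parent)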

  • What is needed for sorting on two fields in a table control

    Hi Everybody
    I am going for certification in a couple of days. I need some help and was hoping that you guys could help.
    What is needed for sorting on two fields in a table control?
         One sorted table and two processing blocks
         Two standard tables and one processing blocks
         Two standard tables and two processing
    Which one is correct?
    //Script

    Hi Kimallan
    I am not sure what is meant by a "processing block". However, it seems the question wants the original table order to be preserved. If so, as far as I understand the problem, we need:
    itab_proxy[] = itab_main[] .
    "two standard tables"
    SORT itab_proxy BY field1 field2 .
    If we have a sorted table, then it is always sorted by its keys. So, the question seems to become obsolete for that option.
    Hope I've understood correctly...
    Regards
    *--Serdar
    [email protected]

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6,
    the script below (SQLCMD mode, set DataDrive  & LogDrive variables  for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across
    4 files groups, an empty partition, file group and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations, the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical,
    location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (
    0,
    15,
    30,
    45,
    60
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data but it can be dropped. Although the system views are reporting the data in File Group 2, it still physically resides in File Group 3 and isn’t moved until the index is rebuilt. The RANGE RIGHT function means
    the left file group (File Group 2) is retained when splitting ranges.
    RANGE LEFT would have retained the data in File Group 3 where it already resided, no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions)
    on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (
    -1,
    14,
    29,
    44,
    59
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a ‘Sliding Window’ if the same file group is used for all partitions; when they are created and dropped across multiple file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned and a full index rebuild
    might be an expensive operation. I’m not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (multiple files) for all partitions
    within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE and system views would accurately reflect the physical location of data. 
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE assuming a typical ascending partitioning key, and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
    investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple
    file groups and apologize :-)

    Hi Paul Brewer,
    Thanks for your post and glad to hear that the issue is resolved. It is kind of you to post a reply to share your solution; that way, other community members can benefit from it.
    Regards.
    Sofiya Li
    TechNet Community Support

  • Introduction of Oracle Table Partitions into PeopleSoft HRMS environment

    I would like to pose a general question and see if anyone has found any published advice / suggestions from PeopleSoft or Oracle on this. I believe that Oracle table partitioning isn't supported through the PeopleTools Application Designer functionality, most likely for platform independence. However, we were thinking about implementing table partitioning for performance and for the ability to refresh test instances with subsets of data instead of the entire database.
    I know that this would be a substantial effort, but I was wondering if anyone had any documentation on this type of implementation. I've read some articles from David Kurtz on the subject, and it sounds like these were all custom jobs for individual clients. I was looking for something more generic on this practice from PeopleSoft or Oracle...
    Regards,
    Jay

    Thanks for the article Nicolas. I will add that to my collection, good reference piece.
    I think you grasped the gist of the query, which was: I know that putting partitioning into a PeopleSoft application is going to be highly specific to the client and application that you are running, but what I was looking for was something like a baseline guide for implementing partitioning in a PeopleSoft application as a whole.
    In other words, something like notification that the Application Designer panels would be affected since they don't have the ability to manage partitions; therefore, any changes to tables that use partitioning would need to be maintained at the database level and could no longer use the DDL generated from PeopleTools Application Designer. Other considerations would be, for example, a list of tables that are candidates for partitioning based on the application, in my case HRMS, and maybe suggestions on which column should be used for partitioning, etc. All of which are touched on in the article you identified about putting partitioning in at the database level for a generic application.
    Thanks for your help, it is much appreciated...
    Jay

  • Suggestions for table partition for existing tables.

    I have a table as below. This table contains a huge amount of data and has many child tables. I am planning to do 'Reference Partitioning' on it.
    create table PROMOTION_DTL
    (
      PROMO_ID              NUMBER(10) not null,
      EVENT                 VARCHAR2(6),
      PROMO_START_DATE      TIMESTAMP(6),
      PROMO_END_DATE        TIMESTAMP(6),
      PROMO_COST_START_DATE TIMESTAMP(6),
      EVENT_CUT_OFF_DATE    TIMESTAMP(6),
      REMARKS               VARCHAR2(75),
      CREATE_BY             VARCHAR2(50),
      CREATE_DATE           TIMESTAMP(6),
      UPDATE_BY             VARCHAR2(50),
      UPDATE_DATE           TIMESTAMP(6)
    );
    alter table PROMOTION_DTL
      add constraint PROMOTION_DTL_PK primary key (PROMO_ID);
    alter table PROMOTION_DTL
      add constraint PROMO_EVENT_FK foreign key (EVENT)
      references SP_PROMO_EVENT_MST (EVENT);
    -- Create/Recreate indexes
    create index PROMOTION_IDX1 on PROMOTION_DTL (PROMO_ID, EVENT);
    create unique index PROMOTION_PK on PROMOTION_DTL (PROMO_ID);
    -- Grant/Revoke object privileges
    grant select, insert, update, delete on PROMOTION_DTL to SCHEMA_RW_ROLE;
    I would like to partition this table. Most of the queries contain the following conditions:
    promo_end_date >=   SYSDATE
    and
    (event = :input_event OR
    (:input_Start_Date <= promo_end_date
    AND promo_start_date <= :input_End_Date))
    At any time the promotion can be closed by updating PROMO_END_DATE.
    Interval partitioning on PROMO_END_DATE is not possible as PROMO_END_DATE is a nullable and updatable field.
    I am new to table partitioning.
    Any suggestions are welcome...

    DO NOT POST THE SAME QUESTION IN MULTIPLE FORUMS PLEASE!
    Suggestions for table partition of existing tables
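    For reference, a sketch of the reference-partitioning idea mentioned in the question (11g syntax; the child table here is hypothetical, and the foreign key must be NOT NULL for PARTITION BY REFERENCE). Because PROMO_END_DATE is updatable, row movement has to be enabled:

    CREATE TABLE promotion_dtl (
      promo_id         NUMBER(10) PRIMARY KEY,
      event            VARCHAR2(6),
      promo_start_date TIMESTAMP(6),
      promo_end_date   TIMESTAMP(6)
    )
    PARTITION BY RANGE (promo_end_date)
    ( PARTITION p_hist   VALUES LESS THAN (TIMESTAMP '2012-01-01 00:00:00'),
      PARTITION p_active VALUES LESS THAN (MAXVALUE)  -- open-ended / NULL end dates land here
    );
    ALTER TABLE promotion_dtl ENABLE ROW MOVEMENT;

    CREATE TABLE promotion_item (
      item_id  NUMBER(10) PRIMARY KEY,
      promo_id NUMBER(10) NOT NULL,
      CONSTRAINT promotion_item_fk FOREIGN KEY (promo_id)
        REFERENCES promotion_dtl (promo_id)
    )
    PARTITION BY REFERENCE (promotion_item_fk);
    ALTER TABLE promotion_item ENABLE ROW MOVEMENT;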

  • Foreign keys at the table partition level

    Anyone know how to create and / or disable a foreign key at the table partition level? I am using Oracle 11.1.0.7.0. Any help is greatly appreciated.

    Hmmm. I was under the impression that Oracle usually ignores indexes on columns with mostly unique and semi-unique values and prefers to do full-table scans instead, on the (questionable) theory that it takes almost as much time to find one or more semi-unique entries in an index with a billion unique values as it does to just scan through three billion fields. I tend to classify that design choice in the same category as Microsoft's decision to start swapping RAM out to virtual memory on a PC with a gig of RAM and 400 MB of unused physical RAM, on the twisted theory that it's better to make the user wait while it needlessly thrashes the swap file now than to risk being unable to do it later (apparently, a decision that has its roots in the 4 MB Win 3.1 era and somehow survived all the way to XP).

  • Custom Design rules on table partitions

    Hi
    I need to create several custom design rules at the table partition level.
    For example, one of the rules is:
    for all table partitions,
      if a table partition name begins with M
        then it should not be compressed
        and also should not be in a tablespace called xyz
    How do I go about enforcing this rule using design rules?

    Hi,
    here is a simple example; you can improve it easily. In fact you have two rules, and it's better to create two separate rules.
    var ruleMessage;
    var errType;
    var table;
    //define the function
    function checkPartitions(){
     ruleMessage = "";
     model = table.getDesignPart();
     tp = model.getStorageDesign().getStorageObject(table.getObjectID());
     result = true;
     if(tp!=null){
      partitions = tp.getPartitions().toArray();
      for(var i=0;i<partitions.length;i++){
       partition = partitions[i];
       if(partition.getName().startsWith("M") && "YES".equals(partition.getDataSegmentCompression())){
        result = false;
        ruleMessage = "Partition " + partition.getName()+" for table "+tp.getLongName()+ " cannot be compressed";
        break;
       }
       tablespace = partition.getTableSpace();
       if(tablespace!=null && "xyz".equals(tablespace.getName()) && partition.getName().startsWith("M")){
        result = false;
        ruleMessage = "Partition " + partition.getName()+" for table "+tp.getLongName()+ " cannot be in tablespace xyz";
        break;
       }
      }
     }
     return result;
    }
    //call the function
    checkPartitions();
    You should define it for the "Table" object, and your physical model should be open.
    Philip
    Edited by: Philip Stoyanov on Jan 10, 2012 4:53 AM
