NULL partition key in RANGE partition
All,
This is about partitioning a table using the RANGE partition method when the partition key contains NULLs. How do I handle this situation? There is no DEFAULT partition for RANGE partitioning, though one exists for LIST partitioning. Will rows with a NULL partition key fall into the MAXVALUE partition? Seeking your guidance.
Thanks,
...
Yes - NULLs go into the MAXVALUE partition. In range partitioning a NULL partition key sorts higher than any non-NULL value, so the row maps to the MAXVALUE partition (without one, the insert fails with ORA-14400).
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm#sthref2590
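A quick way to verify this yourself (a sketch - the table and partition names are invented, not from the thread):

```sql
-- Range-partitioned table with a MAXVALUE catch-all partition
CREATE TABLE t_null_demo (id NUMBER, dt DATE)
PARTITION BY RANGE (dt)
(PARTITION p_2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
 PARTITION p_max  VALUES LESS THAN (MAXVALUE));

-- A NULL partition key sorts above every non-NULL value...
INSERT INTO t_null_demo VALUES (1, NULL);

-- ...so the row lands in the MAXVALUE partition
SELECT * FROM t_null_demo PARTITION (p_max);

-- Without p_max, the INSERT above would fail with ORA-14400
```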
Thanks
Paul
Similar Messages
-
Modify HUGE HASH partition table to RANGE partition and HASH subpartition
I have a table with 130,000,000 rows hash partitioned as below
----RANGE PARTITION--
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY RANGE (YRMO_NBR)(
PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE));
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
Data: -
INSERT INTO TEST_PART
VALUES ('2000',200001,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200009,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
COMMIT;
Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easier if I create a range partition on the YRMO_NBR field and make the current hash partitioning a sub-partition.
How do I change the current partition of the table from HASH partition to RANGE partition and a sub-partition (HASH) without losing the data and existing indexes?
The table after restructuring should look like the one below
COMPOSITE PARTITION -- RANGE PARTITION & HASH SUBPARTITION --
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY RANGE (YRMO_NBR)
SUBPARTITION BY HASH (C_NBR) (
PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
Please advise.
Thanks in advance.
Sorry for the confusion in the first part, where I had given a RANGE partition instead of a HASH partition. Please read as follows:
I have a table with 130,000,000 rows hash partitioned as below
----HASH PARTITION--
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY HASH (C_NBR)
PARTITIONS 2
STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
Data: -
INSERT INTO TEST_PART
VALUES ('2000',200001,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200009,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
COMMIT;
Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easier if I create a range partition on the YRMO_NBR field and make the current hash partitioning a sub-partition.
How do I change the current partition of the table from hash partition to range partition and a sub-partition (hash) without losing the data and existing indexes?
The table after restructuring should look like the one below
COMPOSITE PARTITION -- RANGE PARTITION & HASH SUBPARTITION --
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY RANGE (YRMO_NBR)
SUBPARTITION BY HASH (C_NBR) (
PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
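For reference, one offline way to get from the hash-partitioned table to this composite layout is CREATE TABLE AS SELECT (a sketch - it needs an outage window and the index must be recreated; online redefinition via DBMS_REDEFINITION is the alternative if no outage is possible):

```sql
-- Build the composite-partitioned copy directly from the existing table
CREATE TABLE TEST_PART_NEW
PARTITION BY RANGE (YRMO_NBR)
SUBPARTITION BY HASH (C_NBR) SUBPARTITIONS 2
(PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
 PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
 PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
 PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE))
AS SELECT * FROM TEST_PART;

-- Swap names; keep the old table until the copy is verified
ALTER TABLE TEST_PART RENAME TO TEST_PART_OLD;
ALTER TABLE TEST_PART_NEW RENAME TO TEST_PART;

-- The index must be recreated (here as LOCAL on the new layout);
-- the old index still exists on the renamed table, so drop it first
DROP INDEX TEST_PART_IX_001;
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID) LOCAL;
```

Once the table is range partitioned, old months can be purged cheaply with ALTER TABLE TEST_PART DROP PARTITION TEST_PART_200009 instead of DELETE. Note that rows with a NULL YRMO_NBR land in the MAXVALUE partition.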
Please advise.
Thanks in advance -
Unique key on range-partitioned table
Hi,
We are using a composite range-hash interval partitioned table
Uses an index - trying to make this have the same tablespace as the partitions, i.e. LOCAL, but it's not liking it:
alter table RETAILER_TRANSACTION_COMP_POR
add constraint RETAILER_TRANSACTION_COMP_PK primary key (DWH_NUM)
using index
LOCAL
ora-14039: partitioning columns must form a subset of key columns of a unique index
Without LOCAL it is fine, but then it doesn't have the same tablespace as the partitions, and I don't want to make this column part of the partition key.
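For reference, a sketch of the two standard workarounds for ORA-14039 (names are taken from this post; the hash partition count is invented):

```sql
-- Option 1: enforce uniqueness with a GLOBAL index; a global index
-- may be unique without containing the table's partition key
ALTER TABLE retailer_transaction_comp_por
  ADD CONSTRAINT retailer_transaction_comp_pk PRIMARY KEY (dwh_num)
  USING INDEX GLOBAL PARTITION BY HASH (dwh_num) PARTITIONS 8;

-- Option 2: a LOCAL unique index is only possible if the partitioning
-- column(s) are added to the key (acceptable only if that combination
-- is still a valid uniqueness rule for the data):
-- ALTER TABLE retailer_transaction_comp_por
--   ADD CONSTRAINT retailer_transaction_comp_uk
--   UNIQUE (dwh_num, <partition_key_col>) USING INDEX LOCAL;
```

Neither option gives per-partition tablespace placement for a unique index on DWH_NUM alone, because such an index cannot be LOCAL.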
Table is range partitioned - this is just a UK to prevent duplicates.
[oracle@localhost ~]$ oerr ora 14039
14039, 00000, "partitioning columns must form a subset of key columns of a UNIQUE index"
// *Cause: User attempted to create a UNIQUE partitioned index whose
// partitioning columns do not form a subset of its key columns
// which is illegal
// *Action: If the user, indeed, desired to create an index whose
// partitioning columns do not form a subset of its key columns,
// it must be created as non-UNIQUE; otherwise, correct the
// list of key and/or partitioning columns to ensure that the index'
// partitioning columns form a subset of its key columns -
Creating Local partitioned index on Range-Partitioned table.
Hi All,
Database Version: Oracle 8i
OS Platform: Solaris
I need to create a local partitioned index on a column of a range-partitioned table having 8 million records. Is there any way to perform this in the fastest way?
I think we can use Nologging, Parallel, Unrecoverable options.
But also considering undo and redo, and mainly the time required to perform this activity.... which is the best method?
Please guide me to perform it in fastest way and also online !!!
-Yasser
YasserRACDBA wrote:
3. CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) LOCAL
NOLOGGING PARALLEL (DEGREE 14) online;
4. Analyze the table with cascade option.
Do you think this is the only method to perform the operation in the fastest way? The table contains 8 million records and it's a production database.
Yasser,
if all partitions should go to the same tablespace then you don't need to specify it for each partition.
In addition you could use the "COMPUTE STATISTICS" clause then you don't need to analyze, if you want to do it only because of the added index.
If you want to do it separately, then analyze only the index. Of course, if you want to analyze the table, too, your approach is fine.
So this is how the statement could look:
CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) TABLESPACE CS_BILLING LOCAL NOLOGGING PARALLEL (DEGREE 14) ONLINE COMPUTE STATISTICS;
If this operation exceeds a particular time window... can I kill the process? What is the worst that will happen if I kill this process?
Killing an ONLINE operation is a bit of a mess... You're already quite on the edge (parallel, online, possibly compute statistics) with this statement. The ONLINE operation creates an IOT table to record the changes made to the underlying table during the build operation. All of this needs to be cleaned up if the operation fails or the process dies/gets killed. This cleanup is supposed to be performed by the SMON process, if I remember correctly. I once ran into trouble in 8i after such an operation failed; I may even have got an ORA-00600 when I tried to access the table afterwards.
It's not unlikely that your 8.1.7.2 will give you trouble with this kind of statement, so be prepared.
How much time might it take? (Just to be on the safer side.)
The time it takes to scan the whole table (if the information can't be read from another index), plus the sorting operation, plus writing the segment, plus any wait time due to concurrent DML / locks, plus the time to process the table that holds the changes made to the table while the index was being built.
You can try to run an EXPLAIN PLAN on your create index statement which will give you a cost indication if you're using the cost based optimizer.
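For example (a sketch; in 8i DBMS_XPLAN is not available, so query PLAN_TABLE directly):

```sql
EXPLAIN PLAN FOR
CREATE INDEX csb_client_code ON cs_billing (client_code) LOCAL;

-- The COST column gives a rough indication of the work involved
SELECT id, operation, options, object_name, cost
FROM   plan_table
ORDER  BY id;
```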
Please suggest any other way to perform this in the fastest way.
Since you will need to sort 8 million rows, if you have sufficient memory you could bump up the SORT_AREA_SIZE for your session temporarily to sort as much as possible in RAM.
-- Use e.g. 100000000 to allow a 100M SORT_AREA_SIZE
ALTER SESSION SET SORT_AREA_SIZE = <something_large>;
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How to calculate the partition size in range partition, by value
hi all,
The primary key is number for the table.
I have 5543201 records in a table.
If I want to break them in 7 equal partition as per the primary key,how do i achieve this?
rgds
s
I probably don't understand it... but I would say:
5543201/7=791886
range 1 : 0 .... 791886 (1 x 791886)
range 2 : 791887 .... 1583772 (2 x 791886)
range 3 : 1583773 .... 2375658 (3 x 791886)
range 4 : ......
However, I doubt whether this is a smart approach to partitioning.
I would say that partitioning on a date, or some key value, is more applicable.
But hey...I don't know the further requirements.
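If equal-height partitions really are the requirement, the boundaries can also be computed from the data rather than by division (a sketch - the table and column names are placeholders):

```sql
-- Highest key value in each of 7 equal-sized buckets;
-- these become the VALUES LESS THAN boundaries (+1 on each)
SELECT bucket, MAX(pk_col) AS high_value
FROM   (SELECT pk_col,
               NTILE(7) OVER (ORDER BY pk_col) AS bucket
        FROM   my_table)
GROUP  BY bucket
ORDER  BY bucket;
```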
Cheers Martijn -
ORA-14402: updating partition key column would cause a partition change
Hi,
When I try to execute an update statement updating the date value emp_det from 11-oct-2010 to 12-nov-2010,
Oracle throws an error:
ORA-14402
updating partition key column would cause a partition change
I think that this is because emp_det is a partitioning key of a partitioned table.
Oracle documentation says that
"UPDATE will fail if you change a value in the column that would move the
row to a different partition or subpartition, unless you enable row
movement" .
alter table t enable row movement;
I did not understand what is meant by "enable row movement".
I cannot drop the partitions and recreate them after updating the table, and I also don't have the proper privileges for the enable row movement syntax.
Given the lack of privileges, how do I solve this without row movement and without recreating partitions?
Can this be done by a developer, or is there any other way to execute the update in this case? It's urgent.. please help..
thanks in advance..
By
Sivaraman
Edited by: kn_sivaraman on Nov 1, 2010 2:32 AM
kn_sivaraman wrote:
I did not understand what is meant by "enable row movement".
Each partition in a partitioned table is a physically separate segment. Assume you have a row that belongs to partition A, stored in segment A, and you change the row's partitioning column to a value that belongs to partition B - you have an issue, since the updated row can't be stored in segment A anymore. By default such an update is not allowed and you get an error. You can enable row movement and Oracle will move the row to the target partition:
SQL> CREATE TABLE SALES_LIST(
2 SALESMAN_ID NUMBER(5,0),
3 SALESMAN_NAME VARCHAR2(30),
4 SALES_STATE VARCHAR2(20),
5 SALES_AMOUNT NUMBER(10,0),
6 SALES_DATE DATE
7 )
8 PARTITION BY LIST(SALES_STATE)
9 (
10 PARTITION SALES_WEST VALUES('California', 'Hawaii'),
11 PARTITION SALES_EAST VALUES('New York', 'Virginia', 'Florida'),
12 PARTITION SALES_CENTRAL VALUES('Texas', 'Illinois'),
13 PARTITION SALES_OTHER VALUES(DEFAULT)
14 )
15 /
Table created.
SQL> insert
2 into sales_list
3 values(
4 1,
5 'Sam',
6 'Texas',
7 1000,
8 sysdate
9 )
10 /
1 row created.
SQL> update sales_list
2 set sales_state = 'New York'
3 where sales_state = 'Texas'
4 /
update sales_list
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change
SQL> alter table sales_list enable row movement
2 /
Table altered.
SQL> update sales_list
2 set sales_state = 'New York'
3 where sales_state = 'Texas'
4 /
1 row updated.
SQL> SY. -
Best way to change partition key on existing table
Hi,
Using Oracle 11.2.0.3 on AIX.
We have a table of about 800 million rows and 120 GB in size.
Want to try copies of this table to evaluate different partitioning strategies.
What is the quickest way to do this?
Would have liked to, say, Data Pump the table to disk and Data Pump import the data into a new table - but do the tables need to be of the same format?
Thanks
>
Using Oracle 11.2.0.3 on AIX.
We have a table of about 800 million rows and 120 GB in size.
Want to try copies of this table to evaluate different partitioning strategies.
What is the quickest way to do this?
Would have liked to, say, Data Pump the table to disk and Data Pump import the data into a new table - but do the tables need to be of the same format?
>
First, your subject asks a different question than the text you posted: Best way to change partition key on existing table. The answer to that question is YOU CAN'T. All data has to be moved to change the partition key, since each partition/subpartition is in its own segment. You either create a new table or use DBMS_REDEFINITION to redefine the table online.
Why do you want to export all data to a file first? That just adds to the time and cost of doing the op.
What problem are you trying to use partitioning to solve? Performance? Data maintenance? For performance the appropriate partitioning key and whether to use subpartitions depends on the types of queries and the query predicates you typically use as well as the columns that may be suitable for partition keys.
For maintenance a common method is to partition on a date by year/month/day so you can more easily load new daily/weekly/monthly data into its own partition or drop old data that no longer needs to be kept online.
You should use a small subset of the data when testing your partitioning strategies.
Can you do the partitioning offline in an outage window? If not, then DBMS_REDEFINITION is your only option.
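A minimal online-redefinition sketch (schema, table, and interim-table names here are placeholders, not from the thread; the interim table is pre-created with the desired partitioning):

```sql
DECLARE
  num_errs PLS_INTEGER;
BEGIN
  -- Verify the table can be redefined (raises an error if not)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TAB');

  -- Start copying rows into the partitioned interim table
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_PART');

  -- Clone indexes, constraints, triggers, and privileges
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG_TAB', 'BIG_TAB_PART',
                                          num_errors => num_errs);

  -- Brief lock while the two tables swap identities
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_PART');
END;
/
```

The table stays available for DML throughout; only FINISH_REDEF_TABLE takes a short exclusive lock.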
Without knowing what you are trying to accomplish only general advice can be given. You even mentioned that you might want to use a different set of columns than the current table has.
A standard heap table uses ONE segment for its data (ignoring possible LOB segments). A partitioned/subpartitioned table uses ONE segment for each partition/subpartition. This means that ALL data must be moved to partition the table (unless you are only creating one partition).
This means that every partitioning scheme that uses a different partition key requires ALL data to be moved again for that test.
Provide some information about what problem you are trying to solve.
>
Is this quicker than datapump?
>
Yes - exporting the data simply moves it all an additional time. It's OK to export if you need a backup before you start.
>
Found an article which talks about using the merge option on Data Pump import to convert a partitioned table to a non-partitioned table.
>
How would that apply to you? That isn't what you said you wanted to do. -
Requirement:
Replace Interval partitioned Table by Range Partitioned Table
DROP TABLE A;
CREATE TABLE A (
a NUMBER,
CreationDate DATE)
PARTITION BY RANGE (CreationDate)
INTERVAL ( NUMTODSINTERVAL (30, 'DAY') )
(PARTITION P_FIRST
VALUES LESS THAN (TIMESTAMP ' 2001-01-01 00:00:00'));
INSERT INTO A
VALUES (1, SYSDATE);
INSERT INTO A
VALUES (1, SYSDATE - 30);
INSERT INTO A
VALUES (1, SYSDATE - 60);
I need to change this interval-partitioned table to a range-partitioned table. Can I do it using EXCHANGE PARTITION? If I use the conventional way of creating another range-partitioned table and then:
DROP TABLE A_Range;
CREATE TABLE A_Range (
a NUMBER,
CreationDate DATE)
PARTITION BY RANGE (CreationDate)
(partition MAX values less than (MAXVALUE));
Insert /*+ append */ into A_Range Select * from A; -- This step takes very, very long.
Trying to cut it short using EXCHANGE PARTITION. Problems:
I can't do
ALTER TABLE A_Range
EXCHANGE PARTITION MAX
WITH TABLE A
WITHOUT VALIDATION;
ORA-14095: ALTER TABLE EXCHANGE requires a non-partitioned, non-clustered table
This is because both tables are partitioned, so it is not allowed.
If instead I create a non-partitioned table for exchanging the data through a partition:
Create Table A_Temp as Select * from A;
ALTER TABLE A_Range
EXCHANGE PARTITION MAX
WITH TABLE A_TEMP
WITHOUT VALIDATION;
select count(*) from A_Range partition(MAX);
- Problem is that all the data goes into the MAX partition.
Even after creating a lot of partitions by splitting partitions, the data is still only in the MAX partition.
So:
-- Is it that we can't replace an interval-partitioned table with a range-partitioned table using EXCHANGE PARTITION, i.e. we will have to do the INSERT INTO?
-- Or we can do it, but I am missing something here.
-- If all the data is in the MAX partition because of "WITHOUT VALIDATION", can we make it be redistributed into the right range partitions?
You will need to pre-create the partitions in A_Range, then exchange them one by one from A to A_tmp and then to A_Range. Using your sample (thanks for providing the code, by the way):
SQL> CREATE TABLE A
2 (
3 a NUMBER,
4 CreationDate DATE
5 )
6 PARTITION BY RANGE (CreationDate)
7 INTERVAL ( NUMTODSINTERVAL (30, 'DAY') )
8 (PARTITION P_FIRST
9 VALUES LESS THAN (TIMESTAMP ' 2001-01-01 00:00:00'));
Table created.
SQL> INSERT INTO A VALUES (1, SYSDATE);
1 row created.
SQL> INSERT INTO A VALUES (1, SYSDATE - 30);
1 row created.
SQL> INSERT INTO A VALUES (1, SYSDATE - 60);
1 row created.
SQL> commit;
Commit complete.
You can find the existing partitions of A using:
SQL> select table_name, partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'A';
TABLE_NAME PARTITION_NAME HIGH_VALUE
A P_FIRST TO_DATE(' 2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
A SYS_P44 TO_DATE(' 2013-01-28 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
A SYS_P45 TO_DATE(' 2012-12-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
A SYS_P46 TO_DATE(' 2012-11-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
You can then create table A_Range with the appropriate partitions. Note that you may need to create additional partitions in A_Range, because interval partitioning does not create partitions it has no data for, even if that leaves "holes" in the partitioning scheme. So, based on the above:
SQL> CREATE TABLE A_Range (
2 a NUMBER,
3 CreationDate DATE)
4 PARTITION BY RANGE (CreationDate)
5 (partition Nov_2012 values less than (to_date('30-nov-2012', 'dd-mon-yyyy')),
6 partition Dec_2012 values less than (to_date('31-dec-2012', 'dd-mon-yyyy')),
7 partition Jan_2013 values less than (to_date('31-jan-2013', 'dd-mon-yyyy')),
8 partition MAX values less than (MAXVALUE));
Table created.
Now, create a plain table to use in the exchanges:
SQL> CREATE TABLE A_tmp (
2 a NUMBER,
3 CreationDate DATE);
Table created.
And exchange all of the partitions:
SQL> ALTER TABLE A
2 EXCHANGE PARTITION sys_p44
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A_Range
2 EXCHANGE PARTITION jan_2013
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A
2 EXCHANGE PARTITION sys_p45
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A_Range
2 EXCHANGE PARTITION dec_2012
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A
2 EXCHANGE PARTITION sys_p46
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A_Range
2 EXCHANGE PARTITION nov_2012
3 WITH TABLE A_tmp;
Table altered.
SQL> select * from a;
no rows selected
SQL> select * from a_range;
A CREATIOND
1 23-NOV-12
1 23-DEC-12
1 22-JAN-13
John -
Find range partition key information
Hello, I try to insert a row into a table and I get this message: "inserted partition key is beyond highest legal partition key". This table has a range partitioning key, but I don't know which column(s) the partitioning is on.
Is there a way to find this information?
- which column
- which are the values
Thx in advance,
Pascal
Look at the following views; you should be able to find the information:
USER_IND_PARTITIONS
USER_IND_SUBPARTITIONS
USER_LOB_PARTITIONS
USER_LOB_SUBPARTITIONS
USER_TAB_PARTITIONS
USER_TAB_SUBPARTITIONS -
Modifing range partition key values
I have a table in Oracle 10g with range partitioning. Now I want to change or
modify the partition key values. How can I do that?
Depending on the level of change???
If you're just re-jigging boundaries in the range, then you can merge and re-split partitions. Without specific information on the change it's hard to guess what is in your mind.
I have used this method in the past when some of the partitions remained empty and I wished to rebalance the data skew to even up the distribution of a key. Is this what you want to do?
Or do you intend adding columns to the key without recreating the object? I don't know any way to change this without creating a new object, sorry.... If you have the space, CREATE TABLE AS SELECT with the new range specification would do. Otherwise, create a CSV file and SQL*Loader it into the newly defined object after a drop and recreate. Hope this helps.
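For the boundary-rejigging case, a sketch (table, partition, and index names plus the dates are invented):

```sql
-- Merge two adjacent range partitions into one
ALTER TABLE my_tab MERGE PARTITIONS p_q1, p_q2 INTO PARTITION p_h1;

-- Split a partition at a new boundary value
ALTER TABLE my_tab SPLIT PARTITION p_h1
  AT (TO_DATE('2007-02-01', 'YYYY-MM-DD'))
  INTO (PARTITION p_jan, PARTITION p_rest);

-- Local index partitions touched by a split/merge are left UNUSABLE
-- and need a rebuild afterwards
ALTER INDEX my_tab_ix REBUILD PARTITION p_jan;
```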
Kind regards -
TIMESTAMP(6) Partitioned Key - Range partitioned table ddl needed
What is DDL syntax for TIMESTAMP(6) Partitioned Key, Range partitioned table
Edited by: oracletune on Jan 11, 2013 10:26 AM
>
What is DDL syntax for TIMESTAMP(6) Partitioned Key, Range partitioned table
>
Not sure what you are asking. Are you asking how to create a partitioned table using a TIMESTAMP(6) column for the key?
CREATE TABLE TEST1 (
USERID NUMBER,
ENTRYCREATEDDATE TIMESTAMP(6)
)
PARTITION BY RANGE (ENTRYCREATEDDATE) INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2013', 'DD-MM-YYYY'))
);
See my reply Posted: Jan 10, 2013 9:56 PM if you need to do it on a TIMESTAMP WITH TIME ZONE column. You need to add a virtual column.
Creating range partitions automatically -
We have tables that are interval range partitioned on a DATE column, with a partition for each day - all very standard and straight out of Oracle doc.
A 3rd party application queries the tables to find number of rows based on date range that is on the column used for the partition key.
This application specifies the date range relative to the current date - i.e. for the last two days it would be "..startdate > SYSDATE - 2" - but partition pruning does not take place and the explain plan shows that every partition is included.
By presenting the query with the date in a variable, partition pruning does take place, and the query obviously performs much better.
DB is 11.2.0.3 on RHEL6, and default parameters set - i.e. nothing changed that would influence optimizer behavior to something unusual.
I can't work out why this would be so. It is very easy to reproduce with the simple test case below.
I'd be very interested to hear any thoughts on why it is this way and whether anything can be done to permit the partition pruning to work with a query including SYSDATE as it would be difficult to get the application code changed.
Furthermore to make a case to change the code I would need an explanation of why querying using SYSDATE is not good practice, and I don't know of any such information.
1) Create simple partitioned table
CREATE TABLE part_test
(id NUMBER NOT NULL,
starttime DATE NOT NULL,
CONSTRAINT pk_part_test PRIMARY KEY (id))
PARTITION BY RANGE (starttime) INTERVAL (NUMTODSINTERVAL(1,'day')) (PARTITION p0 VALUES LESS THAN (TO_DATE('01-01-2013','DD-MM-YYYY')));
2) Populate table 1million rows spread between 10 partitions
BEGIN
FOR i IN 1..1000000
LOOP
INSERT INTO part_test (id, starttime) VALUES (i, SYSDATE - DBMS_RANDOM.value(low => 1, high => 10));
END LOOP;
END;
EXEC dbms_stats.gather_table_stats('SUPER_CONF','PART_TEST');
3) Query the Table for data from last 2 days using SYSDATE in clause
EXPLAIN PLAN FOR
SELECT count(*)
FROM part_test
WHERE starttime >= SYSDATE - 2;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 8 | 7895 (1)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | 8 | | | | |
| 2 | PARTITION RANGE ITERATOR| | 111K| 867K| 7895 (1)| 00:00:01 | KEY |1048575|
|* 3 | TABLE ACCESS FULL | PART_TEST | 111K| 867K| 7895 (1)| 00:00:01 | KEY |1048575|
Predicate Information (identified by operation id):
3 - filter("STARTTIME">=SYSDATE@!-2)
4) Now do the same query but with SYSDATE - 2 presented as a literal value.
This query returns the same answer but very different cost.
EXPLAIN PLAN FOR
SELECT count(*)
FROM part_test
WHERE starttime >= (to_date('23122013:0950','DDMMYYYY:HH24MI'))-2;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 8 | 131 (0)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | 8 | | | | |
| 2 | PARTITION RANGE ITERATOR| | 111K| 867K| 131 (0)| 00:00:01 | 356 |1048575|
|* 3 | TABLE ACCESS FULL | PART_TEST | 111K| 867K| 131 (0)| 00:00:01 | 356 |1048575|
Predicate Information (identified by operation id):
3 - filter("STARTTIME">=TO_DATE(' 2013-12-21 09:50:00', 'syyyy-mm-dd hh24:mi:ss'))
thanks in anticipation
Jim
As Jonathan has already pointed out, there are situations where the CBO knows that partition pruning will occur but is unable to identify those partitions at parse time. The CBO then uses dynamic pruning, which means the partitions to eliminate are determined at run time. This is why you see KEY instead of a known partition number. This happens mainly when you compare a function to your partition key, i.e. WHERE partition_key = function - and SYSDATE is a function. For the other bizarre PSTOP number (1048575) see this blog:
http://hourim.wordpress.com/2013/11/08/interval-partitioning-and-pstop-in-execution-plan/
Best regards
Mohamed Houri -
Our os is;
SunOS 5.9
and database is;
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit
Our autotrace outputs are below also we have 10046 trace outputs;
08:41:04 tcell_dev@SCME > set timing on
08:41:19 tcell_dev@SCME > set autot on
08:41:21 tcell_dev@SCME > SELECT lnpessv.PROFILE_ID FROM SCME.LNK_PROFILEENTITY_SUBSSERVVAR lnpessv
08:41:25 2 WHERE lnpessv.SUBSCRIPTION_SERVICEVARIANT_ID = 1695083 ;
PROFILE_ID
1.400E+14
1.600E+14
Elapsed: 00:00:03.07
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=3 Bytes=51)
1 0 PARTITION HASH (ALL) (Cost=3 Card=3 Bytes=51)
2 1 INDEX (RANGE SCAN) OF 'PK_PROFILEENTITY_SUBSSERVVAR' (INDEX (UNIQUE)) (Cost=
3 Card=3 Bytes=51)
Statistics
1 recursive calls
0 db block gets
1539 consistent gets
514 physical reads
0 redo size
258 bytes sent via SQL*Net to client
273 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
08:41:32 tcell_dev@SCME > SELECT lnpessv.PROFILE_ID FROM SCME.LNK_PROFILEENTITY_SUBSSERVVAR lnpessv
08:41:43 2 WHERE lnpessv.SUBSCRIPTION_SERVICEVARIANT_ID = 169508 ;
PROFILE_ID
1.400E+14
1.600E+14
Elapsed: 00:00:04.01
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=3 Bytes=51)
1 0 PARTITION HASH (ALL) (Cost=3 Card=3 Bytes=51)
2 1 INDEX (RANGE SCAN) OF 'PK_PROFILEENTITY_SUBSSERVVAR' (INDEX (UNIQUE)) (Cost=
3 Card=3 Bytes=51)
Statistics
1 recursive calls
0 db block gets
1537 consistent gets
512 physical reads
0 redo size
258 bytes sent via SQL*Net to client
273 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Here we see 97% wait time, and the response time is unacceptable. These are the waits from the 10046 trace file:
WAIT #1: nam='gc cr grant 2-way' ela= 783 p1=341 p2=67065 p3=1 obj#=169530 tim=571610438395
WAIT #1: nam='db file sequential read' ela= 6924 file#=341 block#=67065 blocks=1 obj#=169530 tim=571610445466
WAIT #1: nam='gc cr grant 2-way' ela= 564 p1=294 p2=86263 p3=1 obj#=169531 tim=571610446493
WAIT #1: nam='db file sequential read' ela= 6629 file#=294 block#=86263 blocks=1 obj#=169531 tim=571610453158
INDEX RANGE SCAN PK_PROFILEENTITY_SUBSSERVVAR PARTITION: 1 512 (cr=1537 pr=512 pw=0 time=4272017 us)
This is the related tables properties;
OWNER SCME
TABLE_NAME LNK_PROFILEENTITY_SUBSSERVVAR
TABLESPACE_NAME DATA01
STATUS VALID
PCT_FREE 10
INI_TRANS 10
MAX_TRANS 255
INITIAL_EXTENT 65536
MIN_EXTENTS 1
MAX_EXTENTS 2147483645
LOGGING NO
BACKED_UP N
NUM_ROWS 239587420
BLOCKS 1587288
EMPTY_BLOCKS 0
AVG_SPACE 0
CHAIN_CNT 0
AVG_ROW_LEN 41
AVG_SPACE_FREELIST_BLOCKS 0
NUM_FREELIST_BLOCKS 0
DEGREE 1
INSTANCES 1
CACHE N
TABLE_LOCK ENABLED
SAMPLE_SIZE 71876226
LAST_ANALYZED 29.05.2006 23:21:24
PARTITIONED NO
TEMPORARY N
SECONDARY N
NESTED NO
BUFFER_POOL DEFAULT
ROW_MOVEMENT DISABLED
GLOBAL_STATS YES
USER_STATS NO
SKIP_CORRUPT DISABLED
MONITORING YES
DEPENDENCIES DISABLED
COMPRESSION DISABLED
DROPPED NO
We suspect the RAC configuration, and hash partition and index usage with RAC.
Any comments will be welcomed,
Thank you.
Tonguç
This is the output of dbms_metadata.get_ddl for the table:
CREATE TABLE "SCME"."LNK_PROFILEENTITY_SUBSSERVVAR"
( "SUBSCRIPTION_SERVICEVARIANT_ID" NUMBER NOT NULL ENABLE NOVALIDATE,
"PROFILE_ID" NUMBER NOT NULL ENABLE NOVALIDATE,
"CREATED_BY_ID" NUMBER,
"CREATED_DATE" DATE DEFAULT SYSDATE,
"UPDATED_BY_ID" NUMBER,
"UPDATED_DATE" DATE,
CONSTRAINT "PK_PROFILEENTITY_SUBSSERVVAR" PRIMARY KEY ("SUBSCRIPTION_SERVICEVARIANT_ID", "PROFILE_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
STORAGE(INITIAL 4194304
BUFFER_POOL DEFAULT)
TABLESPACE "INDX02" GLOBAL PARTITION BY HASH ("SUBSCRIPTION_SERVICEVARIANT_ID","PROFILE_ID")
(PARTITION "SYS_P52989"
TABLESPACE "INDX02",
PARTITION "SYS_P52990"
TABLESPACE "INDX02",
PARTITION "SYS_P54010"
TABLESPACE "INDX02",
PARTITION "SYS_P54011"
TABLESPACE "INDX02",
PARTITION "SYS_P54012"
TABLESPACE "INDX02") ;
CREATE UNIQUE INDEX "SCME"."PK_PROFILEENTITY_SUBSSERVVAR" ON "SCME"."LNK_PROFILEENTITY_SUBSSERVVAR" ("SUBSCRIPTION_SERVICEVARIANT_ID", "PROFILE_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
STORAGE(INITIAL 4194304
BUFFER_POOL DEFAULT)
TABLESPACE "INDX02" GLOBAL PARTITION BY HASH ("SUBSCRIPTION_SERVICEVARIANT_ID","PROFILE_ID")
(PARTITION "SYS_P52989"
TABLESPACE "INDX02",
PARTITION "SYS_P52990"
TABLESPACE "INDX02",
PARTITION "SYS_P53499"
TABLESPACE "INDX02",
PARTITION "SYS_P53500"
TABLESPACE "INDX02") ENABLE NOVALIDATE,
CONSTRAINT "FK_LNK_PROF_REFERENCE_SDP_SUBS" FOREIGN KEY ("SUBSCRIPTION_SERVICEVARIANT_ID")
REFERENCES "SCME"."SDP_SUBSCRIPTIONSERVICEVARIANT" ("SUBSCRIPTION_SERVICEVARIANT_ID") DEFERRABLE INITIALLY DEFERRED ENABLE NOVALIDATE
) PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255 NOCOMPRESS NOLOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "DATA01" ;
CREATE INDEX "SCME"."LNK_PROFILEENTITY_SUB_HNDX3" ON "SCME"."LNK_PROFILEENTITY_SUBSSERVVAR" ("SUBSCRIPTION_SERVICEVARIANT_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
STORAGE(INITIAL 2097152
BUFFER_POOL DEFAULT)
TABLESPACE "INDX02" GLOBAL PARTITION BY HASH ("SUBSCRIPTION_SERVICEVARIANT_ID")
(PARTITION "SYS_P53501"
TABLESPACE "INDX02",
PARTITION "SYS_P53502"
TABLESPACE "INDX02",
PARTITION "SYS_P53499"
TABLESPACE "INDX02",
PARTITION "SYS_P53500"
TABLESPACE "INDX02") ;
CREATE INDEX "SCME"."PROFILE_ID_NDX43" ON "SCME"."LNK_PROFILEENTITY_SUBSSERVVAR" ("PROFILE_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "INDX03" ;
ALTER TABLE "SCME"."LNK_PROFILEENTITY_SUBSSERVVAR" ADD CONSTRAINT "PK_PROFILEENTITY_SUBSSERVVAR" PRIMARY KEY ("SUBSCRIPTION_SERVICEVARIANT_ID", "PROFILE_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
STORAGE(INITIAL 4194304
BUFFER_POOL DEFAULT)
TABLESPACE "INDX02" GLOBAL PARTITION BY HASH ("SUBSCRIPTION_SERVICEVARIANT_ID","PROFILE_ID")
(PARTITION "SYS_P52989"
TABLESPACE "INDX02",
PARTITION "SYS_P52990"
PARTITION "SYS_P53498"
TABLESPACE "INDX02",
PARTITION "SYS_P53499"
TABLESPACE "INDX02",
PARTITION "SYS_P53500"
TABLESPACE "INDX02") ENABLE NOVALIDATE;
ALTER TABLE "SCME"."LNK_PROFILEENTITY_SUBSSERVVAR" MODIFY ("SUBSCRIPTION_SERVICEVARIANT_ID" NOT NULL ENABLE NOVALIDATE);
ALTER TABLE "SCME"."LNK_PROFILEENTITY_SUBSSERVVAR" MODIFY ("PROFILE_ID" NOT NULL ENABLE NOVALIDATE); -
Range partition by a virtual column derived from XMLTYPE
I want to create a table and partition it by interval partitioning (range partitioning) on a virtual column derived from XMLTYPE, and I get an ORA-14513 error.
create table dicom_archive_virtual (
id integer not null primary key,
parent_id integer, -- where this image is created from
dcm_filename varchar2(60), -- DICOM image file name from import
description varchar2(100), -- description of the image
dicom orddicom, -- DICOM data
image ordimage, -- DICOM data in JPEG format
thumb ordimage, -- DICOM data in JPEG thumbnail
metadata xmltype, -- user customized metadata
isAnonymous integer, -- accessible flag for the research role.
study_date date as
(to_date(substr(extractValue(metadata,'//DATE/text()'),1,10),'yyyy-mm-dd')) virtual)
PARTITION BY RANGE (study_date)
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_2005 VALUES LESS THAN (TO_DATE('1-1-2006', 'DD-MM-YYYY')),
PARTITION p_2006 VALUES LESS THAN (TO_DATE('1-1-2007', 'DD-MM-YYYY')),
PARTITION p_2007 VALUES LESS THAN (TO_DATE('1-1-2008', 'DD-MM-YYYY')));
Study_date is a virtual column derived from the column metadata, which is of type XMLTYPE, so when I partition on this virtual column I get the following error:
SQL Error: ORA-14513: partitioning column may not be of object datatype
So I want to know whether this is not possible, or whether there is some other alternative to achieve this.
"I want to create table and partition it by interval partition (range partition) on a virtual column which is derived from XMLTYPE"
Congratulations on trying to fit as many cutting-edge techniques into a single statement as possible.
"So i want to know whether this is not possible ..."
The error message is pretty unequivocal.
"...or there is any other alternative to achieve this."
What you could try is materializing the virtual column, i.e. adding an actual date column which you populate with that code in insert and update triggers. Inelegant, but then complexity often is.
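A minimal sketch of that materialized-column workaround, assuming the table from the question (the column and trigger names here are hypothetical), might look like:

```sql
-- Hypothetical sketch: add a real DATE column maintained by a trigger,
-- then partition on the real column instead of the virtual one.
ALTER TABLE dicom_archive_virtual ADD (study_date_real DATE);

CREATE OR REPLACE TRIGGER trg_dicom_study_date
BEFORE INSERT OR UPDATE OF metadata ON dicom_archive_virtual
FOR EACH ROW
BEGIN
  -- Same expression as the virtual column, applied to the new row
  :NEW.study_date_real :=
    TO_DATE(SUBSTR(EXTRACTVALUE(:NEW.metadata, '//DATE/text()'), 1, 10),
            'yyyy-mm-dd');
END;
/
```

The table could then be created with the same INTERVAL/RANGE clause, keyed on study_date_real rather than the virtual column.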
Cheers, APC
blog : http://radiofreetooting.blogspot.com -
There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6, the script below (SQLCMD mode; set the DataDrive & LogDrive variables for your environment) creates a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across four file groups; an empty partition, file group and file are maintained at the start and end of the range. The problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical, location of the data.
--=================================================================================
-- PartitionLabSetup_RangeRight.sql
-- 001. Create test database
-- 002. Add file groups and files
-- 003. Create partition function and schema
-- 004. Create and populate a test table
--=================================================================================
USE [master]
GO
-- 001 - Create Test Database
:SETVAR DataDrive "D:\SQL\Data\"
:SETVAR LogDrive "D:\SQL\Logs\"
:SETVAR DatabaseName "workspace"
:SETVAR TableName "TestTable"
-- Drop if exists and create Database
IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
BEGIN
ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE $(DatabaseName)
END
CREATE DATABASE $(DatabaseName)
ON
( NAME = $(DatabaseName)_data,
FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
SIZE = 10,
MAXSIZE = 500,
FILEGROWTH = 5 )
LOG ON
( NAME = $(DatabaseName)_log,
FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
SIZE = 5MB,
MAXSIZE = 5000MB,
FILEGROWTH = 5MB ) ;
GO
-- 002. Add file groups and files
--:SETVAR DatabaseName "workspace"
--:SETVAR TableName "TestTable"
--:SETVAR DataDrive "D:\SQL\Data\"
--:SETVAR LogDrive "D:\SQL\Logs\"
DECLARE @nSQL NVARCHAR(2000) ;
DECLARE @x INT = 1;
WHILE @x <= 6
BEGIN
SELECT @nSQL =
'ALTER DATABASE $(DatabaseName)
ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
ALTER DATABASE $(DatabaseName)
ADD FILE
NAME= ''$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + ''',
FILENAME = ''$(DataDrive)$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
EXEC sp_executeSQL @nSQL;
SET @x = @x + 1;
END
-- 003. Create partition function and schema
--:SETVAR TableName "TestTable"
--:SETVAR DatabaseName "workspace"
USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE RIGHT FOR VALUES
(
0,
15,
30,
45,
60
);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
(
$(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6
);
-- Create TestTable
--:SETVAR TableName "TestTable"
--:SETVAR BackupDrive "D:\SQL\Backups\"
--:SETVAR DatabaseName "workspace"
CREATE TABLE [dbo].$(TableName)(
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_scheme(Partition_PK)
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
-- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
--:SETVAR TableName "TestTable"
SET NOCOUNT ON;
DECLARE @Now DATETIME = GETDATE()
WHILE @Now > DATEADD(minute,-1,GETDATE())
BEGIN
INSERT INTO [dbo].$(TableName)
([Partition_PK]
,[RandomNbr])
VALUES (
DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0)
)
END
-- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
--=================================================================================
-- SECTION 2 - SWITCH OUT
-- 001 - Create TestTableOut
-- 002 - Switch out partition in range 0-14
-- 003 - Merge range 0 -29
-- 001. TestTableOut
:SETVAR TableName "TestTable"
IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
DROP TABLE [dbo].[$(TableName)Out]
CREATE TABLE [dbo].[$(TableName)Out](
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_fg2;
GO
-- 002 - Switch out partition in range 0-14
--:SETVAR TableName "TestTable"
ALTER TABLE dbo.$(TableName)
SWITCH PARTITION 2 TO dbo.$(TableName)Out;
-- 003 - Merge range 0 - 29
--:SETVAR TableName "TestTable"
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (15);
-- Confirm table partitioning
-- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
The T-SQL code below illustrates the problem.
-- PartitionLab_RangeRight
USE workspace;
DROP TABLE dbo.TestTableOut;
USE master;
ALTER DATABASE workspace
REMOVE FILE TestTable_f3 ;
-- ERROR
--Msg 5042, Level 16, State 1, Line 1
--The file 'TestTable_f3 ' cannot be removed because it is not empty.
ALTER DATABASE workspace
REMOVE FILE TestTable_f2 ;
-- Works surprisingly!!
use workspace;
ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
--Msg 622, Level 16, State 3, Line 2
--The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
--The statement has been terminated.
If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
-- RANGE RIGHT
-- Rerun PartitionLabSetup_RangeRight.sql before the code below
USE workspace;
DROP TABLE dbo.TestTableOut;
ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
USE master;
ALTER DATABASE workspace
REMOVE FILE TestTable_f3;
-- Works as expected!!
The file in File Group 2 appears to contain data, yet it can be dropped. Although the system views report the data as being in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. With a RANGE RIGHT function, the left file group (File Group 2) is the one retained when the ranges are merged.
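In outline, assuming the boundary values used in the scripts here, the difference between the two merge directions is:

```sql
-- RANGE RIGHT, boundaries (0, 15, 30, 45, 60):
--   each boundary value belongs to the partition on its RIGHT.
--   MERGE RANGE (15) drops boundary 15 and the merged partition keeps the
--   LEFT neighbour's filegroup (fg2), so rows 15-29 are logically mapped
--   to fg2 while still physically sitting in fg3 until an index rebuild.
ALTER PARTITION FUNCTION TestTable_func() MERGE RANGE (15);

-- RANGE LEFT, boundaries (-1, 14, 29, 44, 59):
--   each boundary value belongs to the partition on its LEFT.
--   MERGE RANGE (14) drops boundary 14 and the merged partition keeps the
--   RIGHT neighbour's filegroup (fg3), where rows 15-29 already reside
--   (the 0-14 rows having already been switched out) - no data movement.
ALTER PARTITION FUNCTION TestTable_func() MERGE RANGE (14);
```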
RANGE LEFT would have retained the data in File Group 3, where it already resided, so no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions) on the test table but uses different boundary definitions and RANGE LEFT.
--=================================================================================
-- PartitionLabSetup_RangeLeft.sql
-- 001. Create test database
-- 002. Add file groups and files
-- 003. Create partition function and schema
-- 004. Create and populate a test table
--=================================================================================
USE [master]
GO
-- 001 - Create Test Database
:SETVAR DataDrive "D:\SQL\Data\"
:SETVAR LogDrive "D:\SQL\Logs\"
:SETVAR DatabaseName "workspace"
:SETVAR TableName "TestTable"
-- Drop if exists and create Database
IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
BEGIN
ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE $(DatabaseName)
END
CREATE DATABASE $(DatabaseName)
ON
( NAME = $(DatabaseName)_data,
FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
SIZE = 10,
MAXSIZE = 500,
FILEGROWTH = 5 )
LOG ON
( NAME = $(DatabaseName)_log,
FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
SIZE = 5MB,
MAXSIZE = 5000MB,
FILEGROWTH = 5MB ) ;
GO
-- 002. Add file groups and files
--:SETVAR DatabaseName "workspace"
--:SETVAR TableName "TestTable"
--:SETVAR DataDrive "D:\SQL\Data\"
--:SETVAR LogDrive "D:\SQL\Logs\"
DECLARE @nSQL NVARCHAR(2000) ;
DECLARE @x INT = 1;
WHILE @x <= 6
BEGIN
SELECT @nSQL =
'ALTER DATABASE $(DatabaseName)
ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
ALTER DATABASE $(DatabaseName)
ADD FILE
NAME= ''$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + ''',
FILENAME = ''$(DataDrive)$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
EXEC sp_executeSQL @nSQL;
SET @x = @x + 1;
END
-- 003. Create partition function and schema
--:SETVAR TableName "TestTable"
--:SETVAR DatabaseName "workspace"
USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE LEFT FOR VALUES
(
-1,
14,
29,
44,
59
);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
(
$(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6
);
-- Create TestTable
--:SETVAR TableName "TestTable"
--:SETVAR BackupDrive "D:\SQL\Backups\"
--:SETVAR DatabaseName "workspace"
CREATE TABLE [dbo].$(TableName)(
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_scheme(Partition_PK)
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
-- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
--:SETVAR TableName "TestTable"
SET NOCOUNT ON;
DECLARE @Now DATETIME = GETDATE()
WHILE @Now > DATEADD(minute,-1,GETDATE())
BEGIN
INSERT INTO [dbo].$(TableName)
([Partition_PK]
,[RandomNbr])
VALUES (
DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0)
)
END
-- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
--=================================================================================
-- SECTION 2 - SWITCH OUT
-- 001 - Create TestTableOut
-- 002 - Switch out partition in range 0-14
-- 003 - Merge range 0 -29
-- 001. TestTableOut
:SETVAR TableName "TestTable"
IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
DROP TABLE [dbo].[$(TableName)Out]
CREATE TABLE [dbo].[$(TableName)Out](
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_fg2;
GO
-- 002 - Switch out partition in range 0-14
--:SETVAR TableName "TestTable"
ALTER TABLE dbo.$(TableName)
SWITCH PARTITION 2 TO dbo.$(TableName)Out;
-- 003 - Merge range 0 - 29
:SETVAR TableName "TestTable"
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (14);
-- Confirm table partitioning
-- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
The data in the file and file group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data, so no index rebuild is needed to move data and complete the MERGE.
RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group were used for all partitions; as partitions are created and dropped, it otherwise introduces a dependency on full index rebuilds. Larger tables are typically the ones partitioned, and a full index rebuild can be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (with multiple files) for all partitions within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of the data.
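That single-file-group approach could be sketched as below (hypothetical scheme name; ALL TO maps every partition of the function to one filegroup):

```sql
-- Hypothetical: one filegroup for every partition, so a MERGE never
-- needs to move data between filegroups and no rebuild is required.
CREATE PARTITION SCHEME TestTable_scheme_single
AS PARTITION TestTable_func
ALL TO (TestTable_fg1);
```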
If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views can be misleading. Thanks to Manuj and Chris for a lot of help investigating this.
NOTE 10/03/2014 - The solution
The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
-- Wrong Boundary Point Range Right
--ALTER PARTITION FUNCTION $(TableName)_func()
--MERGE RANGE (15);
-- Wrong Boundary Point Range Left
--ALTER PARTITION FUNCTION $(TableName)_func()
--MERGE RANGE (14);
-- Correct Boundary Points for MERGE
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (0); -- or -1 for RANGE LEFT
The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT sliding windows using multiple file groups, and apologize :-)
Hi Paul Brewer,
Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply sharing your solution; that way, other community members can benefit from it.
Regards.
Sofiya Li
TechNet Community Support