Trying to convert an Interval Partitioned Table to Range using Exchange Partition.
Requirement:
Replace an Interval partitioned Table with a Range Partitioned Table.
DROP TABLE A;
CREATE TABLE A
(
a NUMBER,
CreationDate DATE
)
PARTITION BY RANGE (CreationDate)
INTERVAL ( NUMTODSINTERVAL (30, 'DAY') )
(PARTITION P_FIRST
VALUES LESS THAN (TIMESTAMP ' 2001-01-01 00:00:00'));
INSERT INTO A
VALUES (1, SYSDATE);
INSERT INTO A
VALUES (1, SYSDATE - 30);
INSERT INTO A
VALUES (1, SYSDATE - 60);

I need to change this Interval Partitioned Table to a Range Partitioned Table. Can I do it using EXCHANGE PARTITION? The conventional way would be to create another Range Partitioned table and then:
DROP TABLE A_Range;
CREATE TABLE A_Range
(
a NUMBER,
CreationDate DATE
)
PARTITION BY RANGE (CreationDate)
(partition MAX values less than (MAXVALUE));
Insert /*+ append */ into A_Range Select * from A;
-- This step takes very, very long, so I am trying to cut it short using Exchange Partition.
Problems:
I can't do
ALTER TABLE A_Range
EXCHANGE PARTITION MAX
WITH TABLE A
WITHOUT VALIDATION;
ORA-14095: ALTER TABLE EXCHANGE requires a non-partitioned, non-clustered table
This is because both the tables are partitioned. So it does not allow me.
If I instead create a non-partitioned table for exchanging the data through the partition:
Create Table A_Temp as Select * from A;
ALTER TABLE A_Range
EXCHANGE PARTITION MAX
WITH TABLE A_TEMP
WITHOUT VALIDATION;
select count(*) from A_Range partition(MAX);
The problem is that all the data goes into the MAX partition.
Even after creating a lot of partitions by splitting, the data stays in the MAX partition only.
So:
-- Is it that we can't replace an Interval Partitioned Table with a Range Partitioned Table using EXCHANGE PARTITION, i.e. we will have to do the Insert into?
-- Or can we do it and I am missing something here?
-- If all the data is in the MAX partition because of "WITHOUT VALIDATION", can we make it be redistributed into the right range partitions?
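On the third question: a SPLIT PARTITION does physically move rows below the split point out of MAX, even when they arrived via EXCHANGE ... WITHOUT VALIDATION, so that step is worth re-checking. A hedged sketch; the partition name and boundary date here are illustrative only:

```sql
-- Split A_Range's MAX partition at 01-Jan-2013; rows with a
-- CreationDate before that boundary move into the new partition.
ALTER TABLE A_Range
  SPLIT PARTITION MAX AT (TO_DATE('01-01-2013', 'DD-MM-YYYY'))
  INTO (PARTITION P_PRE_2013, PARTITION MAX);

-- Verify with partition-extended selects (dictionary row counts
-- are only as fresh as the last statistics gathering).
SELECT COUNT(*) FROM A_Range PARTITION (P_PRE_2013);
SELECT COUNT(*) FROM A_Range PARTITION (MAX);
```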
You will need to pre-create the partitions in a_range, then exchange them one by one from a to a_tmp and then to a_range. Using your sample (thanks for providing the code, by the way):
SQL> CREATE TABLE A
2 (
3 a NUMBER,
4 CreationDate DATE
5 )
6 PARTITION BY RANGE (CreationDate)
7 INTERVAL ( NUMTODSINTERVAL (30, 'DAY') )
8 (PARTITION P_FIRST
9 VALUES LESS THAN (TIMESTAMP ' 2001-01-01 00:00:00'));
Table created.
SQL> INSERT INTO A VALUES (1, SYSDATE);
1 row created.
SQL> INSERT INTO A VALUES (1, SYSDATE - 30);
1 row created.
SQL> INSERT INTO A VALUES (1, SYSDATE - 60);
1 row created.
SQL> commit;
Commit complete.

You can find the existing partitions of A using:
SQL> select table_name, partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'A';
TABLE_NAME PARTITION_NAME HIGH_VALUE
A P_FIRST TO_DATE(' 2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
A SYS_P44 TO_DATE(' 2013-01-28 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
A SYS_P45 TO_DATE(' 2012-12-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
A SYS_P46 TO_DATE(' 2012-11-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

You can then create table a_range with the appropriate partitions. Note that you may need to create additional partitions in a_range because interval partitioning does not create partitions that it has no data for, even if that leaves "holes" in the partitioning scheme. So, based on the above:
SQL> CREATE TABLE A_Range (
2 a NUMBER,
3 CreationDate DATE)
4 PARTITION BY RANGE (CreationDate)
5 (partition Nov_2012 values less than (to_date('30-nov-2012', 'dd-mon-yyyy')),
6 partition Dec_2012 values less than (to_date('31-dec-2012', 'dd-mon-yyyy')),
7 partition Jan_2013 values less than (to_date('31-jan-2013', 'dd-mon-yyyy')),
8 partition MAX values less than (MAXVALUE));
Table created.

Now, create a plain table to use in the exchanges:
SQL> CREATE TABLE A_tmp (
2 a NUMBER,
3 CreationDate DATE);
Table created.

Then exchange all of the partitions:
SQL> ALTER TABLE A
2 EXCHANGE PARTITION sys_p44
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A_Range
2 EXCHANGE PARTITION jan_2013
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A
2 EXCHANGE PARTITION sys_p45
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A_Range
2 EXCHANGE PARTITION dec_2012
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A
2 EXCHANGE PARTITION sys_p46
3 WITH TABLE A_tmp;
Table altered.
SQL> ALTER TABLE A_Range
2 EXCHANGE PARTITION nov_2012
3 WITH TABLE A_tmp;
Table altered.
SQL> select * from a;
no rows selected
SQL> select * from a_range;
A CREATIOND
1 23-NOV-12
1 23-DEC-12
1 22-JAN-13

John
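A possible finishing step, not part of the reply above: once A is empty, the tables can swap names so dependent code keeps referencing A. This is an assumption about the end goal; grants, synonyms, triggers and other dependent objects would need to be re-checked first:

```sql
DROP TABLE A;          -- the now-empty interval-partitioned shell
RENAME A_Range TO A;   -- the range-partitioned table takes its place
```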
Similar Messages
-
Materialized view on a Partitioned Table (Data through Exchange Partition)
Hello,
We have a scenario to create an MV on a Partitioned Table which gets its data through an Exchange Partition strategy. Obviously, with exchange partition, snapshot logs are not updated and FAST refreshes are not possible. Also, this partitioned table being 450 million rows, a COMPLETE refresh is not an option for us.
I would like to know the alternatives for this approach,
Any suggestions would be appreciated,
thank you

From your post it seems that you are trying to create a fast refresh MV (as you are creating an MV log). There are limitations on fast refresh, which are documented in the Oracle documentation:
http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007028
If you are not planning to do a fast refresh then as already mentioned by Solomon it is a valid approach used in multiple scenarios.
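As a side note, DBMS_MVIEW.EXPLAIN_MVIEW can report exactly which capability blocks FAST refresh; it writes its findings into MV_CAPABILITIES_TABLE (created by running utlxmv.sql). A sketch, with the MV name hypothetical:

```sql
-- Create MV_CAPABILITIES_TABLE first: @?/rdbms/admin/utlxmv.sql
BEGIN
  DBMS_MVIEW.EXPLAIN_MVIEW(mv => 'MY_PART_MV', stmt_id => 'CHK1');
END;
/

SELECT capability_name, possible, msgtxt
  FROM mv_capabilities_table
 WHERE statement_id = 'CHK1'
   AND capability_name LIKE 'REFRESH_FAST%';
```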
Thanks,
Jayadeep -
ORA-14098 Index mismatch for tables in ALTER TABLE EXCHANGE PARTITION
Hi All,
I want to exchange data from the retek schema to the CONV schema. Both tables have the same partitions, but there is no data in the CONV table.
So I'm populating the data of one particular partition from retek into a staging table, and then exchanging that partition between the staging table and the CONV table.
I have created the same indexes and constraints on the staging table as there are on the CONV table.
But when I do the exchange partition I get an index mismatch error.
v_parition_name := 'mar 2012';
v_stmt := 'create table staging_tab_st_hist as ( select * from retek.abc_st_hist partition(' ||
v_parition_name || ') )';
execute immediate v_stmt;
v_stmt := ' alter table conv.abc_st_hist exchange partition ' ||
v_parition_name ||
' with table staging_tab_st_hist
including indexes without validation';
execute immediate v_stmt;

Welcome to the forum!
Whenever you post provide your 4 digit Oracle version (result of SELECT * FROM V$VERSION).
>
Hi All,
I want to exchange data from the retek schema to the CONV schema. Both tables have the same partitions, but there is no data in the CONV table.
So I'm populating the data of one particular partition from retek into a staging table, and then exchanging that partition between the staging table and the CONV table.
I have created the same indexes and constraints on the staging table as there are on the CONV table.
But when I do the exchange partition I get an index mismatch error.
v_parition_name := 'mar 2012';
v_stmt := 'create table staging_tab_st_hist as ( select * from retek.abc_st_hist partition(' ||
v_parition_name || ') )';
execute immediate v_stmt;
v_stmt := ' alter table conv.abc_st_hist exchange partition ' ||
v_parition_name ||
' with table staging_tab_st_hist
including indexes without validation';
execute immediate v_stmt;
>
I don't see any index creation on the staging table. You said this
>
I have created the same index and constraints for staging as there are in CONV table.
>
But you didn't create the indexes. When you do the CTAS (create table as select) it only creates the table with the same structure; it doesn't create ANY indexes.
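For illustration, the missing step might look like the following. The index name and columns here are hypothetical; they must mirror whatever indexes actually exist on conv.abc_st_hist for the exchange to succeed:

```sql
-- After the CTAS populates the staging table, build matching indexes.
CREATE INDEX staging_st_hist_ix1
  ON staging_tab_st_hist (store_id, hist_date);   -- hypothetical columns

-- Only then will EXCHANGE PARTITION ... INCLUDING INDEXES find a match.
-- (A partition name containing a space would also need double quotes.)
ALTER TABLE conv.abc_st_hist
  EXCHANGE PARTITION "mar 2012"
  WITH TABLE staging_tab_st_hist
  INCLUDING INDEXES WITHOUT VALIDATION;
```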
Add the code to create the necessary indexes after you populate the staging table. -
Modify HUGE HASH partition table to RANGE partition and HASH subpartition
I have a table with 130,000,000 rows hash partitioned as below
----RANGE PARTITION--
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY RANGE (YRMO_NBR)(
PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE)
);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
Data: -
INSERT INTO TEST_PART
VALUES ('2000',200001,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200009,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
COMMIT;
Now, I need to keep this table from growing by deleting records that fall b/w a specific range of YRMO_NBR. I think it will be easy if I create a range partition on YRMO_NBR field and then create the current hash partition as a sub-partition.
How do I change the current partition of the table from HASH partition to RANGE partition and a sub-partition (HASH) without losing the data and existing indexes?
The table after restructuring should look like the one below
COMPOSIT PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY RANGE (YRMO_NBR)
SUBPARTITION BY HASH (C_NBR) (
PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2
);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
Pls advice
Thanks in advance

Sorry for the confusion in the first part, where I had given RANGE PARTITION DDL instead of a HASH partition. Please read as follows:
I have a table with 130,000,000 rows hash partitioned as below
----HASH PARTITION--
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY HASH (C_NBR)
PARTITIONS 2
STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
Data: -
INSERT INTO TEST_PART
VALUES ('2000',200001,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200009,'CM');
INSERT INTO TEST_PART
VALUES ('2000',200010,'CM');
INSERT INTO TEST_PART
VALUES ('2006',NULL,'CM');
COMMIT;
Now, I need to keep this table from growing by deleting records that fall b/w a specific range of YRMO_NBR. I think it will be easy if I create a range partition on YRMO_NBR field and then create the current hash partition as a sub-partition.
How do I change the current partition of the table from hash partition to range partition and a sub-partition (hash) without losing the data and existing indexes?
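One hedged approach that avoids hand-moving the data (assuming a version with DBMS_REDEFINITION available and a table that qualifies, e.g. has a primary key): create an interim table with the composite range/hash layout, then let online redefinition copy the rows and swap the tables. A sketch only, with the schema name hypothetical:

```sql
-- TEST_PART_INTERIM = an empty table created with the target
-- composite range/hash layout.
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TEST_PART');
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INTERIM');
  -- recreate indexes/constraints on the interim table here
  -- (10g+: DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS can do this)
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INTERIM');
END;
/
```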
The table after restructuring should look like the one below
COMPOSIT PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
CREATE TABLE TEST_PART(
C_NBR CHAR(12),
YRMO_NBR NUMBER(6),
LINE_ID CHAR(2))
PARTITION BY RANGE (YRMO_NBR)
SUBPARTITION BY HASH (C_NBR) (
PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2
);
CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
Pls advice
Thanks in advance -
Importing partitioned table data into non-partitioned table
Hi Friends,
SOURCE SERVER
OS:Linux
Database Version:10.2.0.2.0
i have exported one partition of my partitioned table like below..
expdp system/manager DIRECTORY=DIR4 DUMPFILE=mapping.dmp LOGFILE=mapping_exp.log TABLES=MAPPING.MAPPING:DATASET_NAP

TARGET SERVER
OS:Linux
Database Version:10.2.0.4.0
Now when i am importing into another server i am getting below error
Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 17 January, 2012 11:22:32
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MAPPING"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MAPPING"."SYS_IMPORT_FULL_01": MAPPING/******** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE failed to create with error:
ORA-00959: tablespace 'MAPPING_ABC' does not exist
Failing sql is:
CREATE TABLE "MAPPING"."MAPPING" ("SAP_ID" NUMBER(38,0) NOT NULL ENABLE, "TG_ID" NUMBER(38,0) NOT NULL ENABLE, "TT_ID" NUMBER(38,0) NOT NULL ENABLE, "PARENT_CT_ID" NUMBER(38,0), "MAPPINGTIME" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE, "CLASS" NUMBER(38,0) NOT NULL ENABLE, "TYPE" NUMBER(38,0) NOT NULL ENABLE, "ID" NUMBER(38,0) NOT NULL ENABLE, "UREID"
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_TG_ID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."PK_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_UREID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_V2" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_PARENT_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."CKC_SMAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."PK_MAPPING_ITM" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
Processing object type TABLE_EXPORT/TABLE/COMMENT
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TG" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
ORA-39112: Dependent object type INDEX:"MAPPING"."X_PART" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_TIME_T" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_DAY" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
ORA-39112: Dependent object type INDEX:"MAPPING"."X_BTMP" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2_T" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
Job "MAPPING"."SYS_IMPORT_FULL_01" completed with 52 error(s) at 11:22:39

Please help..!!
Regards
Umesh Gupta

Yes, I have tried that option as well.
But when I write one tablespace name in the REMAP_TABLESPACE clause, it gives an error for the second one, and if I include the 1st and 2nd tablespaces it gives an error for the 3rd one.
One option I know of is to write all the tablespace names in REMAP_TABLESPACE, but that is a lengthy process. Is there any other way possible?
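For the record, REMAP_TABLESPACE can be given multiple times on one impdp command line (or collected in a parfile), so the "lengthy" option is at least mechanical. The tablespace names below are hypothetical:

```
impdp system/manager DIRECTORY=DIR3 DUMPFILE=mapping.dmp \
  LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND \
  REMAP_TABLESPACE=MAPPING_ABC:USERS \
  REMAP_TABLESPACE=MAPPING_DEF:USERS \
  REMAP_TABLESPACE=MAPPING_GHI:USERS
```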
Regards
Umesh

AFAIK the option you have is what I recommend ... though it is lengthy :-(
Wait for some EXPERT and GURU's review on this issue .........
Good luck ....
--neeraj -
GG replication for interval partitioned tables while issuing drop partition command
Hi all, we have GoldenGate replication between two databases, 1 and 2. Table A in 1 and 2 is interval partitioned, but the partition names are different. What's the best way to handle GG replication of a DROP PARTITION command? We want the partition to be dropped automatically in DB 2 if it is dropped in DB 1.
Hi,
In this scenario you would be better off dropping manually on both databases; for drops especially, you could filter based on operation type and do it manually.
Bulk text data insertion in Partitioned Table
Hi All,
I want to insert more than 200 crore records from 5 text data files into a single table, say TAB1 (a RANGE-partitioned table).
Currently I am using the EXTERNAL TABLE concept, providing the 5 filenames (45 crore records in total) at a time in the external table definition's LOCATION() clause, separated by commas.
After that using following sql statement:
INSERT /*+ APPEND PARALLEL(6) */ INTO TAB1 SELECT * FROM TAB1_EXT;
Presently, the execution time statistics for 45 crore records are as follows:
1) With /*+ PARALLEL(6) */ hint, execution time : 81 Mins
2) with /*+ APPEND PARALLEL */ hint, execution time : 73 Mins
3) With cursor loop; bulk collect and FORALL with interval of 1000000 records, execution time : 98 Mins
Earlier the execution time was much worse, but after changes to the SQL logic, DB parameters, and H/W, it is now somewhat good.
Indexes have also now been removed from the table. NOLOGGING is purposely not used, as it would not be acceptable in the PROD environment.
Also, the database is not in ARCHIVELOG mode now.
I also tried changing DB parameters like SGA_MAX_SIZE, which is now 16GB. I am even thinking of changing the DB_BLOCK_SIZE value from 8K to 32K.
O/S: LINUX
ORACLE DB: ORACLE 11g
RAM: 256 GB
48 CPU CORES, AMD Opteron (HP DL 585 Model)
If you have any suggestion, please pass it to me.

Instead of disturbing parallel coordinators you could just submit n jobs, each doing a simple insert into big_table select * from external_table_n or something similar, all the external_table_i having the same structure, just different LOCATION parameters.
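A hedged sketch of that suggestion using DBMS_SCHEDULER (table and external table names hypothetical). Note that a direct-path /*+ APPEND */ insert takes an exclusive table lock, so concurrent jobs should use conventional inserts:

```sql
-- Submit n independent jobs, one per external table, instead of
-- one parallel coordinator; each job runs a plain insert.
BEGIN
  FOR i IN 1 .. 5 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'LOAD_TAB1_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN INSERT INTO tab1 SELECT * FROM tab1_ext_' ||
                    i || '; COMMIT; END;',
      enabled    => TRUE);
  END LOOP;
END;
/
```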
Regards
Etbin
Edited by: Etbin on 26.2.2011 12:25
possibly grouping the text data files according to the partitions the data would end up within -
Error while creating partition table
Hi friends, I am getting an error while trying to create a partitioned table using range partitioning:
ORA-00906: missing left parenthesis
I used the following statement to create the partitioned table:
CREATE TABLE SAMPLE_ORDERS
(ORDER_NUMBER NUMBER,
ORDER_DATE DATE,
CUST_NUM NUMBER,
TOTAL_PRICE NUMBER,
TOTAL_TAX NUMBER,
TOTAL_SHIPPING NUMBER)
PARTITION BY RANGE(ORDER_DATE)
PARTITION SO99Q1 VALUES LESS THAN TO_DATE(‘01-APR-1999’, ‘DD-MON-YYYY’),
PARTITION SO99Q2 VALUES LESS THAN TO_DATE(‘01-JUL-1999’, ‘DD-MON-YYYY’),
PARTITION SO99Q3 VALUES LESS THAN TO_DATE(‘01-OCT-1999’, ‘DD-MON-YYYY’),
PARTITION SO99Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q1 VALUES LESS THAN TO_DATE(‘01-APR-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q2 VALUES LESS THAN TO_DATE(‘01-JUL-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q3 VALUES LESS THAN TO_DATE(‘01-OCT-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2001’, ‘DD-MON-YYYY’)
;

More than one of them. Try this instead:
CREATE TABLE SAMPLE_ORDERS
(ORDER_NUMBER NUMBER,
ORDER_DATE DATE,
CUST_NUM NUMBER,
TOTAL_PRICE NUMBER,
TOTAL_TAX NUMBER,
TOTAL_SHIPPING NUMBER)
PARTITION BY RANGE(ORDER_DATE) (
PARTITION SO99Q1 VALUES LESS THAN (TO_DATE('01-APR-1999', 'DD-MON-YYYY')),
PARTITION SO99Q2 VALUES LESS THAN (TO_DATE('01-JUL-1999', 'DD-MON-YYYY')),
PARTITION SO99Q3 VALUES LESS THAN (TO_DATE('01-OCT-1999', 'DD-MON-YYYY')),
PARTITION SO99Q4 VALUES LESS THAN (TO_DATE('01-JAN-2000', 'DD-MON-YYYY')),
PARTITION SO00Q1 VALUES LESS THAN (TO_DATE('01-APR-2000', 'DD-MON-YYYY')),
PARTITION SO00Q2 VALUES LESS THAN (TO_DATE('01-JUL-2000', 'DD-MON-YYYY')),
PARTITION SO00Q3 VALUES LESS THAN (TO_DATE('01-OCT-2000', 'DD-MON-YYYY')),
PARTITION SO00Q4 VALUES LESS THAN (TO_DATE('01-JAN-2001', 'DD-MON-YYYY')))

In the future, if you are having problems, go to Morgan's Library at www.psoug.org.
Find a working demo, copy it, then modify it for your purposes. -
Issue with updating partitioned table
Hi,
Has anyone seen this bug with updating partitioned tables?
It's very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join has multiple join conditions, you're updating the partitioning column which isn't the first column in the primary key, and the table contains a bit field. If we change any one of these features, the bug disappears.
We've tested this on 15.5 and 15.7 SP122 and the error occurs in both.
Here's the test case. It does the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
I'd be interested if anyone has seen this and has a version of Sybase without the issue.
Unfortunately when it happens on a replicated table - it takes down rep server.
CREATE TABLE #table1
( PK char(8) null,
FileDate date,
changed bit
)
CREATE TABLE partitioned (
PK char(8) NOT NULL,
ValidFrom date DEFAULT current_date() NOT NULL,
ValidTo date DEFAULT '31-Dec-9999' NOT NULL
)
LOCK DATAROWS
PARTITION BY RANGE (ValidTo)
( p2014 VALUES <= ('20141231') ON [default],
p2015 VALUES <= ('20151231') ON [default],
pMAX VALUES <= (MAX) ON [default]
)
CREATE UNIQUE CLUSTERED INDEX pk
ON partitioned(PK, ValidFrom, ValidTo)
LOCAL INDEX
CREATE TABLE unpartitioned (
PK char(8) NOT NULL,
ValidFrom date DEFAULT current_date() NOT NULL,
ValidTo date DEFAULT '31-Dec-9999' NOT NULL
)
LOCK DATAROWS
CREATE UNIQUE CLUSTERED INDEX pk
ON unpartitioned(PK, ValidFrom, ValidTo)
insert partitioned
select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
insert unpartitioned
select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
insert #table1
select "ET00jPzh", "Jan 15 2015", 1
union all
select "ET00jPzh", "Jan 15 2015", 1
go
update partitioned
set ValidTo = dateadd(dd,-1,FileDate)
from #table1 t
inner join partitioned p on (p.PK = t.PK)
where p.ValidTo = '99991231'
and t.changed = 1
go
update unpartitioned
set ValidTo = dateadd(dd,-1,FileDate)
from #table1 t
inner join unpartitioned u on (u.PK = t.PK)
where u.ValidTo = '99991231'
and t.changed = 1
go
drop table #table1
go
drop table partitioned
drop table unpartitioned
go

wrt to replication - it is a bit unclear, as not enough information has been stated to point out what happened. I am also not sure that your DBAs are accurately telling you what happened - and they may have made the problem worse by not knowing themselves what to do; e.g. 'losing' the log points to the fact that someone doesn't know what they should. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
wrt to ASE versions, I suspect that if there are any differences, they may have to do with endian-ness and not the version of ASE itself. There may be other factors, but I would suggest the best thing would be to open a separate message/case on it.
Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
-- testing with tinyint
1> use demo_db
1>
2> CREATE TABLE #table1
3> ( PK char(8) null,
4> FileDate date,
5> -- changed bit
6> changed tinyint
7> )
8>
9> CREATE TABLE partitioned (
10> PK char(8) NOT NULL,
11> ValidFrom date DEFAULT current_date() NOT NULL,
12> ValidTo date DEFAULT '31-Dec-9999' NOT NULL
13> )
14>
15> LOCK DATAROWS
16> PARTITION BY RANGE (ValidTo)
17> ( p2014 VALUES <= ('20141231') ON [default],
18> p2015 VALUES <= ('20151231') ON [default],
19> pMAX VALUES <= (MAX) ON [default]
20> )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23> ON partitioned(PK, ValidFrom, ValidTo)
24> LOCAL INDEX
25>
26> CREATE TABLE unpartitioned (
27> PK char(8) NOT NULL,
28> ValidFrom date DEFAULT current_date() NOT NULL,
29> ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
30> )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34> ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join partitioned p on (p.PK = t.PK)
6> where p.ValidTo = '99991231'
7> and t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join unpartitioned u on (u.PK = t.PK)
6> where u.ValidTo = '99991231'
7> and t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned
-- duplicating with 'int'
1> use demo_db
1>
2> CREATE TABLE #table1
3> ( PK char(8) null,
4> FileDate date,
5> -- changed bit
6> changed int
7> )
8>
9> CREATE TABLE partitioned (
10> PK char(8) NOT NULL,
11> ValidFrom date DEFAULT current_date() NOT NULL,
12> ValidTo date DEFAULT '31-Dec-9999' NOT NULL
13> )
14>
15> LOCK DATAROWS
16> PARTITION BY RANGE (ValidTo)
17> ( p2014 VALUES <= ('20141231') ON [default],
18> p2015 VALUES <= ('20151231') ON [default],
19> pMAX VALUES <= (MAX) ON [default]
20> )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23> ON partitioned(PK, ValidFrom, ValidTo)
24> LOCAL INDEX
25>
26> CREATE TABLE unpartitioned (
27> PK char(8) NOT NULL,
28> ValidFrom date DEFAULT current_date() NOT NULL,
29> ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
30> )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34> ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join partitioned p on (p.PK = t.PK)
6> where p.ValidTo = '99991231'
7> and t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join unpartitioned u on (u.PK = t.PK)
6> where u.ValidTo = '99991231'
7> and t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned -
How to find partitioned tables whose loading may fail with ORA-14400 error
Hi,
We have several partitioned tables. Sometimes, when partitions have not been created for the current month, we get this error:
ORA-14400: inserted partition key does not map to any partition
May I know if there is a script or SQL I can use to find out which tables' loads may fail if the DBA doesn't create partitions in the next 2 weeks?
Thank You
Sarayu K.S.

Sure, you can:
1. Look in DBA_TAB_PARTITIONS by partition_name if you use a particular naming convention, where owner = <your_owner>, table_name = <your_table_name>, and partition_name = <your_expected_partition_name>.
2. Without going by partition name, you could use dba_tab_partitions and instead check that there is a partition for your table with a high_value you are expecting to use. Be aware that the high_value is a long datatype, so you will likely need to convert it to a varchar2.
3. Before your big load, you could also insert one row into the destination table with the new partition key that will be used, and trap errors with an exception handler. Roll it back if it succeeds.
4. Many partitions could be created in advance to avoid the constant monitoring of new partitions.
5. Or you could also look into switching to interval partitioning if you are on 11g and get out of the business of having to create partitions manually. -
Create local spatial index on range sub-partitions?
Is it possible to create a local spatial index on a table with range sub-partitions? We're trying to do this on a table that contains lots of x,y,z point data.
Trying to do so gives me the error: ORA-29846: cannot create a local domain index on a composite partitioned table
According to the spatial documentation, the following restrictions apply to spatial index partitioning:
- The partition key for spatial tables must be a scalar value, and must not be a spatial column.
- Only range partitioning is supported on the underlying table. All other kinds of partitioning are not currently supported for partitioned spatial indexes.
So there is nothing saying it can or can't be done. The examples I've seen in the documentation tend to partition based on a single value and don't use subpartitioning.
Example of what we're trying to do:
SQL> SELECT * FROM V$VERSION;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for 64-bit Windows: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
SQL>
SQL> -- Create a table, partitioned by X and subpartitioned by Y
SQL> CREATE TABLE sub_partition_test
2 (
3 x NUMBER,
4 y NUMBER,
5 z NUMBER,
6 geometry MDSYS.SDO_GEOMETRY
7 )
8 PARTITION BY RANGE (x)
9 SUBPARTITION BY RANGE (y)
10 (
11 PARTITION p_x100 VALUES LESS THAN (100)
12 (
13 SUBPARTITION sp_x100_y100 VALUES LESS THAN (100),
14 SUBPARTITION sp_x100_y200 VALUES LESS THAN (200),
15 SUBPARTITION sp_x100_yMAXVALUE VALUES LESS THAN (MAXVALUE)
16 ),
17 PARTITION p_x200 VALUES LESS THAN (200)
18 (
19 SUBPARTITION sp_x200_y100 VALUES LESS THAN (100),
20 SUBPARTITION sp_x200_y200 VALUES LESS THAN (200),
21 SUBPARTITION sp_x200_yMAXVALUE VALUES LESS THAN (MAXVALUE)
22 ),
23 PARTITION p_xMAXVALUE VALUES LESS THAN (MAXVALUE)
24 (
25 SUBPARTITION sp_xMAXVALUE_y100 VALUES LESS THAN (100),
26 SUBPARTITION sp_xMAXVALUE_y200 VALUES LESS THAN (200),
27 SUBPARTITION sp_xMAXVALUE_yMAXVALUE VALUES LESS THAN (MAXVALUE)
28 )
29 );
Table created.
SQL>
SQL> -- Insert some sample data
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (1, 1, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(1, 1, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (50, 150, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(50, 150, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (150, 150, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(150, 150, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (150, 250, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(150, 250, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (150, 300, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(150, 300, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (220, 210, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(220, 210, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (220, 150, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(220, 150, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (220, 250, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(220, 250, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (220, 300, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(220, 300, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (320, 250, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(320, 250, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (320, 160, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(320, 160, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (320, 290, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(320, 290, 50), NULL, NULL));
1 row created.
SQL> INSERT INTO sub_partition_test (x, y, z, geometry)
2 VALUES (320, 320, 50, SDO_GEOMETRY(3001, 2157, SDO_POINT_TYPE(320, 320, 50), NULL, NULL));
1 row created.
SQL>
SQL> -- Create some metadata
SQL> DELETE FROM user_sdo_geom_metadata WHERE TABLE_NAME = 'SUB_PARTITION_TEST';
1 row deleted.
SQL> INSERT INTO user_sdo_geom_metadata VALUES ('SUB_PARTITION_TEST','GEOMETRY',
2 SDO_DIM_ARRAY(
3 SDO_DIM_ELEMENT('X', 0, 1000, 0.005),
4 SDO_DIM_ELEMENT('Y', 0, 1000, 0.005)
5 ), 262152);
1 row created.
SQL>
SQL> -- Create an Unusable Local Spatial Index
SQL> CREATE INDEX sub_partition_test_spidx ON sub_partition_test (geometry)
2 INDEXTYPE IS MDSYS.SPATIAL_INDEX
3 LOCAL
4 UNUSABLE;
CREATE INDEX sub_partition_test_spidx ON sub_partition_test (geometry)
ERROR at line 1:
ORA-29846: cannot create a local domain index on a composite partitioned table
Thanks,
John
Ok, thanks. That's what we're planning on doing now.
SQL> CREATE TABLE partition_test
2 (
3 x NUMBER,
4 y NUMBER,
5 z NUMBER,
6 geometry MDSYS.SDO_GEOMETRY
7 )
8 PARTITION BY RANGE (x, y)
9 (
10 PARTITION p_x100y100 VALUES LESS THAN (100, 100),
11 PARTITION p_x100y200 VALUES LESS THAN (100, 200),
12 PARTITION p_x100yMAX VALUES LESS THAN (100, MAXVALUE),
13 PARTITION p_x200y100 VALUES LESS THAN (200, 100),
14 PARTITION p_x200y200 VALUES LESS THAN (200, 200),
15 PARTITION p_x200yMAX VALUES LESS THAN (200, MAXVALUE),
16 PARTITION p_x300y100 VALUES LESS THAN (300, 100),
17 PARTITION p_x300y200 VALUES LESS THAN (300, 200),
18 PARTITION p_x300yMAX VALUES LESS THAN (MAXVALUE, MAXVALUE)
19 );
Table created.
SQL>
SQL> INSERT INTO user_sdo_geom_metadata VALUES ('PARTITION_TEST','GEOMETRY',
2 SDO_DIM_ARRAY(
3 SDO_DIM_ELEMENT('X', 0, 1000, 0.005),
4 SDO_DIM_ELEMENT('Y', 0, 1000, 0.005)
5 ), 262152);
1 row created.
SQL> CREATE INDEX partition_test_spidx ON partition_test (geometry)
2 INDEXTYPE IS MDSYS.SPATIAL_INDEX
3 LOCAL
4 UNUSABLE;
Index created. -
Import data from a partitioned table
I am trying to import data from another server's database. I need only the data from two partitions of a partitioned table in that database, and I have to use the IMPORT wizard.
The new server should also have the partitioned table with those two partitions' data. I created the database on the destination server with a primary, log, FG1 and FG2 filegroups.
Next, created the partition function and scheme
create partition function pfOrders(int)
as range right
for values(34);
create partition scheme psOrders
as partition pfOrders
to (FG1,FG2)
go
Next, created the table on primary with primary key.
Next, used the import wizard twice to import the data related to partition_id 34 and 65.
The problem is that the data is in the primary filegroup. FG1 and FG2 do not have any data. -
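A likely cause is that the destination table was created ON [PRIMARY] rather than on the partition scheme, so SQL Server never routes rows to FG1/FG2. A hedged T-SQL sketch (the table and column names here are assumed for illustration; only pfOrders/psOrders come from the thread):

```sql
-- Create the table on the partition scheme, keyed by the partitioning
-- column, so inserted rows land in FG1/FG2 per pfOrders.
CREATE TABLE dbo.Orders
(
    partition_id INT          NOT NULL,
    order_data   VARCHAR(100) NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (partition_id)
)
ON psOrders (partition_id);   -- NOT "ON [PRIMARY]"
GO
```

With the table placed on psOrders, re-running the import should put partition_id 34 in FG2 (range right on 34) and the lower values in FG1.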
Create Partitioning to non-partitioning Table
Dear All,
I have a table which is non-partitioned and has about 20G of data. If I want to partition this table by a column, say DATE_M,
please can anyone suggest the best way to do this?
Thanks
So now in the partitioned table he creates one maxvalue partition and does exchange partition
That isn't the typical scenario. Typically you make the switch by using partitions for the NEW data and leave the existing data in the base range partition.
1. Existing app uses an unpartitioned table
2. New table is partitioned for NEW DATA
Assume you want monthly partitions (daily works the same). This is already April so there is already some April data.
So create the partitioned table so the base partition clause includes ALL of the data for April and before:
create table ipart
(time_id date
,cust_id number(4)
,amount_sold number(5))
partition by range(time_id)
interval(NUMTOYMINTERVAL(1,'month'))
(partition old_data values less than (to_date('01-may-2015','DD-MON-YYYY')));
Now you do the exchange with the unpartitioned table and all the current data goes into that 'OLD_DATA' partition.
New data for May and the future will have partitions created automatically.
That approach lets you ease into partitioning without disrupting your current processes at all.
As time goes by, more and more of the data will be in the new monthly partitions. If you need to, you can split that base partition:
insert into ipart (time_id) values (sysdate - 90);
insert into ipart (time_id) values (sysdate - 60);
insert into ipart (time_id) values (sysdate - 30);
insert into ipart (time_id) values (sysdate);
commit;
alter table ipart split partition old_data
at (to_date('01-jan-2015', 'DD-MON-YYYY')) into
(partition old_data, partition JAN_FEB_MAR_APR); -
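The exchange step described above can be sketched as follows (assumed: BIGTABLE is the existing unpartitioned table, with the same column list and compatible indexes as IPART):

```sql
-- Metadata-only swap: BIGTABLE's segment becomes the OLD_DATA partition,
-- so all pre-May rows land there without being copied.
ALTER TABLE ipart
  EXCHANGE PARTITION old_data
  WITH TABLE bigtable
  WITHOUT VALIDATION;
```

Because the exchange only swaps segment pointers, it completes in seconds regardless of table size; the WITHOUT VALIDATION clause is safe here only because every existing row falls below the OLD_DATA bound.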
Is my partition table corrupt? Why does Boot Camp hate me?
Hi folks
I have an iMac (27-inch, Mid 2010) (iMac11,3, with Boot ROM IM112.0057.B01).
I replaced the internal SuperDrive with an SSD, which is now my primary boot device:
iMac:/ michthom$ diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *250.1 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_HFS SSD 248.1 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
iMac:/ michthom$ sudo gpt -r -vv show disk0
Password:
gpt show: disk0: mediasize=250059350016; sectorsize=512; blocks=488397168
gpt show: disk0: PMBR at sector 0
gpt show: disk0: Pri GPT at sector 1
gpt show: disk0: Sec GPT at sector 488397167
start size index contents
0 1 PMBR
1 1 Pri GPT header
2 32 Pri GPT table
34 6
40 409600 1 GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
409640 484620800 2 GPT part - 48465300-0000-11AA-AA11-00306543ECAC
485030440 1269536 3 GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
486299976 2097159
488397135 32 Sec GPT table
488397167 1 Sec GPT header
So far so good.
I want to use the original internal HDD both to run Windows in Boot Camp mode, and to have a partition for my bulk data that doesn't need to be on the SSD.
I reformatted the HDD as a single HFS+ partition, GUID partition table.
I used BCA to create a Windows USB boot device from the Windows 8.1 media after following the hacking in this link.
When the iMac restarted after creating the 250Gb Windows partition on the internal HDD, I got the "no boot device" screen.
I restarted holding Option/Alt and booted from EFI Boot on the USB stick. Windows installer started, at least. Serial number accepted, on to picking a location.
The installation balked when I tried to select the BOOTCAMP partition, with the warning that the disk was formatted as MBR - eh? Why?
So, the current state of the internal HDD must be wrong somehow, but I don't see how to fix it (confidently) and would like someone to point me in the right direction (please!)
iMac:/ michthom$ diskutil list
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk1
1: EFI EFI 209.7 MB disk1s1
2: Apple_HFS Internal 751.9 GB disk1s2
3: Microsoft Basic Data BOOTCAMP 248.0 GB disk1s3
iMac:/ michthom$ sudo gpt -r -vv show disk1
gpt show: disk1: mediasize=1000204886016; sectorsize=512; blocks=1953525168
gpt show: disk1: Suspicious MBR at sector 0
gpt show: disk1: Pri GPT at sector 1
gpt show: disk1: Sec GPT at sector 1953525167
start size index contents
0 1 MBR
1 1 Pri GPT header
2 32 Pri GPT table
34 6
40 409600 1 GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
409640 1468478336 2 GPT part - 48465300-0000-11AA-AA11-00306543ECAC
1468887976 263256
1469151232 484372480 3 GPT part - EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
1953523712 1423
1953525135 32 Sec GPT table
1953525167 1 Sec GPT header
gdisk has this to say:
iMac:/ michthom$ sudo gdisk /dev/disk1
Password:
GPT fdisk (gdisk) version 0.8.10
Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Partition table scan:
MBR: hybrid
BSD: not present
APM: not present
GPT: present
Found valid GPT with hybrid MBR; using GPT.
Command (? for help): x
Expert command (? for help): o
Disk size is 1953525168 sectors (931.5 GiB)
MBR disk identifier: 0x4F5BB38B
MBR partitions:
Number Boot Start Sector End Sector Status Code
1 1 409639 primary 0xEE
2 409640 1468887975 primary 0xAF
3 1469151232 1953523711 primary 0x0B
Expert command (? for help): p
Disk /dev/disk1: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3E1D7EF9-F86E-4552-8F40-BE9754C3C73F
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 8-sector boundaries
Total free space is 264685 sectors (129.2 MiB)
Number Start (sector) End (sector) Size Code Name
1 40 409639 200.0 MiB EF00 EFI System Partition
2 409640 1468887975 700.2 GiB AF00 Internal
3 1469151232 1953523711 231.0 GiB 0700 BOOTCAMP
Any help / pointers gratefully accepted!
Mike
Thanks to Loner T and some more reading, I think I'm now sorted out.
I found that marking the first partition on the USB stick as Active made no difference - my only option was to boot from the "EFI boot" option at startup (when holding down the alt/option key).
So to get the Windows installer to behave, I used gdisk to write a new protective MBR before rebooting to the USB stick, as shown below.
With the protective MBR in place (rather than hybrid), the Windows installer was happy to reformat the chosen partition and the installation began.
I'll try to report back once all is installed and working, but once again I owe my sanity to the generosity and patience of strangers!
Mike
bash-3.2# gdisk /dev/disk0
GPT fdisk (gdisk) version 0.8.10
Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Partition table scan:
MBR: hybrid
BSD: not present
APM: not present
GPT: present
Found valid GPT with hybrid MBR; using GPT.
Command (? for help): x
Expert command (? for help): o
<snipped>
Number Boot Start Sector End Sector Status Code
1 1 409639 primary 0xEE
2 409640 1468887975 primary 0xAF
3 1469151232 1953523711 primary 0x0B
Expert command (? for help): p
<snipped>
Number Start (sector) End (sector) Size Code Name
1 40 409639 200.0 MiB EF00 EFI System Partition
2 409640 1468887975 700.2 GiB AF00 Internal
3 1469151232 1953523711 231.0 GiB 0700 BOOTCAMP
Expert command (? for help): v
No problems found. 264685 free sectors (129.2 MiB) available in 3
segments, the largest of which is 263256 (128.5 MiB) in size.
Expert command (? for help): x
<snipped>
n create a new protective MBR
<snipped>
Expert command (? for help): n
Expert command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/disk0.
Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Warning: The kernel may continue to use old or deleted partitions.
You should reboot or remove the drive.
The operation has completed successfully.
bash-3.2# gdisk /dev/disk0
GPT fdisk (gdisk) version 0.8.10
Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): x
Expert command (? for help): o
Disk size is 1953525168 sectors (931.5 GiB)
MBR disk identifier: 0x00000000
MBR partitions:
Number Boot Start Sector End Sector Status Code
1 1 1953525167 primary 0xEE
Expert command (? for help): p
<snipped>
Number Start (sector) End (sector) Size Code Name
1 40 409639 200.0 MiB EF00 EFI System Partition
2 409640 1468887975 700.2 GiB AF00 Internal
3 1469151232 1953523711 231.0 GiB 0700 BOOTCAMP -
Partition Table Query taking Too Much Time
I have created partition table and Created local partition index on a column whose datatype is DATE.
Now when I query the table and use the index column in the WHERE clause, it scans the whole table (full scan). The query is:
Select * From mytable
where to_char(transaction_date, 'DD-MON-YY') = '01-Aug-07';
I have to use the to_char function, not to_date, due to a front-end application problem.
Before we go too far with this, if you manually query with TO_DATE on the variable instead of TO_CHAR on the column, does the query actually use the index?
The TO_CHAR on the column will definitely stop Oracle from using any index on the column. If the query will use the index if you TO_DATE the variable, as I see it, you have three options. First, fix the application problem that won't let you use TO_DATE from the application. Second, change the application to call a function returning a ref cursor, get the date string as a parameter to the function, and do the TO_DATE in the function.
Third, you could consider creating a function-based index on TO_CHAR(transaction_date, 'dd-Mon-yy'). This would be the least desirable option, particularly if you would also be selecting records based on a range of transaction_dates, since it loses a lot of information that the optimizer could use in devising an efficient query plan. It could also change your results for a range scan.
John
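For reference, the first option amounts to keeping the column bare and comparing it against DATE bounds, which lets the optimizer prune partitions and use the local index (assuming whole-day matching is the intent):

```sql
-- Sargable rewrite: the DATE column is untouched, so the local index on
-- transaction_date and partition pruning both remain available.
SELECT *
  FROM mytable
 WHERE transaction_date >= TO_DATE('01-AUG-07', 'DD-MON-YY')
   AND transaction_date <  TO_DATE('01-AUG-07', 'DD-MON-YY') + 1;
```

The half-open range also catches rows with a time-of-day component, which the TO_CHAR comparison handled only by throwing the time away.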