Primary Key Causing Problem in Interval Partition Exchange

DB : 11.2.0.2
OS : AIX 6.1
I am getting a problem while exchanging data with an interval-partitioned table. I have an interval-partitioned table and a normal staging table holding the data to be uploaded.
These are the steps I am following:
SQL> CREATE TABLE DEMO_INTERVAL_DATA_LOAD (
                ROLL_NUM        NUMBER(10),
                CLASS_ID        NUMBER(2),
                ADMISSION_DATE  DATE,
                TOTAL_FEE       NUMBER(4),
                COURSE_ID       NUMBER(4))
                PARTITION BY RANGE (ADMISSION_DATE)
                INTERVAL (NUMTOYMINTERVAL(3,'MONTH'))
                ( PARTITION QUAT_1_2012 VALUES LESS THAN (TO_DATE('01-APR-2012','DD-MON-YYYY')),
                 PARTITION QUAT_2_2012 VALUES LESS THAN (TO_DATE('01-JUL-2012','DD-MON-YYYY')),
                 PARTITION QUAT_3_2012 VALUES LESS THAN (TO_DATE('01-OCT-2012','DD-MON-YYYY')),
                 PARTITION QUAT_4_2012 VALUES LESS THAN (TO_DATE('01-JAN-2013','DD-MON-YYYY')));
Table created.
SQL> ALTER TABLE DEMO_INTERVAL_DATA_LOAD ADD CONSTRAINT IDX_DEMO_ROLL PRIMARY KEY (ROLL_NUM);
Table altered.
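Aside (not part of the original post): whether this primary key is enforced by a local or a global index matters for the exchange later. A dictionary check along these lines shows that the index created here is non-partitioned, i.e. global:
SELECT index_name, uniqueness, partitioned
  FROM dba_indexes
 WHERE table_name = 'DEMO_INTERVAL_DATA_LOAD';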
SQL> SELECT TABLE_OWNER,
           TABLE_NAME,
           COMPOSITE,
           PARTITION_NAME,
       PARTITION_POSITION,
          TABLESPACE_NAME,
       LAST_ANALYZED
FROM DBA_TAB_PARTITIONS
    WHERE TABLE_OWNER='SCOTT'
   AND TABLE_NAME='DEMO_INTERVAL_DATA_LOAD'
   ORDER BY PARTITION_POSITION;
TABLE_OWNER                    TABLE_NAME                     COM PARTITION_NAME                 PARTITION_POSITION TABLESPACE_NAME                LAST_ANAL
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_1_2012                                     1 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_2_2012                                     2 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_3_2012                                     3 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_4_2012                                     4 USERS
SQL> INSERT INTO DEMO_INTERVAL_DATA_LOAD VALUES (10,1,'12-MAR-2012',1000,90);
1 row created.
SQL> INSERT INTO DEMO_INTERVAL_DATA_LOAD VALUES (11,5,'01-JUN-2012',5000,80);
1 row created.
SQL> INSERT INTO DEMO_INTERVAL_DATA_LOAD VALUES (12,9,'12-SEP-2012',4000,20);
1 row created.
SQL> INSERT INTO DEMO_INTERVAL_DATA_LOAD VALUES (13,7,'29-DEC-2012',7000,10);
1 row created.
SQL> INSERT INTO DEMO_INTERVAL_DATA_LOAD VALUES (14,8,'21-JAN-2013',2000,50); ---- This row will create a new interval partition in the table.
1 row created.
SQL> commit;
SQL> SELECT TABLE_OWNER,
        TABLE_NAME,
        COMPOSITE,
        PARTITION_NAME,
        PARTITION_POSITION,
        TABLESPACE_NAME,
        LAST_ANALYZED
  FROM DBA_TAB_PARTITIONS
     WHERE TABLE_OWNER='SCOTT'
   AND TABLE_NAME='DEMO_INTERVAL_DATA_LOAD'
   ORDER BY PARTITION_POSITION;
TABLE_OWNER                    TABLE_NAME                     COM PARTITION_NAME                 PARTITION_POSITION TABLESPACE_NAME                LAST_ANAL
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_1_2012                                     1 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_2_2012                                     2 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_3_2012                                     3 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_4_2012                                     4 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  SYS_P98                                         5 USERS  
The SYS_P98 partition has been added to the table automatically.
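Aside (not part of the original post): if the system-generated SYS_Pnnn name is awkward to reference, 11g's extended partition syntax can address an interval partition by a value that falls in it, for example:
SELECT *
  FROM DEMO_INTERVAL_DATA_LOAD PARTITION FOR (TO_DATE('21-JAN-2013','DD-MON-YYYY'));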
SQL> CREATE TABLE DEMO_INTERVAL_DATA_LOAD_Y (
                ROLL_NUM        NUMBER(10),
                CLASS_ID        NUMBER(2),
                ADMISSION_DATE  DATE,
                TOTAL_FEE       NUMBER(4),
                COURSE_ID       NUMBER(4));
Table created.
SQL> INSERT INTO DEMO_INTERVAL_DATA_LOAD_Y VALUES (30,3,'21-MAY-2013',2000,12);
1 row created.
SQL> commit;
Commit complete.
Since I need a partition in the DEMO_INTERVAL_DATA_LOAD table that can be used in the partition exchange, I create a new partition as below:
SQL> LOCK TABLE DEMO_INTERVAL_DATA_LOAD PARTITION FOR (TO_DATE('01-APR-2013','DD-MON-YYYY')) IN SHARE MODE;
Table(s) Locked.
SQL> SELECT TABLE_OWNER,
           TABLE_NAME,
           COMPOSITE,
           PARTITION_NAME,
           PARTITION_POSITION,
           TABLESPACE_NAME,
           LAST_ANALYZED
FROM DBA_TAB_PARTITIONS
    WHERE TABLE_OWNER='SCOTT'
   AND TABLE_NAME='DEMO_INTERVAL_DATA_LOAD'
   ORDER BY PARTITION_POSITION;
TABLE_OWNER                    TABLE_NAME                     COM PARTITION_NAME                 PARTITION_POSITION TABLESPACE_NAME                LAST_ANAL
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_1_2012                                     1 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_2_2012                                     2 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_3_2012                                     3 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  QUAT_4_2012                                     4 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  SYS_P98                                         5 USERS
SCOTT                          DEMO_INTERVAL_DATA_LOAD        NO  SYS_P102                                        6 USERS
SQL> ALTER TABLE DEMO_INTERVAL_DATA_LOAD
EXCHANGE PARTITION SYS_P102
WITH TABLE DEMO_INTERVAL_DATA_LOAD_Y
INCLUDING INDEXES
WITH VALIDATION;
ALTER TABLE DEMO_INTERVAL_DATA_LOAD
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
Now, if I disable/drop the primary key constraint, it works without any problem.
SQL> alter table DEMO_INTERVAL_DATA_LOAD disable constraint IDX_DEMO_ROLL;
Table altered.
SQL> alter table DEMO_INTERVAL_DATA_LOAD drop constraint IDX_DEMO_ROLL;
Table altered.
SQL> ALTER TABLE DEMO_INTERVAL_DATA_LOAD
EXCHANGE PARTITION SYS_P102
WITH TABLE DEMO_INTERVAL_DATA_LOAD_Y
INCLUDING INDEXES
WITH VALIDATION;
Table altered.
SQL> select * from DEMO_INTERVAL_DATA_LOAD partition (SYS_P102);
  ROLL_NUM   CLASS_ID ADMISSION  TOTAL_FEE  COURSE_ID
        30          3 21-MAY-13       2000         12
SQL> select * from DEMO_INTERVAL_DATA_LOAD_Y;
no rows selected
Please suggest.

First, thanks for posting the code that lets us reproduce your test. That is essential for issues like this.
Because the primary key index is global, you will not be able to use
INCLUDING INDEXES
WITH VALIDATION;
You will also need to add a primary key to the staging table:
ALTER TABLE DEMO_INTERVAL_DATA_LOAD_Y ADD CONSTRAINT IDX_DEMO_ROLL_Y PRIMARY KEY (ROLL_NUM);
Then the exchange will work. You will need to rebuild the primary key index after the exchange.
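Putting that advice together, a minimal sketch of the corrected sequence using the object names from the test case (the exact options may need adjusting depending on how the constraints are enforced; see also the 'Partition exchange with Primary Key Enabled' thread below):
-- 1. Give the staging table a matching primary key, so that ROLL_NUM is
--    NOT NULL and unique on both sides of the exchange
ALTER TABLE DEMO_INTERVAL_DATA_LOAD_Y
  ADD CONSTRAINT IDX_DEMO_ROLL_Y PRIMARY KEY (ROLL_NUM);

-- 2. Exchange without INCLUDING INDEXES / WITH VALIDATION, because the PK
--    index on DEMO_INTERVAL_DATA_LOAD is global rather than local
ALTER TABLE DEMO_INTERVAL_DATA_LOAD
  EXCHANGE PARTITION SYS_P102
  WITH TABLE DEMO_INTERVAL_DATA_LOAD_Y;

-- 3. The exchange leaves the global primary key index unusable, so rebuild it
ALTER INDEX IDX_DEMO_ROLL REBUILD;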

Similar Messages

  • A challenging primary key generation problem..

    Hi !
    I have the following problem (this is in theory, not (yet) in practice):
    I have this set of entities with its respective primary keys in the database.. 0,1,2,3,4,5,...............,N
    In a certain moment in the future X, a transaction deletes record (for example) n < N.
    In a certain moment in the future Y after X, a transaction persists new entity Q.
    What primary key does it get ?
    Is it Q=n or is it Q=N+1 ?
If it is the latter, how can I achieve the former? Is there a JPA-specified way to do it, or shall I use the "stupid" way?
I.e. in general I want the gaps in the sequence (caused by deletion of entities) to be filled by newly persisted ones.
    All suggestions are welcome :)
    P.S. If someone knows the actual difference between the PrimaryKeyGenerationTypes in JPA (SEQUENTIAL, IDENTITY, etc...) please, tell me :P

javaUserMuser wrote:
"Well 2^63 = 9 billion billion.. well, that's a lot really, but anyway using the gaps is preferable for me ;]"
You've still said nothing to support this view except "because I said so". I don't think you need it. Here's why:
Assume there are 2^63 = 9 billion billion keys available to you. If your application consumes them at the rate of 1 key per millisecond, it'll take you ~36 billion years to exhaust all those keys. Since science estimates the age of the universe as 13-14 billion years, and creationists say it's 6000 years, I'd say you're safe if you throw away a key or two.
You know your app. Adjust accordingly. For fun, calculate how fast you'd have to consume keys to exhaust them all in 100 years and compare it to your real requirements.
"Well, there can be more than one db involved... In general, how does an RDBMS scale horizontally? Can N physical nodes serve one logical DB?"
Depends on the product. Consult the documentation for the database you're using. Which product will you use?
"What kind of them?"
See the calculation above.
"I got what those IDENTITY etc. things mean, but anyway, how do those differ not from the perspective of the database but from that of the developer <-- that is my question :P"
As a developer, it means that I'd have to use a different id generator scheme in my configuration.
"In general, I like your posts :) Discussion is the mother of progress."
Thank you, I'm glad you find them to be helpful.

  • Problem in Direct Partition Exchange Loading(PEL)

    Hi all,
I am facing a problem during execution of an OWB mapping. The map uses direct partition exchange loading. There is one source table and one target table in the map; it is very simple.
While the source and target are in the same schema there is no problem. But when they are in different schemas, a warning is given during execution:
ResolveTableNameErrorRTV20006;BIA_RTL_INTERFACE"."SALE_SRC
Here BIA_RTL_INTERFACE is the source schema and SALE_SRC is the source table.
However, although the source record reaches the target table as required, the same record still remains in the source table after execution, which is not desirable for direct PEL. So I think that although the data is loaded into the target, it is not using the direct partition exchange technique.
Is there some kind of special privilege needed for partition exchange loading when the source and target are in different schemas of the same database? Please clarify. I hope I have explained the problem clearly. Waiting for a reply.
    Thanks & Regards,
    Sumanta Das
    Kolkata

    The error means you are trying to swap a partition that still contains data in a configuration where OWB expects an empty partition. How did you set the "Replace existing data in Target Partition" configuration parameter?
    Also, for more details on PEL, review the 10.19 to 10.27 pages of the user manual.
    Regards:
    Igor

  • Primary key field problem in DBSchema Wizard

    Hi,
    I am using Sun ONE studio 4 update 1, EE.
    I have generated schema using the Database Schema Wizard.
The generated table representations do not recognise the primary key fields, so I cannot map my CMP to any table :-(
    Is this a bug?
    because I have generated schemas from 3 different databases (IBM, MySQL, SAP) and the tables have primary key fields.
When I get a connection in the Runtime pane under the Database node, I can see the primary key fields highlighted in red.
However, when I obtain the schema using New->Database->Database Schema, no primary key fields are generated.
    Anyone have any ideas, work around?
    Thanks
    Tex...


  • TM causes problem on other partition of split drive

    I have a LaCie 320 d2 which is partitioned with 120GB for Time Machine and the rest for general storage. All goes well on the TM side, but the remaining partition has had problems twice now.
    The symptoms are the general area being unable to accept any more data, giving the error that the current file is in use. It doesn't matter what type the file is, or how large. When it reaches its limit, it will clunk a few times and stop with an error message.
    The latest time happened when I was transferring a folder containing 11GB of mixed files from my desktop to the general area. I left it to get on with it, but while I was away TM started to backup, including the 11GB folder on my desktop. This leads me to believe that TM was trying to read the file at the same time as Finder was copying it. Whatever problem it encountered was left on the HD and stops it from writing any more data.
    I can't remember whether there was a similar 'double copy' scenario the first time. The problem can be fixed only by erasing the drive entirely (one partition only doesn't help) then repartitioning and restarting TM.
    I do have other storage, on the network, so if the problem occurs again I will leave the whole drive as TM. I'd rather not, as a FW drive is very handy to have.
    If anyone has had any similar problems or knows how the problem is caused/might be solved, please let me know.
    Many thanks.

    matti-oats wrote:
[the original post quoted in full]
    Welcome to the Apple boards.
    Do you have TM set to backup the other half of the external drive? That might cause an issue.
    I would suggest doing your copying right after TM ends a backup. While LaCie drives are quite good, you are really asking it to multitask if you are copying a file to it while TM is trying to backup both the original file and the one being copied. You may find that some of your files are locked and marked open which can cause problems.
    Delete any files that were being copied during this process and recopy them, unless you have already deleted the original, in which case you might find it on TM, in which case you can copy the TM backup onto your internal drive and then over to the other partition after removed the bad file.

  • 1-1 mapping Primary Key association problem

    Hi, I have two tables Indiv (Id, col1, col2 ) and IndivExt(Id, col3)....where "Id" is the PKey in both the tables.
    In TopLink, I have mapped
    Indiv table --> IndivClass
    IndivExt table --> IndivExtClass
class IndivClass {
    Integer Id;
    Integer col1;
    Integer col2;
}
class IndivExtClass {
    IndivClass oIndiv; // relation - one-to-one
    Integer col3;
}
    While running the app, there is an Integrity TopLink exception that says that IndivClass descriptor doesnt have IndivExt table in it.
    So, I used advanced feature of multi-table info and created the Primary Key association between these two descriptors.
If I am right, this association assumes that there is a row in IndivExt for every row in Indiv... correct me if I'm wrong. I assume this because of the kind of SQL query it creates when I'm trying to read an IndivClass object. The SQL query uses a join on both tables.
    But...For every row in Indiv table there MIGHT or MIGHT NOT be a row in IndivExt table.
    So the sql that is formed will not return the required Indiv rows (in case there are no corresponding IndivExt rows).
    How do I resolve this.
    Thanks,
    Krishna

    Hi James,
Thanks for that update.
So just for clarification: I'm going to remove the multi-table info, use a normal foreign key reference, and apply the patch... then the Integrity error should not occur. Right?
One more question... If I also want to refer to IndivExt from the Indiv object, then I would change my classes to be like:
class IndivClass {
    Integer Id;
    Integer col1;
    Integer col2;
    IndivExtClass oIndivExt; // relation - one-to-one (rel2)
}
class IndivExtClass {
    IndivClass oIndiv; // relation - one-to-one (rel1)
    Integer col3;
}
and the mapping (for rel2) in the IndivClass descriptor will use the same reference that was used by rel1, i.e., a normal foreign key reference. Is that right?
Also, is there a bug number related to this issue, and how do I get the patch? Where can I download it from?
    Thanks,
    Krishna

  • Add another Primary key column problem

    Hi all
I cannot add another primary key column to an existing table. How is it possible?
    say I have Existing Table
    EMPNO NUMBER(4), primary key
    ENAME VARCHAR2(10 BYTE),
    JOB VARCHAR2(9 BYTE),
    EMPSLNO VARCHAR2(10 BYTE)
Now I want to add another primary key column (EMPSLNO).
My code is:
    ALTER TABLE EMP_TEST ADD (
    CONSTRAINT PK_EMP_TEST PRIMARY KEY (EMPSLNO))
but I get this error message:
    Error on line 0
    ALTER TABLE EMP_TEST ADD (
    CONSTRAINT PK_EMP_TEST PRIMARY KEY (EMPSLNO))
    ORA-02260: table can have only one primary key

Or, with too little information to go on, a wild guess (composite key):
Do this:
ALTER TABLE <your_table> ADD (
     CONSTRAINT PK_mycode PRIMARY KEY (head_code, item_code)
          USING INDEX TABLESPACE tbls_index);
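For the original question (making EMPSLNO part of the key of EMP_TEST), a table can have only one primary key, so the existing constraint has to be dropped and recreated. A sketch, following the reply's composite-key guess and assuming no foreign keys reference EMP_TEST (if they do, they must be dropped or disabled first):
-- Find the name of the existing primary key constraint, if needed
SELECT constraint_name
  FROM user_constraints
 WHERE table_name = 'EMP_TEST'
   AND constraint_type = 'P';

-- Drop the single-column key and recreate it as a composite key
ALTER TABLE EMP_TEST DROP PRIMARY KEY;
ALTER TABLE EMP_TEST ADD CONSTRAINT PK_EMP_TEST
  PRIMARY KEY (EMPNO, EMPSLNO);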

  • Primary Key Transport Problem

    Dear Gurus,
I had a requirement to make an existing non-key field of a Z-table a primary key field, so I moved that non-key field next to the existing primary key fields and made it a key field using the SE14 'Adjust and Activate' functionality.
Now, when the request was imported into quality it showed an error and no changes were visible. What should I do now? There are a lot of programs dependent on that Z-table. Should I run adjust-and-activate in quality too? And what about production?
    Thank you for your kind perusal.

    Hi Hari,
As per my past experience, you don't need to activate and adjust the database manually in production.
Check the same with the Basis team.
Also, I would advise you to take a backup of the table before deploying the transports to production, because there is a chance of losing table entries if duplicate records are identified for your new primary key field combination.
    Regards,
    Sudeesh Soni

  • Clarification: Decommissioning Exchange Mailbox server after move to Office 365 will not cause problems with the remaining Exchange CAS server

    Environment: 1x Exchange 2013 Mailbox server
    1x Exchange 2013 CAS server
    All users migrated to office365. MX record pointed to Office365
    DIRSync implemented
Clarification: All users are now using Office 365. As per the recommendation from Microsoft, one Exchange server should be retained and the rest can be decommissioned. I tried to test the scenario by shutting down the Exchange server with the Mailbox role and leaving the Exchange server with the CAS role online. I tried to run the Exchange Management Shell on the CAS but I'm getting errors. To clarify: once I have uninstalled the Exchange Mailbox server, will the CAS still look for the Mailbox server? Or do I need to decommission both Exchange servers and then install a new Exchange server with the CAS role?

Hi
If you are looking for hybrid coexistence with Office 365, then at least one Exchange 2013 Client Access server and one Exchange 2013 Mailbox server must be installed in the on-premises organization to run the Hybrid Configuration wizard and support Exchange 2013-based hybrid deployment functionality.
http://technet.microsoft.com/en-us/library/hh534377(v=exchg.150).aspx
Summary - You need to have at least one CAS and MBX combined together on-premises; they can even be a separate CAS and a separate MBX, but Microsoft recommends having CAS and MBX together on-premises.
Source -
http://technet.microsoft.com/en-us/library/hh534377(v=exchg.150).aspx
Remember to mark as helpful if you find my contribution useful, or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you. Check out my latest blog posts on http://exchangequery.com. Thanks, Sathish (MVP)

  • PrintScreen key causes problems

    Hi
My movie reacts to the space key to perform some action:
on keyUp
  if (_key.keycode = 49) then sendallsprites(#SpacePressed)
end
But when the user then presses the PrintScreen key, the keyUp event is executed, yet _key.keycode is still 49, which corresponds to the space key, so the application reacts to PrintScreen as if it were the space key.
The keyDown event is not executed when PrintScreen is pressed, so I cannot use the _key.keyPressed() function to check whether space was really pressed; in the keyUp handler _key.keyPressed(49) is false even if the user pressed space (because the key is already up).
_key.keycode is read-only, so I cannot get rid of the value 49 until another key with some value is pressed.
Is there any way to avoid this?
Thanks in advance
Orest

    "Jacks007" <[email protected]> wrote in
    message
    news:goka0j$1m8$[email protected]..
    > In a New Authorware file set up an INTERACTION ICON with
    a CALC ICON
    > hanging of it using KEYPRESS with a "?" (without the "")
    as the CALC ICONS
    > name. The CALC ICON contains "MyKey = Key" (again
    without the ""). On
    > the INTERACTIONS DISPLAY WINDOW I typed the text
    > "MyKey = {MyKey}" (without the ""). INTERACTION ICON set
    to update
    > Displayed Variables. Just keep pressing keys to see.
    A single display icon (set to 'Update Displayed Variables')
    containing {Key}
    will suffice. Why do it the hard way!
    Chris Forecast

  • Can I modify primary key or I need to drop it to create another overhide?

I am getting this issue when I try to create another primary key in my table. The example below illustrates the situation.
TABLE AAA
(ID
NAME
REGISTER)
PK: ID
I cannot drop the primary key because the table has some FKs referencing it, so I was thinking about trying to "modify" this key (to add another field to it), but I have not found a way to do it. I think I could drop the FKs, then drop the PK, and create a new primary key after all of that, but this could cause big trouble with other tables. Is there another way to do it, by modifying or just overriding the key?
Thanks all, and apologies for my bad English!

Disable the referencing FKs, then drop and recreate the PK. You will also have to modify the FKs. Alternatively, you can create a unique constraint on the new column combination.
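A sketch of that approach, using table AAA from the question and a hypothetical child table BBB with a referencing constraint FK_BBB_AAA on hypothetical columns AAA_ID and AAA_REGISTER:
-- 1. Disable the foreign keys that reference the primary key of AAA
ALTER TABLE BBB DISABLE CONSTRAINT FK_BBB_AAA;

-- 2. Drop the old key and recreate it with the additional column
ALTER TABLE AAA DROP PRIMARY KEY;
ALTER TABLE AAA ADD CONSTRAINT PK_AAA PRIMARY KEY (ID, REGISTER);

-- 3. Because the key's column list changed, the referencing FK has to be
--    redefined to match before it can be enabled again
ALTER TABLE BBB DROP CONSTRAINT FK_BBB_AAA;
ALTER TABLE BBB ADD CONSTRAINT FK_BBB_AAA
  FOREIGN KEY (AAA_ID, AAA_REGISTER) REFERENCES AAA (ID, REGISTER);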

  • ORA-02266: unique/primary keys - error while using Exchange Partition

    Hi All,
    While using EXCHANGE PARTITION statement as given below,
    ALTER TABLE SOURCE_TABLE EXCHANGE PARTITION PRT_EXCG_PRTN WITH TABLE TARGET_TABLE
    we are getting this error,
    ORA-02266: unique/primary keys in table referenced by enabled foreign keys
However, no tables have foreign keys referring to this TARGET_TABLE; we checked this by querying
the USER_CONSTRAINTS view, which shows only the primary key and NOT NULL constraints.
    SELECT * FROM USER_CONSTRAINTS WHERE TABLE_NAME like 'TARGET_TABLE';
    We are using the following version,
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    PL/SQL Release 9.2.0.6.0 - Production
    CORE     9.2.0.6.0     Production
    TNS for IBM/AIX RISC System/6000: Version 9.2.0.6.0 - Production
    NLSRTL Version 9.2.0.6.0 - Production
Is it due to an error on our end, or could it be a bug in Oracle, and should we apply a patch?
Please guide us to resolve this error as soon as possible. Thank you.
    Regards,
    Deva

    *** Duplicate Post ***
    Please Ignore.
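For what it's worth, ORA-02266 on an exchange is normally raised because an enabled foreign key somewhere references the key of the table being exchanged, and USER_CONSTRAINTS only shows constraints owned by the current schema. A wider dictionary check, offered here as a sketch (run with DBA privileges), would be:
SELECT c.owner, c.table_name, c.constraint_name, c.status
  FROM dba_constraints c
  JOIN dba_constraints p
    ON p.owner = c.r_owner
   AND p.constraint_name = c.r_constraint_name
 WHERE c.constraint_type = 'R'
   AND p.table_name = 'TARGET_TABLE';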

  • Partition exchange with Primary Key Enabled : is it possible ?

    Hello everybody.
    I have succeeded "exchange...INCLUDING INDEXES WITH VALIDATION" a single table with a partition when primary key of the single table and the partitioned table are both Disabled and Validated.
    I am trying now to do the same thing but with the 2 primary keys Enabled. Because of the global index générated by the Enabled PK, i "exchange ... including indexes with validation update global indexes" : it doesn't work : "ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION".
    But indexes are the same (thanks OEM) that in the previous test, all attributes of the 2 PKs are the same.
    Thanks in advance for your help.

    For those interested. I finally found the solution elsewhere in the documentation.
If the PK of the partitioned table is enabled, the primary key of the single table has to be Disabled and Validated.
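A sketch of that working sequence, with hypothetical names: PART_TAB is the partitioned table (PK left enabled), SINGLE_TAB is the exchange table, and P_2013_Q1 the partition:
-- Put the PK of the stand-alone table into DISABLE VALIDATE state
ALTER TABLE SINGLE_TAB MODIFY CONSTRAINT SINGLE_TAB_PK DISABLE VALIDATE;

-- Then the exchange with the options used in the question
ALTER TABLE PART_TAB
  EXCHANGE PARTITION P_2013_Q1
  WITH TABLE SINGLE_TAB
  INCLUDING INDEXES WITH VALIDATION
  UPDATE GLOBAL INDEXES;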

  • Problems creating a partitioned primary key index.

I am creating a partitioned table and I noticed that when I use the constraint option of CREATE TABLE, the primary key is not partitioned. I then tried using the USING INDEX clause and specifying CREATE INDEX ... LOCAL, and that gives errors. Here is my current syntax that is causing the errors:
    create table redef_temp (
         USER_ID          VARCHAR2(32),
         GROUP_ID     VARCHAR2(32),
         JOIN_DATE     DATE DEFAULT SYSDATE NOT NULL,
         constraint primary key
         using index (create index pk_redef_temp
    on redef_temp (USER_ID, GROUP_ID)
    LOCAL STORE IN (IDX)))
    tablespace data
    partition by hash (user_id)
         (PARTITION ic_x_user_group_part_p1 tablespace DATA,
         PARTITION ic_x_user_group_part_p2 tablespace DATA,
         PARTITION ic_x_user_group_part_p3 tablespace DATA,
         PARTITION ic_x_user_group_part_p4 tablespace DATA)
    PARALLEL ENABLE ROW MOVEMENT;
    Thanks

The following works on 9.2.0.8 and 10.2.0.3:
create table redef_temp (
     USER_ID VARCHAR2(32),
     GROUP_ID VARCHAR2(32),
     JOIN_DATE DATE DEFAULT SYSDATE NOT NULL,
     constraint pk_redef_temp primary key (user_id, group_id)
     using index (
          create index pk_redef_temp
          on redef_temp (USER_ID, GROUP_ID)
          LOCAL tablespace test_8k
     )
)
tablespace test_8k
partition by hash (user_id) (
     PARTITION ic_x_user_group_part_p1 tablespace test_8k,
     PARTITION ic_x_user_group_part_p2 tablespace test_8k,
     PARTITION ic_x_user_group_part_p3 tablespace test_8k,
     PARTITION ic_x_user_group_part_p4 tablespace test_8k
)
PARALLEL ENABLE ROW MOVEMENT
/
Your syntax for the constraint definition was wrong, and your use of 'store in' for the index tablespace was wrong. I've had to change all tablespace names to 'test_8k'.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Primary Key - non sequenced - can this cause problems??

We are seeing some strange explain plans for some queries in our 10.2.0.3 database. It just seems that Oracle is not using the index(es) we expect it to use. In our discussions it was brought up that the data in the primary key columns of our tables is not sequential at all. I am wondering if this could be causing issues with Oracle and its explain plans.
Can the fact that our PK values are not sequential cause issues for Oracle and the optimizer?

    Things to consider:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/ex_plan.htm#sthref1858
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#sthref1254
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i82005
    system stats - http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41496
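If the question is which access path the optimizer actually chooses, the plan can be inspected directly; a small sketch with a hypothetical table and bind variable:
EXPLAIN PLAN FOR
  SELECT * FROM some_table WHERE pk_col = :b1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);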
