Partition Tables - issue

I have lots of partitioned tables in my schema, and the requirement is to identify the current partition of each table.
Let me explain this with an example.
Let's say I have a table T with 24 partitions, starting from Jan-04 through Dec-05. My current partition is Dec-04, i.e. the partition with respect to SYSDATE.
How can I achieve this by querying the data dictionary?

120196, I think you are mistaken. There is no such thing as a "current" partition. Oracle offers three types of partitioning: range, hash, and composite. For all three, you specify the column you want to partition on. Then, when you insert/update/delete rows, Oracle performs "partition elimination" (if the column you partitioned on is part of the query), which greatly improves performance. So the partition you deal with (I guess this is what you mean by the current partition) is simply the one your row maps to. Hope this makes it clearer and gives you a 10,000-foot view. There is lots more to partitioning.
-Raj Suchak
[email protected]
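For what it's worth, the dictionary does not flag a "current" partition directly, but for a range-partitioned table you can derive the partition whose range covers SYSDATE by evaluating each partition's HIGH_VALUE. A minimal PL/SQL sketch, assuming T is range-partitioned on a DATE column and has no MAXVALUE partition (all names here are placeholders, not from the original post):

DECLARE
  l_high DATE;
BEGIN
  FOR p IN (SELECT partition_name, high_value
              FROM user_tab_partitions
             WHERE table_name = 'T'
             ORDER BY partition_position) LOOP
    -- HIGH_VALUE is a LONG holding a TO_DATE() expression, so evaluate it dynamically
    EXECUTE IMMEDIATE 'SELECT ' || p.high_value || ' FROM dual' INTO l_high;
    IF SYSDATE < l_high THEN
      DBMS_OUTPUT.PUT_LINE('Current partition: ' || p.partition_name);
      EXIT;  -- first partition whose (exclusive) upper bound exceeds SYSDATE
    END IF;
  END LOOP;
END;
/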

Similar Messages

  • How to dd Partition Table through First Partition

    Hi; I have a flash drive which has a single NTFS partition that fills up less space than the whole drive (so there's a bunch of empty/unused space at the end of the drive). I'd like to use dd to make an image from the very beginning of the drive (to catch the partition table) through the end of the NTFS partition, ignoring the empty space after it. I tried the commands below but they didn't seem to work (a copy of the drive had partition table issues):
    dd count=1 bs=512 if=/dev/sdb of=partition_table.img
    dd bs=8M if=/dev/sdb1 of=sdb1.img
    cat sdb1.img >> partition_table.img
    rm sdb1.img
    mv partition_table.img mydrive.img
    Any ideas? Am I failing to capture something between the partition table and the first partition? Or is it possible my partition table is not 512 bytes?

    My suggestion is to dd the MBR (dd if=/dev/XXX of=<image file> bs=512 count=1) and use ntfsclone or fsarchiver to make the image.
    But if you want to use dd for the whole image:
    $ fdisk -l /dev/sda
    Disk /dev/sda: 251.1 GB, 251059544064 bytes, 490350672 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x4d9ddc04
    Device     Boot      Start        End     Blocks  Id  System
    /dev/sda1             2048    2099199    1048576  83  Linux
    /dev/sda2    *     2099200    2303999     102400  83  Linux
    /dev/sda3          2304000  490350671  244023336  8e  Linux LVM
    To image everything through the first partition, I would run: dd if=/dev/sda of=image bs=512 count=2099200. I stop at one past the end of the partition because the first sector is 0, not 1. You can also increase bs for better performance, but you'll have to adjust count accordingly: with bs=1M you divide the count by 2048, since 1M is 2048 sectors of 512 bytes (so count=1025 here - this is completely unrelated to the fact that the partition starts on sector 2048). But I would suggest you do the math yourself.

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table and it seems that the optimizer (CBO) is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
         Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE          1          249                    
    UPDATE     SIG.SIG_QUA_IMG_LT                                   
    NESTED LOOPS SEMI          1     266     249                    
    PARTITION RANGE ALL                                   1     9
    TABLE ACCESS FULL     SIG.SIG_QUA_IMG_LT     1     259     2               1     9
    VIEW     SYS.VW_NSO_1     1     7     247                    
    NESTED LOOPS          1     739     247                    
    NESTED LOOPS          1     677     247                    
    NESTED LOOPS          1     412     246                    
    NESTED LOOPS          1     114     244                    
    INDEX RANGE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62     2                    
    INDEX RANGE SCAN     SIG.QIM_PK     1     52     243                    
    TABLE ACCESS BY GLOBAL INDEX ROWID     SIG.SIG_QUA_IMG_LT     1     298     2               ROWID     ROW L
    INDEX RANGE SCAN     SIG.SIG_QUA_IMG_PKI$     1          1                    
    INDEX RANGE SCAN     WMSYS.WM$NEXTVER_TABLE_NV_INDX     1     265     1                    
    INDEX UNIQUE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62                         
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1                                        
    SET z1.nextver =                                        
    SYS.ltutil.subsversion                                        
    (z1.nextver,                                        
    SYS.ltutil.getcontainedverinrange (z1.nextver,                                        
    'SIG.SIG_QUA_IMG',                                        
    'NpCyPCX3dkOAHSuBMjGioQ==',                                        
    4574,                                        
    4575                                        
    4574                                        
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version-enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table, depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about it.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
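    For reference, a minimal sketch of the statistics refresh suggested above, using DBMS_STATS (owner and table name are taken from the plan output; adjust as needed):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'SIG',
        tabname     => 'SIG_QUA_IMG_LT',
        granularity => 'ALL',   -- global plus partition-level statistics
        cascade     => TRUE);   -- include the indexes
    END;
    /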

  • Deadlock issue in Oracle 10g Partitioned Tables

    Hi ALL,
    I am facing a deadlock issue while inserting data into a partitioned table.
    I get the error "ORA-00600: Deadlock detected". When I look at the trace files, the following line appears in them:
    "Single resource deadlock: blocking enqueue which blocks itself".
    Here is the detail of my test case:
    1. I have a list-partitioned table, with partitioning defined on some business codes.
    2. I have a query that merges data into the partitioned table (it actually compares unique keys between a temporary table and the partitioned table and then issues an insert if the keys don't match; there is no update part).
    3. The temporary table contains transactional data for many business codes.
    4. When calling the above query from multiple (PL/SQL) sessions, I observe that when we merge data into the same partition (from different sessions) the deadlock occurs; otherwise it is OK.
    5. Note that all sessions are executed at the same time. Also note that Commit is called after each session completes. Each session contains 2-3 more queries after the mentioned merge statement.
    Is there an issue with Oracle merge/insert into the same partition (from different sessions)? What is the locking mechanism for this particular case (partitioned tables)?
    My Oracle version is Oracle 10g (10.2.0.4). Kindly advise.
    Thanks,
    QQ.

    Could you print the deadlock tree so we can see the type and mode of the locking? (Please use the 'code' tags - see the FAQ at the top right of the screen - to show the output in fixed font.) Can you list any SQL operated by this session that gets reported in the trace file?
    Does the table reference itself in a foreign key?
    Is this table involved in any referential integrity constraints?
    Do you have a global primary key index, or a local primary key index?
    Are there any triggers on the table - if so, do they contain autonomous transactions?
    At present the only thought that springs to mind is that the merge command has to lock the target table to do the insert/update, but it also has to lock any child table. The mode of the child lock depends on whether it has a suitable index or not, and on whether the child table is also the parent table. If you have two merges to the same partition, one session may get its locks, and the other may be in a state where it can't get one of its locks because it is waiting for the first. (This shouldn't be a self-deadlock, though, but the scenario might be heading in the right direction for a self-deadlock.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." (Stephen Hawking)

  • Issue with updating partitioned table

    Hi,
    Has anyone seen this bug with updating partitioned tables?
    It's very esoteric - it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join has multiple join conditions, we're updating the partition column, that column isn't the first column in the primary key, and the table contains a bit field. We've tried changing just one of these features and the bug disappears.
    We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
    Here's the test case - it does the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested to hear if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately, when it happens on a replicated table it takes down the rep server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
            changed         bit
        )
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
            )
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
      )
    LOCK DATAROWS
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    wrt replication - it is a bit unclear, as not enough information has been given to establish what happened. I am also not sure that your DBAs are accurately telling you what happened - and they may have made the problem worse by not knowing what to do themselves - e.g. 'losing' the log points to the fact that someone doesn't know what they should be doing. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
    wrt ASE versions, I suspect that if there are any differences, they may have to do with endian-ness and not the version of ASE itself. There may be other factors... but I would suggest the best thing would be to open a separate message/case on it.
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • Deadlock issue in Partitioned Tables

    Hi ALL,
    I am facing a deadlock issue while inserting data into a partitioned table.
    I get the error "ORA-00600: Deadlock detected". When I look at the trace files, the following line appears in them:
    "Single resource deadlock: blocking enqueue which blocks itself".
    Here is the detail of my test case:
    1. I have a list-partitioned table, with partitioning defined on some business codes.
    2. I have a query that merges data into the partitioned table (it actually compares unique keys between a temporary table and the partitioned table and then issues an insert if the keys don't match; there is no update part).
    3. The temporary table contains transactional data for many business codes.
    4. When calling the above query from multiple (PL/SQL) sessions, I observe that when we merge data into the same partition (from different sessions) the deadlock occurs; otherwise it is OK.
    5. Note that all sessions are executed at the same time. Also note that Commit is called after each session completes. Each session contains 2-3 more queries after the mentioned merge statement.
    Is there an issue with Oracle merge/insert into the same partition (from different sessions)? What is the locking mechanism for this particular case (partitioned tables)?
    My Oracle version is Oracle 10g (10.2.0.4). Kindly advise.
    Thanks,
    QQ.

    Oracle MERGE statements are slow as they must validate every record before inserting.
    If you use array processing with BULK COLLECT and FORALL with the SAVE EXCEPTIONS clause you can avoid most of the overhead. Just collect your rows in an array, issue a FORALL ... INSERT ... SAVE EXCEPTIONS, and let Oracle handle whatever happens.
    When Oracle is done - and it will be hundreds of times faster than what you are doing now - you can either process or ignore the records in the exceptions array.
    Another solution, more efficient if you can do it, is just to do an INSERT INTO ... SELECT FROM using an exceptions table created with DBMS_ERRLOG.
    www.psoug.org/reference/dbms_errlog.html
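    A minimal sketch of the FORALL ... SAVE EXCEPTIONS approach described above (src_table and target_table are placeholders; the real key-comparison logic would go into the cursor):
    DECLARE
      TYPE t_rows IS TABLE OF src_table%ROWTYPE;
      l_rows      t_rows;
      bulk_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
      CURSOR c IS SELECT * FROM src_table;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;
        BEGIN
          FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
            INSERT INTO target_table VALUES l_rows(i);
        EXCEPTION
          WHEN bulk_errors THEN
            -- duplicate-key (and other row-level) errors land here instead of failing the batch
            FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
              DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                                   ' failed with ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
            END LOOP;
        END;
        COMMIT;
      END LOOP;
      CLOSE c;
    END;
    /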

  • Insert performance issue with Partitioned Table.....

    Hi All,
    I have a performance issue with a partitioned table: without the table being partitioned
    the insert ran in less time, but after partitioning it took more than double.
    1) The table was created initially without any partitions and the insert below took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
    2) Then I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
    Is there any way I can achieve better performance during inserts into this partitioned table?
    [ Similarly, I have another table with 50 million records; the insert took 10 hrs without partitioning
    and 18 hours with the table partitioned... ]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
    Open C1;
    Loop
    Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
    Forall I In 1..C_Rectype.Count
    Insert Into test
         (col1, col2, col3)
    Values
         (val1, val2, val3);
    V_Rec := V_Rec + Nvl(C_Rectype.Count,0);
    Commit;
    Exit When C_Rectype.Count = 0;
    C_Rectype.delete;
    End Loop;
    End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22
    Edited by: user520824 on Jul 16, 2010 9:16 AM

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know beforehand which partition the data is going into, you can save a little processing by specifying the partition in the insert (which may not be a scalable long-term solution) - I'm not 100% sure you can do this on inserts, but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective, and should help you, is if you can do the insert in one statement - insert into/select from. If you are using the loop to avoid filling up undo/rollback, you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to, because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no;
    Edited by: riedelme on Jul 16, 2010 7:42 AM
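    As a further illustration of the APPEND point above, a minimal sketch of a direct-path insert that also names the target partition (test and col1-col3 come from the loop code earlier in the thread; the partition name p2010 and source_table are placeholders):
    INSERT /*+ APPEND */ INTO test PARTITION (p2010)
           (col1, col2, col3)
    SELECT s.col1, s.col2, s.col3
      FROM source_table s;
    COMMIT;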

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, though I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible) and how you are taking the export, please?
    Did you try taking a SQL trace of the export process to see what is going on behind the scenes? Here is an introduction, in case you need it:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
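    For reference, a minimal sketch of pulling the table DDL with dbms_metadata from SQL*Plus (schema and table names are placeholders):
    SET LONG 1000000
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_BIG_TABLE', 'MY_SCHEMA') FROM dual;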

  • ISSUE: Constraint on Partition table going into UNUSABLE state

    Hi,
    I am facing a problem with an Oracle partitioned table.
    When I drop or add a partition on the partitioned table, the primary key constraint's index goes "UNUSABLE".
    Please tell me how to keep the constraint's index in a valid state when I drop or add a partition on the partitioned table.
    Please see an example of how I am facing this problem ->
    TTT_USSD_USERS_DATEWISE table type= Partition table
    SQL> select PARTITION_NAME,HIGH_VALUE,PARTITION_POSITION,TABLESPACE_NAME from user_tab_partitions where TABLE_NAME='TTT_USSD_USERS_DATEWISE';
    PARTITION_NAME HIGH_VALUE PARTITION_POSITION TABLESPACE_NAME
    P_USSD02 '2012-09' 1 TS_USSD2
    P_USSD01 '2012-10' 2 TS_USSD1
    P_USSDD MAXVALUE 3 TS_USSDD
    SQL> alter table TTT_USSD_USERS_DATEWISE add CONSTRAINT PK_REPORTER PRIMARY KEY (MSISDN);
    SQL> select index_name,status from user_indexes where table_name ='TTT_USSD_USERS_DATEWISE';
    INDEX_NAME STATUS
    PK_REPORTER VALID
    SQL> SQL> alter table TTT_USSD_USERS_DATEWISE drop partition P_USSD02;
    Table altered.
    SQL> alter table TTT_USSD_USERS_DATEWISE split partition P_USSDD at ('2012-11') INTO (partition P_USSD02 tablespace TS_USSD2, partition P_USSDD tablespace TS_USSDD);
    Table altered.
    SQL> select PARTITION_NAME,HIGH_VALUE,PARTITION_POSITION,TABLESPACE_NAME from user_tab_partitions where TABLE_NAME='TTT_USSD_USERS_DATEWISE';
    PARTITION_NAME HIGH_VALUE PARTITION_POSITION TABLESPACE_NAME
    P_USSD01 '2012-10' 1 TS_USSD1
    P_USSD02 '2012-11' 2 TS_USSD2
    P_USSDD MAXVALUE 3 TS_USSDD
    SQL> select index_name,status from user_indexes where table_name ='TTT_USSD_USERS_DATEWISE';
    INDEX_NAME STATUS
    PK_REPORTER UNUSABLE
    Reg...

    Dear Sir, please understand my exact issue; I have been searching for a solution for a long time.
    In my office we have a production server. On that server there are 24x7 insertions into an Oracle partitioned table.
    On that table we have the primary key "PK_REPORTER1".
    And on the server we rotate partitions monthly (and on some other tables daily), like this -
    1)
    SQL> select PARTITION_NAME,HIGH_VALUE,PARTITION_POSITION,TABLESPACE_NAME from user_tab_partitions where TABLE_NAME='TTT_USSD_USERS_DATEWISE';
    PARTITION_NAME HIGH_VALUE PARTITION_POSITION TABLESPACE_NAME
    P_USSD02 '2013-01' 1 TS_USSD2
    P_USSD01 '2013-02' 2 TS_USSD1
    P_USSDD MAXVALUE 3 TS_USSDD
    2) then drop the very last partition.
    3) and again add a new partition for the new month.
    << The issue is that when I add or drop a partition here, the primary key index goes into an UNUSABLE state >>
    << and in production there is no time to rebuild or re-create these indexes >>
    Reg....
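    For reference, partition maintenance DDL can keep global indexes usable with the UPDATE GLOBAL INDEXES clause (at the cost of maintaining the index during the DDL) - a minimal sketch against the commands shown in the first example above:
    ALTER TABLE TTT_USSD_USERS_DATEWISE
      DROP PARTITION P_USSD02 UPDATE GLOBAL INDEXES;
    ALTER TABLE TTT_USSD_USERS_DATEWISE
      SPLIT PARTITION P_USSDD AT ('2012-11')
      INTO (PARTITION P_USSD02 TABLESPACE TS_USSD2,
            PARTITION P_USSDD  TABLESPACE TS_USSDD)
      UPDATE GLOBAL INDEXES;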

  • One laptop issue and one partition table request.

    Title says it all.
    I'd like one partition table layout. I want to install Windows Vista (Home Basic - HP's OEM) alongside Arch Linux (dual boot). My hard disk has only 160GB.
    I definitely want a partition for my data files.
    Then, I want to learn a way to make my laptop fan quiet down. It is very loud all the time.
    ... and to get a lower CPU temp.
    That's it, could you help me please?

    This is an issue with TAB.
    Try to include this code and check:
    DATA:BEGIN OF tab_temp OCCURS 0,
    CDOCT LIKE ZDNTINF-CDOCT,
    DDTEXT LIKE DD07T-DDTEXT,
    END OF tab_temp.
    SELECT a~CDOCT b~DDTEXT FROM ZDNTDEPDOC as a
    INNER JOIN DD07T as b
    on a~CDOCT = b~DOMVALUE_L
    INTO TABLE TAB_TEMP
    WHERE b~DOMNAME = 'ZCDCT'.
      loop at TAB_TEMP.
        tab = TAB_TEMP-CDOCT. append tab.
        tab = TAB_TEMP-DDTEXT. append tab.
      endloop.

  • No more optical drive and windows 8 issue: "Windows cannot be installed on this disk as it has an MBR partition table."

    I have an Early 2011 13" MBP going strong with an SSD and original HDD installed in the optical drive. However, having had success with windows 7 on my old HDD I was planning to install Windows 8.1 (from .iso) on my SSD. However, for the past day or so it's been a nightmare.
    I tried the official procedure, but by editing the Boot Camp Info.plist so I could boot from USB (you can't install Windows from an external optical drive, apparently), partitioning as Boot Camp wanted to.
    I then tried it a number of different ways using Disk Utility etc., and with empty space.
    However, I always seem to end up with the error "Windows cannot be installed on this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed on GPT disks."
    I read some interesting stuff here: http://www.royhochstenbach.com/installing-windows-8-1-on-a-2013-mac-pro/
    He points out that
    "Windows 7 and 8 in x64 support EFI. Normally if you install Windows on a Mac and use the installation DVD, it boots into regular BIOS mode, thus can be installed on an MBR partition. I tried the same, but since the Mac Pro doesn’t have an optical drive I had to use an external drive. And apparently the Mac boots external optical drives in EFI mode too. The Bootcamp wizard is aware of this, and creates a GPT partition on a non-superdrive Mac but an MBR partition on a superdrive Mac."
    This means Boot Camp is essentially creating the wrong type of partition table?
    My real question is: how do I install Windows 8? I'm really on my last legs!
    I really don't want to have to open this thing up and reinsert my optical drive as its really really difficult to get out of the enclosure (would probably have to break it).
    A huge thanks in advance!

    Hi,
    Have you tried that suggestion?
    You could also use these commands to check and install again:
    Inside the Windows installer, hit Shift+F10 to get a command prompt, then run diskpart and type "list disk" to display a list of disks and information about them, such as their size, the amount of available free space, whether the disk is a basic or dynamic disk,
    and whether the disk uses the master boot record (MBR) or GUID partition table (GPT) partition style.
    Then select the target disk, zap the drive (with the "clean" command), convert it to GPT ("convert gpt"), and create the special GPT/EFI partitions.
    Step-by-step instructions are here for reference:
    HOW TO: Use the Diskpart.efi Utility to Create a GUID Partition Table Partition on a Raw Disk in Windows
    http://support.microsoft.com/kb/297800?wa=wsignin1.0
    Then reboot so the firmware finds those partitions and adds the disk to the EFI-native boot order (Windows installer checks this).
    Karen Hu
    TechNet Community Support

  • Importing partitioned table data into non-partitioned table

    Hi Friends,
    SOURCE SERVER
    OS:Linux
    Database Version:10.2.0.2.0
    I have exported one partition of my partitioned table like below:
    expdp system/manager DIRECTORY=DIR4 DUMPFILE=mapping.dmp LOGFILE=mapping_exp.log TABLES=MAPPING.MAPPING:DATASET_NAP
    TARGET SERVER
    OS:Linux
    Database Version:10.2.0.4.0
    Now when I am importing into another server I am getting the error below:
    Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 17 January, 2012 11:22:32
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MAPPING"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "MAPPING"."SYS_IMPORT_FULL_01":  MAPPING/******** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-39083: Object type TABLE failed to create with error:
    ORA-00959: tablespace 'MAPPING_ABC' does not exist
    Failing sql is:
    CREATE TABLE "MAPPING"."MAPPING" ("SAP_ID" NUMBER(38,0) NOT NULL ENABLE, "TG_ID" NUMBER(38,0) NOT NULL ENABLE, "TT_ID" NUMBER(38,0) NOT NULL ENABLE, "PARENT_CT_ID" NUMBER(38,0), "MAPPINGTIME" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE, "CLASS" NUMBER(38,0) NOT NULL ENABLE, "TYPE" NUMBER(38,0) NOT NULL ENABLE, "ID" NUMBER(38,0) NOT NULL ENABLE, "UREID"
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_TG_ID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."PK_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_UREID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_V2" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_PARENT_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."CKC_SMAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."PK_MAPPING_ITM" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TG" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_PART" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_TIME_T" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_DAY" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_BTMP" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2_T" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Job "MAPPING"."SYS_IMPORT_FULL_01" completed with 52 error(s) at 11:22:39Please help..!!
    Regards
    Umesh Gupta

    Yes, I have tried that option as well,
    but when I write one tablespace name in the REMAP_TABLESPACE clause, it gives an error for the second one... and if I include the 1st and 2nd tablespaces it gives an error for the 3rd one...
    One option I know of is to write all the tablespace names in REMAP_TABLESPACE, but that is a lengthy process... is there any other way possible????
    Regards
    Umesh

    AFAIK the option you have is what I recommended... though it is lengthy :-(
    Wait for some EXPERT and GURU's review on this issue .........
    Good luck ....
    --neeraj
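    For reference, REMAP_TABLESPACE can be given multiple times in a single impdp run (one entry per source tablespace), so the list only has to be written once - a minimal sketch, where every tablespace name other than MAPPING_ABC is a placeholder:
    impdp system/manager DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log \
      TABLE_EXISTS_ACTION=APPEND \
      REMAP_TABLESPACE=MAPPING_ABC:USERS \
      REMAP_TABLESPACE=MAPPING_DEF:USERS \
      REMAP_TABLESPACE=MAPPING_XYZ:USERS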

  • Partition Table Looks Different Between OSX and Windows 7

    Hey all,
    I recently replaced the hard drive on my 2007 iMac, going from the 320GB drive to a 1 TB drive. It actually worked! The previous drive was failing in very odd ways, though booting into the Windows side (more on dual boot later) always seemed to work, and S.M.A.R.T. always reported that the physical drive seemed OK.
    The previous drive (320 GB) had around 200GB devoted to OSX and 100GB partitioned off for a working Windows 7 installation (custom installed x64 Win7 Ultimate). I had the Windows system image backed up to my NAS, and had a Windows system bootable disc to restore that image.
    After replacing the drive (and almost crying that I had actually done it right), I first restored OSX from a Time Machine backup, and let it take the full 1TB of space as Journalled HFS+. Then, I used Disk Utility to shrink OSX down to 500GB, and created a second partition (formatted to NTFS) with the remaining 500GB.
    Now, restoring a Windows system image is an odd thing, as it tries to do a lot of partition work as opposed to simply restoring the Windows install to a partition. I tried Macrium Reflect first (made a backup in that, too), and it looked like it was going to let me restore to the second partition. It completed the restore...and the entire hard drive was hosed. Partitions had been moved, renamed, resized, and nothing was bootable. I had to use Recovery from an external USB thumb drive to go back to the single, full-drive install of OSX.
    Then I tried again. Made the second NTFS partition and used the basic Windows System Restore disk to restore from the standard system image I had on the NAS. I was not expecting this to work. But it did. Windows started showing up in Startup Manager when "option" was pressed on bootup, and both OSX and Windows booted properly and ran fine. This is where I (finally) get to the supreme oddities:
    OSX Disk Utility still reports two 500GB partitions, one for OSX and one for Windows.
    In OSX the Windows partition shows as having NO DATA on it. Not sure what would happen if I tried to write a file to it when mounted, but there is no data on it when viewed from OSX (I was always able to see the Windows files when I mounted that partition on the previous drive).
    The Windows partition does not show up as a valid bootable system in System Prefs --> Startup Disk (naturally, I suppose, since OSX doesn't think there is anything there).
    From the WINDOWS side, Windows still sees the old partition table: 200GB for the "unknown" HFS partition, and then the rest of the space can be devoted to Windows (it started as 100GB, but I was able to expand it to use the remaining ~750GB!). Windows thinks it can have 750GB of space even though I know its partition is only 500GB in size!
    Windows cannot see the OSX HFS partition data using HFSExplorer. It CAN see the HFS partition on the attached backup drive (the drive I use for Time Machine).
    GParted (a partition program on a Linux bootable CD-ROM) shows the same partitions as OSX Disk Utility (2x500GB), and also thinks the Windows NTFS partition is empty (all space reports as "unused").
    Did I mention both OSX and Windows work fine???
    There are, of course, two other partitions on the drive: the first partition is the 200MB one I always see (EFI/GUID portion?), and then between the HFS and NTFS partitions is the 600MB recovery partition (which also shows at option-pressed boot time). OSX, GParted, and Windows see all four partitions, and in the same order. It is just that Windows sees the wrong sizes, and OSX cannot see any data in the Windows partition.
    Surely this is all going to break spectacularly at some point, isn't it? What if I ever did write a file to the Windows side from OSX, or what if OSX starts taking more space than the 200GB Windows thinks is the max for that partition? What if I try to make Windows use more than 500GB because it thinks it has almost 800GB to use? What if I defrag the Windows drive?
    I had no idea a partition table could look this goofy and yet still have everything be bootable and workable. Is there something I can do to get everything in sync? Basically, I am assuming I need to get Windows to do some low-level kung fu in Disk Manager in order to properly get everything lined up with the "right" partitions as reported by both GParted and OSX Disk Utility. But how do I do that?
    By the way, any ideas that totally nuke the drive and start from scratch are completely fine (if it seems like they are doing something different enough that I'd give it a try). I have good backups of both OSX and Windows and have restored them about a half dozen times already as I dealt with the previous failing hard drive and with trying to get dual-boot working again. Not to mention, this iMac is now my secondary machine to the new Mac Mini I got a couple weeks back when I wasn't sure how much more life I was going to get from this 6+ year old iMac.
    Thanks for listening to me ramble about this very odd issue, and a huge THANK YOU in advance to anyone who has ideas to help.
    Thanks,
    sutekh138

    Update:
    I am pretty sure the issue is a simple GPT/MBR discrepancy.
    I installed rEFIt and used its partitioning tool (gptsync, built in) upon bootup. It was able to show the GPT table and the MBR table, but it thinks the second partition of the drive (the Mac OSX bootable partition) is "extended" in the MBR table and says "will not touch this disk."
    However, it does look like an MBR sync should be straightforward, as there are four partitions in the GPT table and four in the MBR (and MBR allows a max of four, AFAIK). I just need gptsync to relax some rules. I found a link to a supposedly newer version of gptsync compiled for OSX, so I will try that later.
    First, I will try Partition Wizard, a free tool I found for the Windows side. It has a "Repair MBR" option that I would have tried last night if I weren't running a new Windows Image Backup in case all of this goes haywire.  *smile*  The PW tool also has an option to change the MBR over to GPT entirely. That might work, but then I am not sure Windows 7 will boot (from what I read, x64 Win7 running on EFI-enabled hardware should work, but who knows).
    Anyway, I will try the following things, in order, until something works, when I get home tonight:
    From Windows, run Partition Wizard and try "Repair MBR".
    From OSX, download recent gptsync and try to run it.
    From Windows, use Partition Wizard to do a full MBR --> GPT conversion.
    Nuke the Windows partition in OSX Disk Utility, expand the HFS partition to take up the whole drive, and then add a Windows-bootable partition via Boot Camp-ish command line commands (diskutil). Because if nothing else works, I have to assume I just created the partitions wrong in the first place such that a Windows restore miraculously works, but the partition weirdness is just a timebomb waiting to happen.
    Finally, if none of the above work, I'll just get things back to the way they now work and wait for the timebomb to (possibly never) go off.  *smile*
    I'll update this thread if I get something figured out, in case anyone else stumbles upon it...
    Thanks,
    sutekh138

  • How to add new column in partition table

    Hi,
    In an Oracle 10g database, I have one table (X) with list partitioning. I have added one new column to X with an ALTER TABLE command. Please advise whether any other command needs to be executed, since it is a partitioned table.
    The X table is used for partition swapping with another table (Y). I have added the same column to table Y as well. Will there be any issue while swapping the partition with the following command?
    alter table X exchange partition partition_name with table Y
    Version Details:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    OS: Solaris 5.10
    Thanks in advance

    You would have to explicitly put the partition details into the CREATE TABLE ... AS SELECT.
    ops$tkyte%ORA10GR2> create table t1
    2 PARTITION BY RANGE (dt)
    3 (
    4 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
    5 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
    6 PARTITION junk VALUES LESS THAN (MAXVALUE)
    7 )
    8 COMPRESS
    9 as
    10 select * from t;
    Table created.
    Source:http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:69076630635645
    Hth
    Girish Sharma
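    For reference, a minimal sketch of the column addition plus the exchange, assuming X and Y are kept structurally identical (the column name and type are placeholders):
    ALTER TABLE X ADD (new_col VARCHAR2(30));
    ALTER TABLE Y ADD (new_col VARCHAR2(30));
    ALTER TABLE X EXCHANGE PARTITION partition_name WITH TABLE Y
      INCLUDING INDEXES WITHOUT VALIDATION;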

  • Unrecognized partition table for drive 80 error??

    Currently running Leopard on a MacBook Pro.
    Formatted with Boot Camp and running Windows Vista Ultimate.
    Installation went fine; I reformatted the drive to NTFS, 45 GB.
    But when booting into Windows I get this error message:
    "unrecognized partition table for drive 80"
    Then the system boots into Vista as normal, but that error message is annoying.
    It tells me to fix bootmgr using fdisk. How do I do that?
    Thanks.

    I previously had Vista installed under the beta with Tiger and there were no issues at all regarding boot times or error messages (boot times being quite nippy, considering).
    I did reformat the partitioned hard drive space as NTFS inside the Vista installation. As I have only been a switcher for a year or so, I imagine there may be something I am missing, but given the text shown when I try to boot into Vista, it looks to be the boot loader that is erroring, and that looks the same as a GRUB loader failing.
    I just do not get why a BETA product would work fine, and then after a fresh installation of the new Mac OS X 10.5 with the final version of Boot Camp I would receive errors like this.
    I have reclaimed the hard drive space and split the disk again, but the same error persists. Previously I was showing off my MacBook to workmates, demonstrating how it handled Windows as well as the user-friendly OS X. Now all I can show them is a 5-minute loading screen full of non-booting Vista as they look at me like some kind of idiot. (Which, depending on who you ask, is possibly correct.)
