Re-org of partitioned tables

Is there a way to reorganize a range-partitioned table?
I have a table that is range partitioned only up to November 2007, so all newer records have landed in the MAX partition. The MAX partition is now so large (about 250 million rows) that splitting it takes a very long time, and while the split runs the indexes on the table become unusable. I am looking for a better way to do the split. Is a reorg possible on the MAX partition, and during the reorg can the interim table already be created with the new partitions?
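For reference, the split I am running now looks roughly like this (a sketch; the table, partition and bound names are made up):
ALTER TABLE my_table
  SPLIT PARTITION p_max AT (TO_DATE('2007-12-01', 'YYYY-MM-DD'))
  INTO (PARTITION p_2007_11, PARTITION p_max);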

Hello,
Can you post a link to the section where you read this?
I just tested the following scenario in my test setup and it worked just fine. You should try this test as well.
-- MASTER TABLE
CREATE TABLE TEST_PART_AA
(
  KEY       VARCHAR2(23 BYTE)                       NULL,
  KEY_DATE  DATE                                    NULL
)
PARTITION BY RANGE (KEY_DATE)
(
  PARTITION TEST_PART_A_200902 VALUES LESS THAN (TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS,
  PARTITION TEST_PART_MAX VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
)
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE UNIQUE INDEX TMP$$_TEST_PART_A_PK0 ON TEST_PART_AA
(KEY)
LOGGING
NOPARALLEL;
ALTER TABLE TEST_PART_AA ADD (
  CONSTRAINT TMP$$_TEST_PART_A_PK0
PRIMARY KEY
(KEY));
-- INTERIM TABLE
CREATE TABLE TEST_PART_AC
(
  KEY       VARCHAR2(23 BYTE)                       NULL,
  KEY_DATE  DATE                                    NULL
)
PARTITION BY RANGE (KEY_DATE)
(
  PARTITION TEST_PART_A_200902 VALUES LESS THAN (TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS,
  PARTITION TEST_PART_A_200903 VALUES LESS THAN (TO_DATE(' 2009-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS,
  PARTITION TEST_PART_A_200904 VALUES LESS THAN (TO_DATE(' 2009-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS,
  PARTITION TEST_PART_MAX VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
)
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
Insert into TEST_PART_AA
   (KEY, KEY_DATE)
Values
   ('3', TO_DATE('02/08/2009 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into TEST_PART_AA
   (KEY, KEY_DATE)
Values
   ('1', TO_DATE('04/08/2009 16:22:49', 'MM/DD/YYYY HH24:MI:SS'));
Insert into TEST_PART_AA
   (KEY, KEY_DATE)
Values
   ('2', TO_DATE('03/08/2009 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
COMMIT;
-- REDEFINITION PL/SQL BLOCK
DECLARE
   num_errors   NUMBER;
BEGIN
   DBMS_REDEFINITION.can_redef_table ('USIUSER',
                                      'TEST_PART_AA',
                                      DBMS_REDEFINITION.cons_use_pk,
                                      NULL);
   DBMS_REDEFINITION.start_redef_table ('USIUSER',
                                        'TEST_PART_AA',
                                        'TEST_PART_AC',
                                        NULL,
                                        DBMS_REDEFINITION.cons_use_pk,
                                        NULL,
                                        NULL);
   DBMS_REDEFINITION.copy_table_dependents ('USIUSER',
                                            'TEST_PART_AA',
                                            'TEST_PART_AC',
                                            DBMS_REDEFINITION.cons_orig_params,
                                            TRUE,
                                            TRUE,
                                            TRUE,
                                            FALSE,
                                            num_errors,
                                            FALSE);
   DBMS_OUTPUT.put_line ('ERRORS OCCURRED WHEN COPYING DEPENDENT OBJECTS: '
                         || num_errors);
   DBMS_REDEFINITION.finish_redef_table ('USIUSER',
                                         'TEST_PART_AA',
                                         'TEST_PART_AC',
                                         NULL);
END;
/
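As a quick check after finish_redef_table (a sketch, not part of the original test output), the new partitions should now show up under the original table name:
SELECT partition_name, high_value
  FROM user_tab_partitions
 WHERE table_name = 'TEST_PART_AA';
Regards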

Similar Messages

  • Error while creating partition table

    Hi friends, I am getting an error while trying to create a partitioned table using range partitioning:
    ORA-00906: missing left parenthesis. I used the following statement to create the partitioned table:
    CREATE TABLE SAMPLE_ORDERS
    (ORDER_NUMBER NUMBER,
    ORDER_DATE DATE,
    CUST_NUM NUMBER,
    TOTAL_PRICE NUMBER,
    TOTAL_TAX NUMBER,
    TOTAL_SHIPPING NUMBER)
    PARTITION BY RANGE(ORDER_DATE)
    PARTITION SO99Q1 VALUES LESS THAN TO_DATE(‘01-APR-1999’, ‘DD-MON-YYYY’),
    PARTITION SO99Q2 VALUES LESS THAN TO_DATE(‘01-JUL-1999’, ‘DD-MON-YYYY’),
    PARTITION SO99Q3 VALUES LESS THAN TO_DATE(‘01-OCT-1999’, ‘DD-MON-YYYY’),
    PARTITION SO99Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q1 VALUES LESS THAN TO_DATE(‘01-APR-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q2 VALUES LESS THAN TO_DATE(‘01-JUL-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q3 VALUES LESS THAN TO_DATE(‘01-OCT-2000’, ‘DD-MON-YYYY’),
    PARTITION SO00Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2001’, ‘DD-MON-YYYY’)
    ;

    There is more than one problem with that statement (missing parentheses and the non-ASCII quote characters). Try this instead:
    CREATE TABLE SAMPLE_ORDERS
    (ORDER_NUMBER NUMBER,
    ORDER_DATE DATE,
    CUST_NUM NUMBER,
    TOTAL_PRICE NUMBER,
    TOTAL_TAX NUMBER,
    TOTAL_SHIPPING NUMBER)
    PARTITION BY RANGE(ORDER_DATE) (
    PARTITION SO99Q1 VALUES LESS THAN (TO_DATE('01-APR-1999', 'DD-MON-YYYY')),
    PARTITION SO99Q2 VALUES LESS THAN (TO_DATE('01-JUL-1999', 'DD-MON-YYYY')),
    PARTITION SO99Q3 VALUES LESS THAN (TO_DATE('01-OCT-1999', 'DD-MON-YYYY')),
    PARTITION SO99Q4 VALUES LESS THAN (TO_DATE('01-JAN-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q1 VALUES LESS THAN (TO_DATE('01-APR-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q2 VALUES LESS THAN (TO_DATE('01-JUL-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q3 VALUES LESS THAN (TO_DATE('01-OCT-2000', 'DD-MON-YYYY')),
    PARTITION SO00Q4 VALUES LESS THAN (TO_DATE('01-JAN-2001', 'DD-MON-YYYY')));
    In the future, if you are having problems, go to Morgan's Library at www.psoug.org.
    Find a working demo, copy it, then modify it for your purposes.

  • FAT32 Partitions on GUID Partition Table External Drive Not Seen in Vista

    I have an interesting predicament. I have repartitioned my external hard drive to have 5 partitions: 2 HFS+ and 3 FAT32. The external drive has a GUID Partition Table. The drive was formatted and partitioned using Disk Utility.
    When I boot into Windows Vista using Boot Camp, Vista will ONLY mount the first Windows compatible partition. For example, if I have two partitions disk2s2 and disk2s3 (both FAT32), Vista only mounts disk2s2.
    The described partition setup worked well with Master Boot Record, minus the bug with Time Machine choking every so often.
    Any ideas on how I can get Vista to recognize these partitions?

    Hi Karl87,
    According to the German computer magazine c't, there is a limitation/flaw in the Master Boot Record (MBR):
    Only the first four partitions are listed in the MBR, and since one of those entries is taken by the required hidden EFI partition and the next two entries are taken by the OS X partitions, only one of your three FAT partitions is shown.
    This limitation to four partition entries is quite old.
    See here for further information: http://en.wikipedia.org/wiki/Masterbootrecord
    To my knowledge there is no solution for this.
    Regards
    Stefan

  • Deadlock issue in Partitioned Tables

    Hi ALL,
    I am facing an issue with deadlocks while inserting data into a partitioned table.
    I get the error "ORA-00600: Deadlock detected". When I look at the trace files, the following line appears in them:
    "Single resource deadlock: blocking enqueue which blocks itself".
    Here is the detail of my test case:
    1. I have a list-partitioned table, with partitioning defined on some business codes.
    2. I have a query that merges data into the partitioned table (it compares unique keys between a temporary table and the partitioned table and then issues an insert if the keys do not match; there is no update part).
    3. The temporary table contains transactional data against many business codes.
    4. When calling the above query from multiple (PL/SQL) sessions, I observe that the deadlock occurs when we merge data into the same partition (from different sessions); otherwise it is OK.
    5. Note that all sessions are executed at the same time. Also note that a commit is issued after each session completes. Each session contains 2-3 more queries after the mentioned merge statement.
    Is there an issue with Oracle merge/insert on the same partition (from different sessions)? What is the locking mechanism for this particular case (partitioned tables)?
    My Oracle version is Oracle 10g (10.2.0.4). Kindly advise.
    Thanks,
    QQ.

    Oracle MERGE statements are slow as they must validate every record before insert.
    If you use array processing with BULK COLLECT and FORALL with the SAVE EXCEPTIONS clause you can avoid most of the overhead. Just collect your rows in an array, issue a FORALL INSERT SAVE EXCEPTIONS and let Oracle handle whatever happens.
    When Oracle is done, and it will be hundreds of times faster than what you are doing now, you can either process or ignore the records in the exceptions array.
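    A minimal sketch of that approach (the target and staging table names are hypothetical):
    DECLARE
       TYPE t_rows IS TABLE OF target_table%ROWTYPE;   -- hypothetical target table
       l_rows        t_rows;
       bulk_errors   EXCEPTION;
       PRAGMA EXCEPTION_INIT (bulk_errors, -24381);    -- ORA-24381: error(s) in array DML
       CURSOR c_src IS SELECT * FROM staging_table;    -- hypothetical source table
    BEGIN
       OPEN c_src;
       LOOP
          FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
          EXIT WHEN l_rows.COUNT = 0;
          BEGIN
             FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
                INSERT INTO target_table VALUES l_rows (i);
          EXCEPTION
             WHEN bulk_errors THEN
                -- the failed rows (e.g. key collisions) end up here; process or ignore them
                FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
                   DBMS_OUTPUT.put_line ('row '    || SQL%BULK_EXCEPTIONS (j).ERROR_INDEX
                                      || ' error ' || SQL%BULK_EXCEPTIONS (j).ERROR_CODE);
                END LOOP;
          END;
       END LOOP;
       CLOSE c_src;
       COMMIT;
    END;
    /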
    Another solution, more efficient if you can do it, is to just do an INSERT INTO ... SELECT FROM using an error logging table created with DBMS_ERRLOG.
    www.psoug.org/reference/dbms_errlog.html
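    A sketch of that variant (hypothetical table names; the error logging table receives the rejected rows instead of failing the whole statement):
    -- one-time setup; creates ERR$_TARGET_TABLE by default
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG (dml_table_name => 'TARGET_TABLE');

    INSERT INTO target_table
    SELECT *
      FROM staging_table
       LOG ERRORS INTO err$_target_table ('merge replacement') REJECT LIMIT UNLIMITED;

    -- inspect whatever was rejected (duplicates, constraint violations, ...)
    SELECT ora_err_number$, ora_err_mesg$ FROM err$_target_table;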

  • Rebuild indexes on partitioned table

    Hi
    We have a very large table, hh_advance, which holds 1.5 billion records and is partitioned by range.
    We have a non-partitioned (global) index on this table.
    After adding partitions to hh_advance, the index has become unusable.
    The index is very large and we need to rebuild it in the least possible time.
    What is the best practice in this case?
    thanx
    kb

    KB.. wrote:
    Hi
    We have a very large table hh_advance which hold 1.5billion records and is partitioned by range
    we a index (non parititioned) index on this table.
    after adding partitions to hh_advance table the index has become unusable
    the size of index is too huge and we need to rebuild the index in least possible time
    what is the best used practice in this case
    kb,
    you haven't mentioned your database version, but in many cases the best practice would be to use the "UPDATE [GLOBAL] INDEXES" clause of the ALTER TABLE command if you're on 9i or later, to prevent any (global) indexes from becoming unusable.
    By the way, adding a partition to a range-partitioned table shouldn't invalidate either a global or a local index.
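    For example (a sketch; the partition and index names below are made up, hh_advance is the table from your post):
    -- a maintenance operation that would otherwise mark global indexes UNUSABLE
    ALTER TABLE hh_advance
      DROP PARTITION p_2008_12
      UPDATE GLOBAL INDEXES;
    -- and if an index is already UNUSABLE, a parallel NOLOGGING rebuild is usually the quickest way back
    ALTER INDEX hh_advance_ix REBUILD PARALLEL 8 NOLOGGING;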
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to recover corrupted partition table?

    I have a disk that somehow got the partition table corrupted. I am getting lots of "Bad Geometry" errors that state the label says one size while the drive says something different.
    I have tried running the TestDisk (http://www.cgsecurity.org) application, and it seems to find the partitions just fine, and I believe that my data is intact. I just need to correct the partition table and access the UFS file system. TestDisk reports that "write_part_sun" is not implemented, so it cannot correct the problem. I have checked the source code for the current and beta versions of the program and the function still has not been written.
    Is there another application that I can use to rescue my corrupted partition table? I am not finding much support for Solaris out on the net.
    Thanks.

    Is there another application that I can use to rescue my corrupted partition table? I am not finding much support for Solaris out on the net.
    Hi
    There is some, but you have to look hard.
    I wrote a partition table editor:
    http://paulf.free.fr/pfdisk.html
    If your LBA values are good, you should be able to restore the CHS values from them. If both the LBA and CHS values are corrupt, and you have a good idea of the sizes of the partitions, then with a calculator you should be able to work out what the values should have been.
    Paul

  • Problem in partitioning table

    Hi, I'm getting an error when I execute the following query.
    Create Table Sample_Orders
    (Ord_No Number,
    Order_Date Date)
    Partition By Range(Order_Date)
    Partition SO99Q1 Values Less Than To_Date(‘01-APR-1999’, ‘DD-MON-YYYY’),
    Partition SO99Q2 Values Less Than To_Date(‘01-JUL-1999’, ‘DD-MON-YYYY’));
    Partition SO99Q1 Values Less Than To_Date(‘01-APR-1999’, ‘DD-MON-YYYY’)),
    ORA-00911: invalid character
    Thanks in advance

    If only it was easy to find this type of information on the internet through searching.
    http://www.psoug.org/reference/partitions.html
    Your syntax is wrong. I can tell you where, but I think you'll learn more by reading than by having me tell you.

  • [SOLVED] Corrupt partition table

    A couple of days ago I was transferring large files to my 1TB external Seagate USB drive (NTFS).  It was going smoothly, then on one file it stopped and Thunar gave an error.
    When looking at the directory with ls -l the file's attributes were question marks (?)
    I was able to access all the other files fine, but was unable to delete or access the corrupt directory.
    Now, days later, I am unable to mount the USB drive.
    sudo fdisk -l gives this:
    Disk /dev/sdc: 931.5 GiB, 1000204885504 bytes, 1953525167 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x6e697373
    This doesn't look like a partition table. Probably you selected the wrong device.
    Device Boot Start End Blocks Id System
    /dev/sdc1 ? 1936269394 3772285809 918008208 4f QNX4.x 3rd part
    /dev/sdc2 ? 1917848077 2462285169 272218546+ 73 Unknown
    /dev/sdc3 ? 1818575915 2362751050 272087568 2b Unknown
    /dev/sdc4 ? 2844524554 2844579527 27487 61 SpeedStor
    Partition table entries are not in disk order.
    I run sudo ntfsck /dev/sdc:
    $ sudo ntfsck /dev/sdc
    file record corrupted at offset 3221225472 (0xc0000000).
    Loading $MFT runlist failed. Trying $MFTMirr.
    First attribute must be after the header (0).
    and it just seems to be stuck there, I've left it running for a few hours...
    Is my disk screwed?
    *edit*: I got it! Check my other post further down this thread

    I fixed it finally!
    I had set the drive aside and was sad about it for a while, and decided to give another shot at googling and repair, here's what I did to get it working:
    install testdisk: https://www.archlinux.org/packages/extr … /testdisk/
    sudo testdisk
    [ Create ] (new log file)
    selected the corrupted drive
    >[Proceed]
    >[Intel ] ( By default it picked None... I just guessed Intel/PC)
    >[Analyse]
    >[Quick Search]
    >[Continue]
    >[Deeper Search]
    >Stop (press enter to stop after it finds another entry)
    >Down arrow, then right arrow to set it Primary partition (non-bootable for my case)
    >[Write ]
    Mount your drive

  • Cloned harddrive has unknown partition table

    Hi all,
    I've followed this wiki article on how to clone a full disk: https://wiki.archlinux.org/index.php/Disk_Cloning.
    Specifically:
    dd if=/dev/sdb of=/dev/sdc bs=4096 conv=notrunc,noerror
    The drive itself only has one partition that is encrypted with LUKS.
    dd finished copying everything over so naturally I wanted to verify that everything works as expected on the cloned harddrive.
    Now I've rebooted the machine to make sure that the kernel detects the partition table changes, but there's still something strange going on.
    fdisk -l can see the partition table fine, but parted cannot. Here's the output:
    $ sudo fdisk -l
    Disk /dev/sdc: 931.5 GiB, 1000194400256 bytes, 1953504688 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x36948ce8
    Device Boot Start End Blocks Id System
    /dev/sdc1 2048 1953525167 976761560 83 Linux
    $ sudo parted /dev/sdc print
    Error: Can't have a partition outside the disk!
    Model: Samsung M3 Portable (scsi)
    Disk /dev/sdc: 1000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:
    Strange as it is, I can open and mount the encrypted partition and see the files:
    $ sudo cryptsetup open --type luks /dev/sdc1 crypt2
    $ sudo mount -t xfs /dev/mapper/crypt2 /mnt/crypt2/
    $ ls -l /mnt/crypt2/
    I'm using XFS so I wanted to try and run a filesystem check on it. After unmounting the partition I run:
    $ sudo umount /mnt/crypt2/
    $ sudo xfs_repair -n /dev/mapper/crypt2
    Phase 1 - find and verify superblock...
    xfs_repair: error - read only 0 of 512 bytes
    Note that crypt2 is still open and shows up in fdisk -l. So XFS can't find the superblock to check the filesystem but I can somehow still mount it just fine? What am I missing here? I'm sure it's just a stupid oversight on my part, so apologies in advance...
    Oh, here's the output on xfs_repair -n  on the old harddrive:
    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
    - scan filesystem freespace and inode maps...
    - found root inode chunk
    Phase 3 - for each AG...
    - scan (but don't clear) agi unlinked lists...
    - process known inodes and perform inode discovery...
    - agno = 0
    - agno = 1
    - agno = 2
    - agno = 3
    - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
    - setting up duplicate extent list...
    - check for inodes claiming duplicate blocks...
    - agno = 0
    - agno = 3
    - agno = 1
    - agno = 2
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
    - traversing filesystem ...
    - traversal finished ...
    - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

    Okay so I think I found the problem. I found this link when researching the error that parted threw at me: http://askubuntu.com/questions/48717/ho … tion-table
    And indeed, the new HDD has fewer sectors than the original, and the first partition ends beyond that.
    $ sudo fdisk -l
    Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x36948ce8
    Device Boot Start End Blocks Id System
    /dev/sdb1 2048 1953525167 976761560 83 Linux
    Disk /dev/sdc: 931.5 GiB, 1000194400256 bytes, 1953504688 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x36948ce8
    Device Boot Start End Blocks Id System
    /dev/sdc1 2048 1953525167 976761560 83 Linux
    Since the whole partition is encrypted, I don't want to risk simply resizing the partition. I think I'll just go ahead and make a new partition, encrypt it and copy the files manually. Unless anyone else has some input on this for me?

  • Creating Local partitioned index on Range-Partitioned table.

    Hi All,
    Database Version: Oracle 8i
    OS Platform: Solaris
    I need to create a local partitioned index on a column of a range-partitioned table with 8 million records. Is there any way to do this as fast as possible?
    I think we can use the NOLOGGING, PARALLEL, and UNRECOVERABLE options.
    But considering undo and redo, and mainly the time required to perform this activity, which is the best method?
    Please guide me on how to do this the fastest way, and also online!
    -Yasser

    YasserRACDBA wrote:
    3. CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) LOCAL
    NOLOGGING PARALLEL (DEGREE 14) online;
    4. Analyze the table with cascade option.
    Do you think this is the only method to perform operation in fastest way? As table contain 8 million records and its production database.
    Yasser,
    if all partitions should go to the same tablespace then you don't need to specify it for each partition.
    In addition you could use the "COMPUTE STATISTICS" clause then you don't need to analyze, if you want to do it only because of the added index.
    If you want to do it separately, then analyze only the index. Of course, if you want to analyze the table, too, your approach is fine.
    So this is how the statement could look:
    CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) TABLESPACE CS_BILLING LOCAL NOLOGGING PARALLEL (DEGREE 14) ONLINE COMPUTE STATISTICS;
    If this operation exceeds particular time window....can i kill the process?...What worst will happen if i kill this process?
    Killing an ONLINE operation is a bit of a mess... You're already quite on the edge (parallel, online, possibly compute statistics) with this statement. The ONLINE operation creates an IOT table to record the changes to the underlying table during the build operation. All of this needs to be cleaned up if the operation fails or the process dies/gets killed. This cleanup is supposed to be performed by the SMON process, if I remember correctly. I once ran into trouble in 8i after such an operation failed; I may even have gotten an ORA-00600 when I tried to access the table afterwards.
    It's not unlikely that your 8.1.7.2 will give you trouble with this kind of statement, so be prepared.
    How much time it may take? (Just to be on safer side)
    The time it takes to scan the whole table (if the information can't be read from another index), plus the sorting operation, plus writing the segment, plus any wait time due to concurrent DML / locks, plus the time to process the table that holds the changes that were done to the table while building the index.
    You can try to run an EXPLAIN PLAN on your create index statement which will give you a cost indication if you're using the cost based optimizer.
    Please suggest me if any other way exists to perform in fastest way.
    Since you will need to sort 8 million rows, if you have sufficient memory you could bump up the SORT_AREA_SIZE for your session temporarily to sort as much as possible in RAM.
    -- Use e.g. 100000000 to allow a 100M SORT_AREA_SIZE
    ALTER SESSION SET SORT_AREA_SIZE = <something_large>;
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Partitioned table

    I have altered a table and made it into a list-partitioned table. The partitioning is on a column col1 which is used in the SELECT clause while retrieving the data. Do I need to make any changes to the code? I am using 10.2.
    If no changes are needed, then please let me know if there would be any difference in the time taken by
    SELECT * FROM partitioned_table PARTITION (p1);
    and
    SELECT * FROM partitioned_table where col1='list value corresponding to p1';

    As "SomeoneElse" indicated no code changes are ever required for SELECT or DML.
    Should you see performance improvements? Yes. If you don't then your partitioning method is not the correct one for the statements you are writing.
    What you want to see is partition pruning.
    http://www.morganslibrary.org/reference/partitions.html#pprun
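    To see whether pruning happens, check the Pstart/Pstop columns of the execution plan, e.g. (a sketch using the statement from your post):
    EXPLAIN PLAN FOR
    SELECT * FROM partitioned_table WHERE col1 = 'list value corresponding to p1';

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    A plan that shows a single partition number (or KEY) in Pstart/Pstop is being pruned; PARTITION LIST ALL means it is not.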

  • [SOLVED] RAID0 Array - Unknown Partition Table

    I recently reinstalled my Arch system with a RAID0 array, and I've noticed something different this time around during boot.
    When the mdadm hook is initialized it will stop the array, check it, then start it. Once it's started it will say,
    md0: unknown partition table
    And continue to the next array.
    I believe this is fine since the system doesn't stall or anything during bootup. The concern I have is when the system then tries to activate the RAID arrays. It will show,
    Activating RAID Arrays [FAIL]
    I've looked this up, http://bbs.archlinux.org/viewtopic.php?id=90415 and apparently it's nothing to be worried about. However, it didn't do that when I had my system installed not 20 minutes before (up to date).
    I've created the mdadm configuration file as such,
    rm /mnt/etc/mdadm.conf
    mdadm --examine --scan >> /mnt/etc/mdadm.conf
    Apparently, this message can be removed so it doesn't show fail on startup by commenting out the part in /etc/rc.sysinit where it assembles the RAID, lines 121-123. I'm still uneasy with it though, I'm just looking for some further insight on this subject.
    Why doesn't my md0 device have a partition table and what is the system doing that causes the assembly to fail?
    Edit - It might be important to note that I'm trying to make 3 partitions of the following size,
    /                      10GB
    /home         ~4TB
    /boot              50MB
    Thanks!

    Update: This behaviour is normal.
    The partition table for a RAID array is not specified anywhere because the partition tables are set up on the actual devices. In my case I have two partition tables set on /dev/sda and /dev/sdb, therefore /dev/md0 doesn't need any partition table.
    The code,
    #If necessary, find md devices and manually assemble RAID arrays
    if [ -f /etc/mdadm.conf -a "$(/bin/grep ^ARRAY /etc/mdadm.conf 2>/dev/null)" ];
    then
    status "Activating RAID arrays" /sbin/mdadm --assemble --scan
    fi
    will check mdadm.conf for any arrays that are set up and try to assemble them. Since the arrays are already assembled by the mdadm hook, the --assemble parameter will return an error. This is the reason for the [FAIL] message.
    Since this is understood and not an actual "error", it is safe to comment out these lines in rc.sysinit. In fact, these lines are useless and should be taken out completely.
    The reason why it didn't show up last time was because I didn't properly setup my mdadm.conf file on the previous install. This process made sure they were loaded, but if you follow the wiki and properly setup the mdadm.conf file then you don't need this line of code at all in your rc.sysinit.

  • Local index vs global index in partitioned tables

    Hi,
    I want to know the differences between a global and a local index.
    I'm working with partitioned tables of about 10 million rows and 40 partitions.
    I know that when your table is partitioned and your index is non-partitioned, it is possible that
    some database operations make your index unusable and you have to rebuild it; for example,
    when you truncate a partition your global index becomes unusable. Is there any other operation
    that makes the global index unusable?
    I think that the advantage of a global index is that it takes less space than a local one and is easier to rebuild,
    and the advantage of a local index is that it is more effective at resolving a query, isn't it?
    Any advice and help about local vs global indexes in partitioned tables will be greatly appreciated.
    Thanks in advance

    here is the documentation -> http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm#sthref2570
    In general, you should use global indexes for OLTP applications and local indexes for data warehousing or DSS applications. Also, whenever possible, you should try to use local indexes because they are easier to manage. When deciding what kind of partitioned index to use, you should consider the following guidelines in order:
    1. If the table partitioning column is a subset of the index keys, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 2.
    2. If the index is unique, use a global index. If this is the case, you are finished. If this is not the case, continue to guideline 3.
    3. If your priority is manageability, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 4.
    4. If the application is an OLTP one and users need quick response times, use a global index. If the application is a DSS one and users are more interested in throughput, use a local index.
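    As a quick illustration of the first two guidelines (a sketch with a hypothetical table SALES range-partitioned on SALE_DATE):
    -- guideline 1: the partitioning column is part of the index key, so a local index fits
    CREATE INDEX sales_date_cust_ix ON sales (sale_date, cust_id) LOCAL;
    -- guideline 2: a unique index that does not contain the partitioning key has to be global
    CREATE UNIQUE INDEX sales_id_ux ON sales (sale_id);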
    Kind regards,
    Tonguç

  • Insert statement does not insert all records from a partitioned table

    Hi
    I need to insert records into a table from a partitioned table. I set up a job, and to my surprise I found that the insert statement is not inserting all the records from the partitioned table.
    For example, when I run a select statement against the partitioned table
    it gives me 400 records, but the insert gives me only 100 records.
    Can anyone help with this matter?

    INSERT INTO TABLENAME (COLUMNS)
    (SELECT *
    FROM SCHEMA1.TABLENAME1
    JOIN SCHEMA2.TABLENAME2 a
    ON CONDITION
    JOIN SCHEMA2.TABLENAME2 b
    ON CONDITION AND CONDITION
    WHERE CONDITION
    AND CONDITION
    AND CONDITION
    AND CONDITION
    AND CONDITION
    GROUP BY COLUMNS
    HAVING SUM(COLUMN) > 0);

  • Performance issues with version enable partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I’ve been doing some performance tests with a large version-enabled partitioned table and it seems that the CBO is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
         Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE          1          249                    
    UPDATE     SIG.SIG_QUA_IMG_LT                                   
    NESTED LOOPS SEMI          1     266     249                    
    PARTITION RANGE ALL                                   1     9
    TABLE ACCESS FULL     SIG.SIG_QUA_IMG_LT     1     259     2               1     9
    VIEW     SYS.VW_NSO_1     1     7     247                    
    NESTED LOOPS          1     739     247                    
    NESTED LOOPS          1     677     247                    
    NESTED LOOPS          1     412     246                    
    NESTED LOOPS          1     114     244                    
    INDEX RANGE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62     2                    
    INDEX RANGE SCAN     SIG.QIM_PK     1     52     243                    
    TABLE ACCESS BY GLOBAL INDEX ROWID     SIG.SIG_QUA_IMG_LT     1     298     2               ROWID     ROW L
    INDEX RANGE SCAN     SIG.SIG_QUA_IMG_PKI$     1          1                    
    INDEX RANGE SCAN     WMSYS.WM$NEXTVER_TABLE_NV_INDX     1     265     1                    
    INDEX UNIQUE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62                         
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1                                        
    SET z1.nextver =                                        
    SYS.ltutil.subsversion                                        
    (z1.nextver,                                        
    SYS.ltutil.getcontainedverinrange (z1.nextver,                                        
    'SIG.SIG_QUA_IMG',                                        
    'NpCyPCX3dkOAHSuBMjGioQ==',                                        
    4574,                                        
    4575                                        
    4574                                        
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed so that the optimizer has the most current data about the table.
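    For example (a sketch using the owner and table name from the plan above):
    BEGIN
       DBMS_STATS.gather_table_stats (ownname => 'SIG',
                                      tabname => 'SIG_QUA_IMG_LT',
                                      cascade => TRUE);
    END;
    /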
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
