Re-org on Oracle 11.1.0.7: partitioned and subpartitioned indexes not validating

Hi,
As part of maintenance I did the following on Oracle 11.1.0.7 on Windows (noarchivelog mode):
     Re-organized the tables in the application data tablespace - done
     Re-organized the one large table in the SYSAUX tablespace (alter table sys.WRI$_OPTSTAT_HISTGRM_HISTORY move tablespace sysaux;
alter index sys.I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST rebuild nologging;
alter index sys.I_WRI$_OPTSTAT_H_ST rebuild nologging;) - done
     Re-built the indexes on the application index tablespace - done
     Re-built the partitioned indexes on the application index tablespace - done
     Re-built the indexes in SYSAUX, since they were invalid - done
     Ran the analyze job -
     Re-claimed the space at the database and OS level (see the datafile resize sketch below this list)
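For the space-reclaim step, the database-level part usually comes down to shrinking the datafiles once the segments have been moved; a minimal sketch (the file name and target size are placeholders, not values from this system):
alter database datafile 'D:\ORADATA\APPDATA01.DBF' resize 2000M;
The OS-level space is released once the files themselves are smaller.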
But I have an issue: some partitioned and subpartitioned indexes are not validating. When I run the rebuild statements generated by the queries below, I get:
ORA-14287: cannot REBUILD a partition of a Composite Range partitioned index

These are the scripts I am using:
set head off
set linesize 200
select 'alter index '||index_owner||'.'||index_name||' rebuild partition '||partition_name||' ;'
from dba_ind_partitions
where status <> 'VALID' and index_owner not in ('SYS','SYSTEM');

select 'alter index '||index_owner||'.'||index_name||' rebuild subpartition '||subpartition_name||' ;'
from dba_ind_subpartitions
where status <> 'VALID';
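A note on the error: for composite-partitioned indexes, DBA_IND_PARTITIONS reports the partition status as 'N/A' rather than 'USABLE'/'UNUSABLE', so the first script also generates ALTER INDEX ... REBUILD PARTITION statements for them, and running those is exactly what raises ORA-14287; such indexes can only be rebuilt one subpartition at a time. Also note that the healthy status in DBA_IND_PARTITIONS and DBA_IND_SUBPARTITIONS is 'USABLE', not 'VALID', so filtering on status = 'UNUSABLE' avoids regenerating rebuilds for segments that are already fine. A minimal PL/SQL sketch that rebuilds whatever is currently UNUSABLE at the subpartition level (an illustration, not a tested script; it skips SYS/SYSTEM):
DECLARE
  l_sql VARCHAR2(400);
BEGIN
  FOR r IN (SELECT index_owner, index_name, subpartition_name
              FROM dba_ind_subpartitions
             WHERE status = 'UNUSABLE'
               AND index_owner NOT IN ('SYS','SYSTEM'))
  LOOP
    -- rebuild each unusable subpartition individually
    l_sql := 'alter index "' || r.index_owner || '"."' || r.index_name ||
             '" rebuild subpartition "' || r.subpartition_name || '"';
    EXECUTE IMMEDIATE l_sql;
  END LOOP;
END;
/
Partitions of non-composite local indexes that are genuinely UNUSABLE can still be rebuilt with the output of the first script.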

Similar Messages

  • Can we add a new partition and sub partition to an existing table in one shot

    Can we add a new partition and sub partition to an existing table in one shot?

    nav wrote:
    can we add a new partition and sub partition to an existing table in one shot?
    Yes, you can, and below is the outline for a Range-List partition:
    ALTER TABLE <table_name>
       ADD PARTITION <partition_name> VALUES LESS THAN (<value>)
          STORAGE (INITIAL 20K NEXT 20K) TABLESPACE <TS name> NOLOGGING
          ( SUBPARTITION clause,
            SUBPARTITION clause,
            SUBPARTITION clause,
            SUBPARTITION clause );
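    For example, a concrete (and entirely hypothetical) version of that statement for a range-list table partitioned by date and subpartitioned by region might look like:
    ALTER TABLE sales
       ADD PARTITION p_2012_q3 VALUES LESS THAN (TO_DATE('01-OCT-2012','DD-MON-YYYY'))
          TABLESPACE sales_data NOLOGGING
          ( SUBPARTITION p_2012_q3_east  VALUES ('EAST'),
            SUBPARTITION p_2012_q3_west  VALUES ('WEST'),
            SUBPARTITION p_2012_q3_other VALUES (DEFAULT) );
    Both the new partition and its subpartitions are created in a single ALTER TABLE, which is what "in one shot" amounts to here.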

  • Thunderbolt partitioned and formatted hard drive not recognized on USB2/3

    I have a MBA (mid 2012) and recently purchased a Seagate Desktop Thunderbolt Adapter (STAE129), a Seagate Backup Plus 4TB USB 3.0 external drive (STCA4000100), and Apple Thunderbolt cable. The 4TB hard drive can be separated from the USB 3.0 adapter and the drive placed on the Thunderbolt adapter. I created two partitions (3TB and 1 TB) on the 4TB drive while it was mounted on the Thunderbolt adapter and formatted them Mac OS Extended (Journaled) using partition map scheme GPT. Everything works as expected, the drive hits r/w speeds in the 180 MB/s range (using Blackmagic Disk Speed Test), and no issues at all transferring data to the 3TB partition and using the 1TB partition for TimeMachine.
    However, if I disconnect the hard drive from the Thunderbolt adapter, place it on the USB 3.0 adapter, and connect it to my MBA, Mountain Lion says "The disk you inserted was not readable by this computer" and gives me the options to Initialize, Ignore, or Eject. Going into the Disk Utility app, the drive shows up without the partitions I created when the same drive was mounted on the Thunderbolt adapter (it just shows disk1s1). In fact, the drive label (Disk Description) is different too, and the Partition Map Scheme now shows MBR!
    To try work-arounds, I went ahead and repeated the partitioning and formatting steps with the drive attached via USB 3.0 and all works fine until I put the drive back on the Thunderbolt adapter where once again OSX reports that the disk is not readable. I've even tried a single partition with no luck. In short, the drive partitioned and formatted on Thunderbolt is unrecognized under USB and vice versa.
    Shouldn't the disk preparation and data be consistent across these different interfaces? I would think so. My biggest concern is that if I had a failure in the Thunderbolt setup (assuming the drive itself does not fail), then I can't access my data. This is not a very comfortable situation.
    I'm assuming I've overlooked a very basic detail. Appreciate any steer to solve this problem.
    Thanks,
    Rob

    Thanks for that, I recently ran into the same problem. I even chatted with Seagate tech support and they didn't know the answer.
    My situation is a bit different from yours. I bought the 3 TB Thunderbolt version of the drive directly and then purchased a USB 3 adaptor separately.  You see, I still own a 3-year-old MacBook Pro that doesn't have a Thunderbolt port. I was looking to upgrade to a MacBook Air later.  I thought that I might as well buy the TB version now.
    In any case, my old MacBook Pro won't see the drive when I use the USB 3 adaptor. I get the exact same error message as you did. I can't really test it on Thunderbolt since I don't have a Thunderbolt port.  I'm going to go install the Thunderbolt drivers now and hopefully it will recognize the drive afterwards.  I'll let you know what happens.
    BTW, I was thinking of reformatting the drive as NTFS so that I can use it on PCs. My Mac has the NTFS drivers loaded, so that's no problem. I hope this won't screw up the Thunderbolt connection later when I do get the MacBook Air?

  • USB flash drive partitions and viewing partitions in Windows 7 or 8 using C, C++, or C#

    I want to make 2 partitions on a USB flash drive. How can a USB drive be partitioned using C, C++, or C#? And how can these partitions be made visible, either in a tool or in the Windows OS?

    Hi,
    Your issue is outside the scope of the VS General Questions forum, which mainly covers usage of the Visual Studio IDE, such as the
    WPF & SL designer, Visual Studio Guidance Automation Toolkit, Developer Documentation and Help System,
    and the Visual Studio Editor.
    Maybe you can try to consult this forum:
    https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/home?forum=wdk
    Anyway, I am moving your question to the moderator forum ("Where is the forum for..?"). The owner of the forum will direct you to the right forum.
    Thanks,

  • Oracle SQL Developer 2.1 EA1 and 1.5.5 not working on MacOS X Snow Leopard

    Currently, Oracle SQL Developer 2.1 EA1 and 1.5.5 are not working on Mac OS X Snow Leopard. Is there a plan to support the current version of OS X?
    When I try to start the MacOS X-Download I always get the following errors using MacOS X 10.6.1:
    ./sqldeveloper.sh oracle.classloader.util.AnnotatedNoClassDefFoundError:
         Missing class: oracle.ide.Version
         Dependent class: oracle.ide.IdeCore
         Loader: main:11.0
         Code-Source: /Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/extensions/oracle.ide.jar
         Configuration: system property PCLMain.createExtensionManagerLoader()
    The missing class is not available from any code-source or loader in the system.
         at oracle.classloader.PolicyClassLoader.handleClassNotFound (PolicyClassLoader.java:2176) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/j2ee/home/lib/pcl.jar (from system property java.class.path), by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.classloader.PolicyClassLoader.internalLoadClass (PolicyClassLoader.java:1729) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/j2ee/home/lib/pcl.jar (from system property java.class.path), by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.classloader.PolicyClassLoader.loadClass (PolicyClassLoader.java:1685) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/j2ee/home/lib/pcl.jar (from system property java.class.path), by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.classloader.PolicyClassLoader.loadClass (PolicyClassLoader.java:1670) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/j2ee/home/lib/pcl.jar (from system property java.class.path), by sun.misc.Launcher$AppClassLoader@2039559412]
         at java.lang.ClassLoader.loadClassInternal (ClassLoader.java:399) [jre bootstrap, by jre.bootstrap:1.6.0_15]
         at oracle.ide.IdeCore.$init$ (IdeCore.java:169) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/extensions/oracle.ide.jar (from system property PCLMain.createExtensionManagerLoader()), by main:11.0]
         at oracle.ide.IdeCore.<init> (IdeCore.java:176) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/extensions/oracle.ide.jar (from system property PCLMain.createExtensionManagerLoader()), by main:11.0]
         at oracle.ideimpl.DefaultIdeCore.<init> (DefaultIdeCore.java:64) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ideimpl.jar (from system property PCLMain.createExtensionManagerLoader()), by main:11.0]
         at oracle.ideimpl.DefaultIdeCore.<init> (DefaultIdeCore.java:69) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ideimpl.jar (from system property PCLMain.createExtensionManagerLoader()), by main:11.0]
         at oracle.ideimpl.Main.start (Main.java:109) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ideimpl.jar (from system property PCLMain.createExtensionManagerLoader()), by main:11.0]
         at oracle.ideimpl.Main.main (Main.java:72) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ideimpl.jar (from system property PCLMain.createExtensionManagerLoader()), by main:11.0]
         at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native method) [unknown, by unknown]
         at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:39) [unknown, by unknown]
         at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:25) [unknown, by unknown]
         at java.lang.reflect.Method.invoke (Method.java:597) [unknown, by unknown]
         at oracle.ide.boot.PCLMain.callMain (PCLMain.java:66) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.PCLMain.main (PCLMain.java:58) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native method) [unknown, by unknown]
         at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:39) [unknown, by unknown]
         at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:25) [unknown, by unknown]
         at java.lang.reflect.Method.invoke (Method.java:597) [unknown, by unknown]
         at oracle.classloader.util.MainClass.invoke (MainClass.java:128) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/j2ee/home/lib/pcl.jar (from system property java.class.path), by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.IdeLauncher.bootClassLoadersAndMain (IdeLauncher.java:190) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.IdeLauncher.launchImpl (IdeLauncher.java:90) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.IdeLauncher.launch (IdeLauncher.java:66) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.IdeLauncher.main (IdeLauncher.java:55) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native method) [unknown, by unknown]
         at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:39) [unknown, by unknown]
         at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:25) [unknown, by unknown]
         at java.lang.reflect.Method.invoke (Method.java:597) [unknown, by unknown]
         at oracle.ide.boot.Launcher.invokeMain (Launcher.java:729) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.Launcher.launchImpl (Launcher.java:115) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.Launcher.launch (Launcher.java:68) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]
         at oracle.ide.boot.Launcher.main (Launcher.java:57) [Users/dus/Downloads/sqldeveloper-1.5.5.59.69-macosx/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/lib/ide-boot.jar, by sun.misc.Launcher$AppClassLoader@2039559412]

    You have started this annoying discussion.
    I don't have any problems with my OS, unlike you.
    I have already told you what to do; if you have no idea how to do that, just go and buy yourself another support subscription from your Mac support.
    "The missing class is not available from any code-source or loader in the system." - check that.
    What version of Java do you have on your fancy OS? Most of the time, upgrading Java to the latest version will fix these issues.
    Update Java from the Sun website, not from another Mac store.

  • Oracle Streams 'ORA-25215: user_data type and queue type do not match'

    I am trying replication between two databases (10.2.0.3) using Oracle Streams.
    I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
    The main steps are:
    1. Set up ARCHIVELOG mode.
    2. Set up the Streams administrator.
    3. Set initialization parameters.
    4. Create a database link.
    5. Set up source and destination queues.
    6. Set up supplemental logging at the source database.
    7. Configure the capture process at the source database.
    8. Configure the propagation process.
    9. Create the destination table.
    10. Grant object privileges.
    11. Set the instantiation system change number (SCN).
    12. Configure the apply process at the destination database.
    13. Start the capture and apply processes.
    For step 5, I have used the 'set_up_queue' procedure in the 'dbms_streams_adm' package. This procedure creates a queue table and an associated queue.
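    For reference, a minimal sketch of such a set_up_queue call (the queue names follow the CAPTURE_SFQ example below; the strmadmin owner is an assumption):
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.capture_sfqtab',   -- created as a SYS.ANYDATA queue table
        queue_name  => 'strmadmin.capture_sfq',
        queue_user  => 'strmadmin');                 -- hypothetical Streams administrator
    END;
    /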
    The problem is that, in the propagation process, I get this error:
    'ORA-25215: user_data type and queue type do not match'
    I have checked it, and the queue table and its associated queue are created as shown:
    sys.dbms_aqadm.create_queue_table (
    queue_table => 'CAPTURE_SFQTAB'
    , queue_payload_type => 'SYS.ANYDATA'
    , sort_list => ''
    , COMMENT => ''
    , multiple_consumers => TRUE
    , message_grouping => DBMS_AQADM.TRANSACTIONAL
    , storage_clause => 'TABLESPACE STREAMSTS LOGGING'
    , compatible => '8.1'
    , primary_instance => '0'
    , secondary_instance => '0');
    sys.dbms_aqadm.create_queue(
    queue_name => 'CAPTURE_SFQ'
    , queue_table => 'CAPTURE_SFQTAB'
    , queue_type => sys.dbms_aqadm.NORMAL_QUEUE
    , max_retries => '5'
    , retry_delay => '0'
    , retention_time => '0'
    , COMMENT => '');
    The capture process is 'capturing changes' but it seems that these changes cannot be enqueued into the capture queue because the data type is not correct.
    As far as I know, 'sys.anydata' payload type and 'normal_queue' type are the right parameters to get a successful configuration.
    I would be really grateful for any idea!

    Hi
    You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it.
    DECLARE
      rc BINARY_INTEGER;
    BEGIN
      DBMS_AQADM.VERIFY_QUEUE_TYPES(
        src_queue_name  => 'np_out_onlinex',
        dest_queue_name => 'np_out_onlinex',
        destination     => 'scnp.pfa.dk',
        rc              => rc,
        transformation  => 'TransformDim2JMS_001x');
      DBMS_OUTPUT.PUT_LINE('Compatible: '||rc);
    END;
    /
    If you don't have transformations and/or a remote destination, then delete those parameters.
    Check the table SYS.AQ$_MESSAGE_TYPES; there you can see what has been verified and what has not.
    regards
    Mette

  • Gathering statistics on partitioned and non-partitioned tables

    Hi all,
    My DB is 11.1
    I find that gathering statistics on partitioned tables is really slow.
    TABLE_NAME       NUM_ROWS   BLOCKS  SAMPLE_SIZE  LAST_ANALYZED  PARTITIONED  COMPRESSION
    O_FCT_BP1       112123170   843140     11212317  8/30/2011 3:5  NO           DISABLED
    LEON_123456     112096060   521984     11209606  8/30/2011 4:2  NO           ENABLED
    O_FCT           115170000   486556       115170  8/29/2011 6:3  YES
    SQL> SELECT COUNT(*) FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT'
      3  ;
      COUNT(*)
           112
    I used the following script:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent => 10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /
    It takes 2 minutes to gather the statistics for each of the first two tables, but more than 10 minutes for the partitioned table.
    The time spent collecting statistics accounts for a large part of the total batch time.
    Most jobs in the batch are full loads, in which case all partitions and subpartitions are affected, so we can't just gather specific partitions.
    Does anyone have experience with this? Thank you very much.
    Best regards,
    Leon
    Edited by: user12064076 on Aug 30, 2011 1:45 AM

    Hi Leon
    Why don't you gather stats at the partition level? If a partition's data is not going to change after a day (a date-range partition, for example), you can simply gather at the partition level:
    GRANULARITY => 'PARTITION' for partition level and
    GRANULARITY => 'SUBPARTITION' for subpartition level.
    You are gathering global stats every time, which you may not require (see the sketch below).
    Edited by: user12035575 on 30-Aug-2011 01:50
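    A minimal sketch of such a partition-level gather (O_FCT is the table from this thread; the partition name and the other parameters are assumptions):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    partname         => 'P_20110830',   -- hypothetical partition name
                                    granularity      => 'PARTITION',
                                    estimate_percent => 10,
                                    cascade          => false);
    END;
    /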

  • Sharing the /home partition and general partition questions

    Hello, I'm new to Arch, but have been using Linux for a few years (albeit still at a beginner level).  I'm going to be reinstalling Arch on an old computer that has a 40GB main drive, so I can dual-boot an "operational" OS for day-to-day stuff that I want to keep running well, plus another OS that I can test on or just use for trying new distros.  I also have an 80GB drive that I'll use for data (but I don't think I want that to be my home drive).
    My question is:  If I have two different installations of Arch, (or a second distribution) should they share the same /home partition?  My thought is "no", but I didn't know.
    Also, I'm planning on splitting the 40GB drive into the following partitions.  Do these make sense, or would there be a better way to do this?
    5GB = / (OS #1)
    14.5GB = /home (OS #1)
    5GB = / (OS #2)
    14.5GB = /home (OS #2)
    1 GB = swap (both OSes)
    I have an ancient P4 with 512 MB of RAM.

    Sharing /home would NOT be a good option in your case, simply because you are going to use the 2nd OS for tests and trials. Those other OSes may have different ways of storing config files etc., which may leave you with a lot of junk to parse through. And if you ever use any configs from the test OS and they conflict with Arch in any way, you might end up having to re-configure settings for your favorite apps in Arch.
    I have a 30 GB HDD on a 10 yr old laptop which has Arch. This is the partition scheme I have
    ╔═[16:10]═[inxs @ arch]
    ╚═══===═══[~]>> df
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sda3 ext3 7.0G 1.7G 5.0G 25% /
    none tmpfs 125M 100K 125M 1% /dev
    none tmpfs 125M 0 125M 0% /dev/shm
    /dev/sda4 ext4 16G 850M 14G 6% /home
    /dev/sda6 reiserfs 5.1G 558M 4.5G 11% /var
    /dev/sda1 ext2 61M 12M 47M 20% /boot
    ╔═[21:16]═[inxs @ arch]
    ╚═══===═══[~]>> fdisk
    Disk /dev/sda: 30.0 GB, 30005821440 bytes
    255 heads, 63 sectors/track, 3648 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000080
    Device Boot Start End Blocks Id System
    /dev/sda1 1 8 64228+ 83 Linux
    /dev/sda2 9 726 5767335 5 Extended
    /dev/sda3 727 1640 7341705 83 Linux
    /dev/sda4 1641 3648 16129260 83 Linux
    /dev/sda5 9 73 522081 82 Linux swap / Solaris
    /dev/sda6 74 726 5245191 83 Linux
    ╔═[21:18]═[inxs @ arch]
    ╚═══===═══[~]>>
    Since you have 10GB more than I do, you can adjust accordingly and make partitions for your test OSes as well.
    Last edited by Inxsible (2009-10-08 01:22:30)

  • Create data partition and ubuntu partition

    I would like to create 2 more partitions on a ThinkPad X230t tablet: one partition for all my files and another partition to install Ubuntu.
    According to Disk Management the computer already has the following partitions:
    SYSTEM_DRV               Simple, Basic, NTFS, Healthy (System, Active, Primary Partition)
    Windows7_OS (C:)         Simple, Basic, NTFS, Healthy (Boot, Page File, Crash Dump, Primary Partition)
    Lenovo_Recovery (Q:)     Simple, Basic, NTFS, Healthy (Primary Partition)
    I have read that there is a limit to the number of primary partitions allowed and that partition type and placement matter.
    Any suggestions on how to proceed are welcome.
    Thanks.

     - As I read it, I guess GRUB makes the recovery button useless? I don't want to lose my licensed Vista, so what can you suggest I do?
     I think it will work. At least if it doesn't, you may reinstall Rescue and Recovery.
     - After shrinking the C: drive, it complains about not being able to find winload.exe and does not boot.
     You have to recover the loader. I used a bootable Windows Vista DVD for this purpose.

  • Disk Utility crashed during partition and DOUBLED partition use - now it's full?

    This one has me stumped.
    1TB Disk - NON-start-up - had about 450GB of data files on it.
    Split it with Disk Utility's partition function so I could reclaim free space.
    Disk Utility crashed during the partition.
    Relaunched Disk Utility.
    Now shows 900GB of the disk are used.
    Tried to repair disk with Disk Utility, "Can't Unmount Disk"
    Have rebooted now and am trying to find out WHAT that other 450GB of space taken up actually is. (using WhatSize).
    Anyone have any clues what the heck is going on? I now no longer have any space to duplicate the partition files to (though that would be on another drive entirely). I can still access the files, but I'm not sure what that other space hog could be....?
    Thanks anyone.

    Techtool Pro was able to determine the directory structure was fubarred - repairing it corrected the structure to report the size correctly.

  • Are list partitioning and hash partitioning one and the same?

    I am creating a partitioned table with the following commands:
    CREATE TABLE ABD (ENO NUMBER(5),CID NUMBER(3),ENAME VARCHAR2(10))
    PARTITION BY LIST (ENO)
    (PARTITION P1 VALUES (123),
    PARTITION P2 VALUES (143),
    PARTITION CLIENT_ID VALUES (746));
    ALTER TABLE ABD
    ADD PARTITION CLIENT_756 VALUES (756);
    But when I extract the table script, it shows this:
    CREATE TABLE ABD (
    ENO NUMBER (5),
    ENAME VARCHAR2 (10),
    CID NUMBER (3) )
    PARTITION BY HASH (ENO)
    PARTITIONS 4
    STORE IN ( USERS,USERS,USERS,
    USERS);
    Actually I am creating a list partition, but the script shows a hash partition. Why is that?
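    One way to double-check how the table is really partitioned is to query the data dictionary directly (a sketch, using the ABD table from this thread):
    SELECT partitioning_type, subpartitioning_type, partition_count
    FROM   user_part_tables
    WHERE  table_name = 'ABD';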

    >> when i describe the table script it is showing like this
    How do you describe it, and which version are you on?
    TEST@db102 SQL> CREATE TABLE ABD (ENO NUMBER(5),CID NUMBER(3),ENAME VARCHAR2(10))
      2  PARTITION BY LIST (ENO)
      3  (PARTITION P1 VALUES (123),
      4  PARTITION P2 VALUES (143),
      5* PARTITION CLIENT_ID VALUES (746))
    TEST@db102 SQL> /
    Table created.
    TEST@db102 SQL> ALTER TABLE ABD
      2* ADD PARTITION CLIENT_756 VALUES (756)
    TEST@db102 SQL> /
    Table altered.
    TEST@db102 SQL> select dbms_metadata.get_ddl('TABLE','ABD','TEST') from dual;
    DBMS_METADATA.GET_DDL('TABLE','ABD','TEST')
      CREATE TABLE "TEST"."ABD"
       (    "ENO" NUMBER(5,0),
            "CID" NUMBER(3,0),
            "ENAME" VARCHAR2(10)
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(
      BUFFER_POOL DEFAULT)
      TABLESPACE "USERS"
      PARTITION BY LIST ("ENO")
    (PARTITION "P1"  VALUES (123)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS ,
    PARTITION "P2"  VALUES (143)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS ,
    PARTITION "CLIENT_ID"  VALUES (746)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS ,
    PARTITION "CLIENT_756"  VALUES (756)
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS" NOCOMPRESS )
    TEST@db102 SQL>                                                                               

  • Setting up bootable mac partition and windows partition

    I have a 250GB portable hard disk (USB 2.0 and FireWire connections available). I've allocated 150GB for a bootable OS X partition over FireWire, which works very well.
    How can I use the remaining 100GB for backing up my Windows PC data?

    If I partition the portable hard disk as FAT32, then I cannot install OS X onto it.
    MacDrive is one possibility. The other solution is to get another portable hard disk for backing up my PC data. I was hoping to use one portable hard disk instead of two.

  • Sub-partitioning does not have a considerable impact on the explain plan

    Hi Guys,
    I have a table that is list - list sub partitioned on Oracle 11g - 11.2.0.3.
    The first partition is on the CIRCLE_ID column and the second sub partition is on the LOAD_DTTM.
    Now i have tried 2 queries on the database
    1 - select MOBILENUMBER, CLOSING_BAL from GSM_PRR_SUM a1
    where A1.CIRCLE_ID ='AK'
    AND a1.LOAD_DTTM BETWEEN '28-MAR-2012' AND '03-APR-2012';
    2 - select MOBILENUMBER, CLOSING_BAL from GSM_PRR_SUM a1
    where A1.CIRCLE_ID ='AK'
    AND to_char(a1.LOAD_DTTM) like '%MAR%'
    Ideally the 2nd query should take a much higher time than the first query.
    But the explain plan shows a difference of less than 1%.
    Can you pls provide some insights as why Oracle is not understanding that subpartitioning will be faster?
    Thanks,
    Manish
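    For comparison, a pruning-friendly form of the second query's filter would leave LOAD_DTTM untransformed, e.g. (a sketch; the exact date range is only illustrative):
    select MOBILENUMBER, CLOSING_BAL from GSM_PRR_SUM a1
    where a1.CIRCLE_ID = 'AK'
    and a1.LOAD_DTTM >= DATE '2012-03-01'
    and a1.LOAD_DTTM <  DATE '2012-04-01';
    With to_char() around the subpartition key (as in query 2), the optimizer cannot prune, which is why the second plan below shows PARTITION LIST ALL.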

    Hi Dom
    Thanks for your reply.
    Is the first query using partition pruning? - Yes.
    Have you gathered stats, etc.? - No. All our queries always need to access the entire table rather than a particular subset, and the WHERE criteria always use the partition and subpartition columns, so we don't see the need to collect stats. Can you please advise which stats need to be collected and what the advantage would be?
    Below is the output of the explain plan and the trace:
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> show parameter optimizer;
    NAME TYPE VALUE
    optimizer_capture_sql_plan_baselines boolean FALSE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 11.2.0.3
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    optimizer_use_invisible_indexes boolean FALSE
    optimizer_use_pending_statistics boolean FALSE
    optimizer_use_sql_plan_baselines boolean TRUE
    SQL> show parameter db_file_multi;
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 128
    SQL> show parameter cursor_sharing
    NAME TYPE VALUE
    cursor_sharing string EXACT
    SQL> column sname format a20
    SQL> column pname foramt 20
    SP2-0158: unknown COLUMN option "foramt"
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL> select
    2 sname
    3 , pname
    4 , pval1
    5 ,pval2
    6 from sys.aux_stats$;
    from sys.aux_stats$
    ERROR at line 6:
    ORA-00942: table or view does not exist
    SQL> explain plan for
    select MOBILENUMBER, CLOSING_BAL from GSM_PRR_SUM_NEW a1
    where A1.CIRCLE_ID ='KA'
    AND a1.LOAD_DTTM BETWEEN '28-MAR-2012' AND '03-APR-2012'
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3032220315
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)
    | Time | Pstart| Pstop |
    PLAN_TABLE_OUTPUT
    | 0 | SELECT STATEMENT | | 41M| 1643M| 548M(100)
    |999:59:59 | | |
    | 1 | PARTITION LIST SINGLE | | 41M| 1643M| 548M(100)
    |999:59:59 | KEY | KEY |
    | 2 | PARTITION LIST ITERATOR| | 41M| 1643M| 548M(100)
    |999:59:59 | KEY | KEY |
    | 3 | TABLE ACCESS FULL | GSM_PRR_SUM_NEW | 41M| 1643M| 548M(100)
    |999:59:59 | KEY | KEY |
    PLAN_TABLE_OUTPUT
    Note
    - dynamic sampling used for this statement (level=2)
    14 rows selected.
    SQL> explain plan for
    select MOBILENUMBER, CLOSING_BAL from GSM_PRR_SUM_NEW a1
    where A1.CIRCLE_ID ='KA'
    AND to_char(a1.LOAD_DTTM) like '%MAR%';
    2 3 4
    Explained.
    SQL> SQL> SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    PLAN_TABLE_OUTPUT
    Plan hash value: 189546713
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| T
    ime | Pstart| Pstop |
    PLAN_TABLE_OUTPUT
    | 0 | SELECT STATEMENT | | 62M| 2521M| 5435M(100)|99
    9:59:59 | | |
    | 1 | PARTITION LIST SINGLE| | 62M| 2521M| 5435M(100)|99
    9:59:59 | KEY | KEY |
    | 2 | PARTITION LIST ALL | | 62M| 2521M| 5435M(100)|99
    9:59:59 | 1 | 305 |
    |* 3 | TABLE ACCESS FULL | GSM_PRR_SUM_NEW | 62M| 2521M| 5435M(100)|99
    9:59:59 | KEY | KEY |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    3 - filter(TO_CHAR(INTERNAL_FUNCTION("A1"."LOAD_DTTM")) LIKE '%MAR%')
    Note
    PLAN_TABLE_OUTPUT
    - dynamic sampling used for this statement (level=2)
    19 rows selected.
    SQL>
    SQL> SET AUTOTRACE TRACEONLY ARRAYSIZE 100
    SQL> select MOBILENUMBER, CLOSING_BAL from GSM_PRR_SUM_NEW a1
    where A1.CIRCLE_ID ='KA'
    AND a1.LOAD_DTTM BETWEEN '28-MAR-2012' AND '03-APR-2012' 2 3 ;
    49637012 rows selected.
    Execution Plan
    Plan hash value: 3032220315
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)
    | Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 41M| 1643M| 546M(100)
    |999:59:59 | | |
    | 1 | PARTITION LIST SINGLE | | 41M| 1643M| 546M(100)
    |999:59:59 | KEY | KEY |
    | 2 | PARTITION LIST ITERATOR| | 41M| 1643M| 546M(100)
    |999:59:59 | KEY | KEY |
    | 3 | TABLE ACCESS FULL | GSM_PRR_SUM_NEW | 41M| 1643M| 546M(100)
    |999:59:59 | KEY | KEY |
    Note
    - dynamic sampling used for this statement (level=2)
    Statistics
    0 recursive calls
    7 db block gets
    530220 consistent gets
    33636 physical reads
    0 redo size
    644311477 bytes sent via SQL*Net to client
    5460594 bytes received via SQL*Net from client
    496372 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    49637012 rows processed
    SQL> SET AUTOTRACE TRACEONLY ARRAYSIZE 100
    SQL> select MOBILENUMBER, CLOSING_BAL from GSM_PRR_SUM_NEW a1
    where A1.CIRCLE_ID ='KA'
    AND to_char(a1.LOAD_DTTM) like '%MAR%' 2 3
    4 ;
    219166976 rows selected.
    Execution Plan
    Plan hash value: 189546713
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| T
    ime | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 228M| 9155M| 3552M(100)|99
    9:59:59 | | |
    | 1 | PARTITION LIST SINGLE| | 228M| 9155M| 3552M(100)|99
    9:59:59 | KEY | KEY |
    | 2 | PARTITION LIST ALL | | 228M| 9155M| 3552M(100)|99
    9:59:59 | 1 | 274 |
    |* 3 | TABLE ACCESS FULL | GSM_PRR_SUM_NEW | 228M| 9155M| 3552M(100)|99
    9:59:59 | KEY | KEY |
    Predicate Information (identified by operation id):
    3 - filter(TO_CHAR(INTERNAL_FUNCTION("A1"."LOAD_DTTM")) LIKE '%MAR%')
    Note
    - dynamic sampling used for this statement (level=2)
    Statistics
    38 recursive calls
    107 db block gets
    2667792 consistent gets
    561765 physical reads
    0 redo size
    2841422984 bytes sent via SQL*Net to client
    24108883 bytes received via SQL*Net from client
    2191671 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    219166976 rows processed
    SQL>
    Thanks,
    Manish

  • Corrupted Sub partitions

    Oracle v8.1.6.3
    The table has 36 major partitions with 32 subpartitions in each. When we attempt to analyze a specific partition (P2003_05) in this table, we receive an ORA-3113 error. We know that this is a bogus error because we can analyze all the other partitions and sub-partitions without incident. This is the second time we have had a partition that needed to be exported, truncated and imported before it could be analyzed. I have absolutely no idea what causes this. Any ideas?

    This forum is for posting feedback about the OTN site.
    The best place for your question is probably a Database forum, perhaps the General Database Discussions forum.

  • VLDB Partitioning and Indexing

    Hi All,
    Here are some details about our DB.
    It is a TELCO OLAP DB.
    We are trying to run 20 batches in a month (we call it a period).
    In our biggest table, we will be loading more than 300 million rows in each individual batch.
    Thus in one month this table will have (300 million x 20) rows.
    We are using DataStage as the ETL tool. The ETL runs on 12 dual-core CPUs.
    That's why we have 12 subpartitions under each main partition, to make the load purely parallel, i.e. each node loads into an individual subpartition.
    We are planning to partition all the fact and summary tables by
    Period Key and subpartition by Account_Key (hash partitioning, 12 subpartitions).
    Our concerns are:
    1.     The impact of partitioning and subpartitioning on database maintenance.
    2.     The impact on reporting, i.e. how efficient the queries will be if they are spread over multiple partitions and subpartitions.
    3.     Re-indexing.
    SQL*Loader loads the data and then the indexes have to be brought up to date. The first few loads will be quicker while the partitions are empty, but how will it behave at the end of the month?
    There will be around 20 loads in one period (one period = one month).
    Our biggest fact table is expected to have at least 300 million rows in each load
    and 16 indexes.
    Can we get some idea of how long it may take to rebuild the indexes on this table?
    Are there any better ways to do this so that it is efficient for both load and query?
    Thanks and regards,
    Munish
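    A minimal sketch of the composite layout described above (range-partitioned by period key, hash-subpartitioned by account key into 12 subpartitions, with a local index; all names and datatypes are hypothetical):
    CREATE TABLE fact_usage (
      period_key   NUMBER,
      account_key  NUMBER,
      usage_amount NUMBER
    )
    PARTITION BY RANGE (period_key)
    SUBPARTITION BY HASH (account_key) SUBPARTITIONS 12
    (PARTITION p_200701 VALUES LESS THAN (200702),
     PARTITION p_200702 VALUES LESS THAN (200703));

    CREATE INDEX fact_usage_acct_ix ON fact_usage (account_key) LOCAL;
    Keeping the 16 indexes LOCAL (equipartitioned with the table) confines each load's index maintenance to the partition and subpartitions being loaded, which usually matters more for load time than raw rebuild speed.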

    Yes, sort of... hooked to a Linksys that's hooked to an AEBS, which serves both AirPort and AirPort Extreme as well as 3rd-party B & G wireless.
    Unfortunately I consider it a real drawback that only one computer can have write privileges at a time; all the others can only read. It can be switched, but I ended up using the USB connection and just sharing it as an external drive. Ethernet 10/100 only had 10 MB/sec writes, while USB 2 had 20 MB/sec writes. Shared FireWire drives get as little as 26 MB/sec... Might be that I have 802.11b devices sharing the AEBS, dunno.
    Also, it requires drivers, and though they've been good from 10.2.x through 10.4.8, you just never know when something like 10.5.x or later might not work!
    Personally, I'd only buy a Gigabit Ethernet NAS/NDAS drive... if I bought one at all, but truthfully, FireWire drives and/or eSATA drives shared over the network, together with Tri-Backup, is what works best for me.
