Schema-level export with particular table partitions

Hi All,
I need to export all objects, i.e. the schema-level option, but I need to exclude particular partitions of a table.
That is, I need to EXCLUDE particular partition data from the schema-level backup.
Kindly suggest how to achieve the above.
Thanks & Regards
Sami
Edited by: Sami on Jul 6, 2012 4:41 PM

Hi All,
I have used the following option to export the schema level with partitioned tables:
{code}
expdp YYYYY/********@devchn schemas=YYYYY \
  EXCLUDE=TABLE:"IN('ACCOUNT_STATEMENT_HISTORY','CUSTOMER_IMAGE','XAPI_ACTIVITY_HISTORY','GL_ACCOUNT_SUMMARY$AUD','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART1','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART2','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART3','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART4','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART5','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART6','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART7','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST1','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST2','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST3','TXN_JOURNAL:TRX_JOURN_PART1','TXN_JOURNAL:TRX_JOURN_PART2')" \
  directory=DUMPDIR1 dumpfile=MSB_06-July-2012.dmp logfile=exp_MSB_06-July-2012.log
{code}
but it does not work.
Log file
. . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART3"       1.801 GB 6650371 rows
. . exported "YYYYY."."EVENT_JOURNAL"                       1.533 GB 15930785 rows
. . exported "YYYYY."."LOAN_ACCOUNT$AUD"                    1.287 GB 6339368 rows
. . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART6"  1.212 GB 5860272 rows
. . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART5"  1.102 GB 5363721 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST3"  1.055 GB 5530280 rows
. . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART4"  929.4 MB 4513889 rows
. . exported "YYYYY."."DP_ACCT_INTEREST_HISTORY"            925.0 MB 7002553 rows
. . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART2"  909.8 MB 4441940 rows
. . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART3"  768.3 MB 3786671 rows
. . exported "YYYYY."."ACCOUNT$AUD"                         709.8 MB 4348526 rows
. . exported "YYYYY."."EVENT_CHARGE_JOURNAL"                663.9 MB 5303756 rows
. . exported "YYYYY."."DP_ACCT_CYCLE_STAT_HIST"             655.4 MB 4389715 rows
. . exported "YYYYY."."DP_ACCT_PERIOD_CYCLE_STAT_HIST"      569.1 MB 3733176 rows
. . exported "YYYYY."."DP_ACCT_CHARGE_CYCLE_STAT_HIST"      535.4 MB 3712447 rows
. . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART7"  473.3 MB 2238240 rows
. . exported "YYYYY."."WF_WORK_ITEM_HISTORY"                474.4 MB 1887956 rows
. . exported "YYYYY."."OFFLINE_OUTBOUND_TXN_LOG"            478.4 MB   32803 rows
. . exported "YYYYY."."OPERATIONAL_SERVICE_ERROR_LOG"       291.8 MB   55333 rows
. . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART1"       350.3 MB 1352267 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT_STAT"                313.6 MB  414942 rows
. . exported "YYYYY."."CORRESPONDENCE_QUEUE_BK"             295.6 MB 1816383 rows
. . exported "YYYYY."."EXT_TXN_JOURNAL"                     255.6 MB 1234290 rows
. . exported "YYYYY."."PERSONAL_CUSTOMER$AUD"               244.7 MB 1018705 rows
. . exported "YYYYY."."GL_ACCOUNT_STAT"                     228.3 MB  873915 rows
. . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART2"       228.8 MB  855180 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST1"  210.5 MB 1119932 rows
. . exported "YYYYY."."USER_ROLE_ALERT"                     180.9 MB 3059052 rows
. . exported "YYYYY."."CUSTOMER$AUD"                        172.7 MB 1005897 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST2"  162.3 MB  837967 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT_SUMMARY"             160.7 MB  414942 rows
. . exported "YYYYY."."CUSTOMER_IMAGE_HISTORY"              142.0 MB   10085 rows
. . exported "YYYYY."."SYSUSER$AUD"                         137.9 MB  986069 rows
. . exported "YYYYY."."TXN_BATCH_ITEM$AUD"                  143.9 MB  893961 rows
. . exported "YYYYY."."ALERT"                               130.8 MB  652573 rows
. . exported "YYYYY."."EXT_DP_ACCOUNT_SUMMARY"              132.3 MB  336115 rows
. . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART1"  132.7 MB  681899 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT_INTEREST"            113.0 MB  835126 rows
. . exported "YYYYY."."EXT_DP_ACCOUNT_INTEREST"             112.3 MB  625117 rows
. . exported "YYYYY."."GL_ACCOUNT_SUMMARY"                  103.5 MB  873885 rows
. . exported "YYYYY."."DP_ACCT_INTEREST_TIER_HISTORY"       102.4 MB 1413589 rows
. . exported "YYYYY."."GL_ACCOUNT_MONTHLY_STAT"             98.52 MB  852631 rows
. . exported "YYYYY."."GL_ACCOUNT_QUARTERLY_STAT"           98.45 MB  852630 rows
. . exported "YYYYY."."GL_ACCOUNT_YEARLY_STAT"              98.47 MB  852630 rows
. . exported "YYYYY."."LN_ACCT_REPMNT_EVENT"                91.53 MB  902496 rows
. . exported "YYYYY."."WF_WORK_ITEM_CHKLST_RESP"            83.03 MB  538855 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT$AUD"                 79.53 MB  450801 rows
. . exported "YYYYY."."ACCOUNT_CHEQUE_INVENTORY"            73.38 MB  910879 rows
. . exported "YYYYY."."EXT_DP_ACCOUNT_INTEREST_TIER"        71.44 MB  670870 rows
. . exported "YYYYY."."PENDING_TXN_JOURNAL"                 44.14 MB    9578 rows
. . exported "YYYYY."."CUSTOMER"                            67.22 MB  405785 rows
. . exported "YYYYY."."TXN_BATCH_ITEM"                      66.98 MB  458442 rows
. . exported "YYYYY."."ACCOUNT"                             58.82 MB  443064 rows
. . exported "YYYYY."."ACCOUNT_CYCLIC_CHARGE"               56.00 MB  405963 rows
. . exported "YYYYY."."EXT_CUSTOMER"                        57.16 MB  321364 rows
. . exported "YYYYY."."EXT_LN_ACCT_REPMNT_EVENT"            61.50 MB  437790 rows
. . exported "YYYYY."."ORGANISATION_CUSTOMER$AUD"           49.84 MB  241014 rows
. . exported "YYYYY."."OFFLINE_ASYNCH_QUEUE"                52.14 MB    3562 rows
. . exported "YYYYY."."OPERATIONAL_SVCE_MAN_RUN_HIST"       47.91 MB  766864 rows
. . exported "YYYYY."."DEPOSIT_ACCOUNT_INTEREST_TIER"       46.44 MB  670938 rows
. . exported "YYYYY."."LN_ACCT_PERIOD_CYCLE_STAT_HIST"      42.09 MB  260703 rows
. . exported "YYYYY."."ADDRESS"                             41.39 MB  411753 rows
. . exported "YYYYY."."EXT_ACCOUNT_RELATIONSHIP"            41.98 MB  305353 rowsEXCLUDE option is not working for partition tables.. But its working fine for other tables
Examples:
{code}
EXCLUDE=TABLE:"IN('ACCOUNT_STATEMENT_HISTORY','CUSTOMER_IMAGE','XAPI_ACTIVITY_HISTORY','GL_ACCOUNT_SUMMARY$AUD')"
{code}
The above part is working fine.
But:
{code}
EXCLUDE=TABLE:"IN('GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART5','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART6')"
{code}
The above exclude option is not working; the data from those partitions is still exported into the dump.
Thanks & Regards
Sami.
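
For what it's worth, Data Pump's EXCLUDE=TABLE name filter matches table names only, not table:partition pairs; partition granularity is documented only for table mode (TABLES=table:partition). A possible workaround - sketched below with assumed dump-file names and an illustrative partition list - is to exclude the whole partitioned tables from the schema-level job and pick up just the wanted partitions in a second, table-mode export:
{code}
# Pass 1: schema-level export, skipping the partitioned tables entirely
expdp YYYYY/********@devchn schemas=YYYYY directory=DUMPDIR1 \
  EXCLUDE=TABLE:\"IN('GL_ACCOUNT_HISTORY','DEPOSIT_ACCOUNT_HISTORY','TXN_JOURNAL')\" \
  dumpfile=MSB_rest.dmp logfile=exp_MSB_rest.log

# Pass 2: table-mode export of only the partitions to keep
expdp YYYYY/********@devchn directory=DUMPDIR1 \
  tables=YYYYY.TXN_JOURNAL:TRX_JOURN_PART3 \
  dumpfile=MSB_parts.dmp logfile=exp_MSB_parts.log
{code}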

Similar Messages

  • Issue with updating partitioned table

    Hi,
    Anyone seen this bug with updating partitioned tables.
    It's very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), when the join has multiple joins, we're updating the partitioned column that isn't the first column in the primary key, and the table contains a bit field. We've tried changing just one of these features and the bug disappears.
    We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
    Here's the test case - it does the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately when it happens on a replicated table - it takes down rep server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
            changed         bit
        )
    go
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
      )
    go
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    go
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
    go
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    go
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    wrt replication - it is a bit unclear, as not enough information has been given to determine what happened.  I am also not sure that your DBAs are accurately telling you what happened - they may have made the problem worse by not knowing themselves what to do; e.g. 'losing' the log points to the fact that someone doesn't know what they should do.   You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
    wrt ASE versions, I suspect that if there are any differences, they may have to do with endian-ness and not the version of ASE itself.   There may be other factors... but I would suggest the best thing would be to open a separate message/case on it.
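    For reference, a minimal sketch of the secondary-truncation-point step mentioned above (database name is an example):
    {code}
    -- disable the Rep Server secondary truncation point in the affected database
    use mydb
    go
    dbcc settrunc('ltm', 'ignore')
    go
    -- ... resynchronize the standby, then re-enable the truncation point ...
    dbcc settrunc('ltm', 'valid')
    go
    {code}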
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • How do I change the partition scheme to use GUID partition Table.

    How do I change the partition scheme to use GUID partition Table so I can get Snow Leopard to download on my 10.5 disk.
    When I insert the disk it asks me to select the disk where you want to install Mac OS X. It only gives me one option, the 10.5.
    When I click on it, it says...
    "10.5" can't be used because it doesn't use the GUID Partition Table scheme.
    Use Disk Utility to change the partition scheme.  Select the disk, choose the Partition tab, select the Volume Scheme and then click Options. 
    I tried to do what it says and I cannot find the options it describes.  This is the info about my MacBook:
    Model Name: MacBook
      Model Identifier: MacBook4,1
      Processor Name: Intel Core 2 Duo
      Processor Speed: 2.4 GHz
      Number Of Processors: 1
      Total Number Of Cores: 2
      L2 Cache: 3 MB
      Memory: 4 GB
      Bus Speed: 800 MHz
      Boot ROM Version: MB41.00C1.B00
      SMC Version (system): 1.31f0
    Thank you for your help!

    The GUID partition option is one of three possible choices (click the "Options" button in the Partition tab) - be careful to have a full backup, as changing the partition scheme will force an erasure of the disk. Take a look at this Apple support article for more complete information:
    Firmware updates for Intel-based Macs require a GUID partition scheme - Apple Support
    Ignore the stuff about firmware updates and just look at the changing GUID partition scheme.
    Good luck - and don't forget about the full backup BEFORE making this sort of change.

  • SSD wear levelling with GPT partitioning

    does GPT help with SSD wear levelling? Is data written randomly to the physical drive, while the GPT table keeps track of which is what? I ask because it is recommended to have a single partition on an SSD for wear levelling. However, I would like to partition it to multi-boot several distros. It seems that GPT behaves like LVM because when I resize a gpt partition, it is done instantly (very quickly), as if data is not being physically re-allocated. On the other hand, it could be because my SSD is almost empty--very few personal files and the rest is just linux. So, this is why I asked the question. The other advantage of partitioning is that the dd command will run much faster and produce smaller image files, which allows me to do incremental backups.
    I hope this makes sense.
    Thanks

    Not exactly what I meant. Some online discussions suggest that partitioning will confine disk read/write activity to a certain region of the SSD, thus wearing it out quickly. On the other hand, other discussions suggest that wear-levelling protection happens at a lower level, in the SSD controller, which is indifferent to partitioning - as far as it is concerned, partitioning is just data organization at the OS level. So you can partition as much as you want and the wear-levelling protection will still evenly distribute writes across the entire SSD, regardless of partition boundaries.
    When I used to resize partitions on my HDD with MBR, it took a very long time because data had to be re-arranged. However, when I resize partitions on the SSD with GPT, it happens instantly as if the data are never re-arranged--only the partition table is modified.
    Is this due to using GPT or is it because SSD wear levelling protection does not care about partitioning, be it MBR or GPT?
    Ultimately, I want to make sure that creating 10 partitions, and using one of the partitions frequently with my main distro, is not going to wear out the SSD prematurely.
    firekage wrote:By wear leveling you mean trim? You can enable trim in fstab.
    Last edited by hmnxyz (2015-04-13 22:49:53)
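    For reference, the fstab TRIM option mentioned above looks like this (device and mount point are examples):
    {code}
    # /etc/fstab - continuous TRIM for an ext4 root via the discard mount option
    /dev/sda1  /  ext4  defaults,noatime,discard  0  1
    {code}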

  • I want to install OS X Mavericks on my MacBook Pro which is not with guid partition table

    I want to install OS X Mavericks on my MacBook Pro, which had both Mountain Lion & Windows 8.1. It's not in GUID partition table format, so I couldn't install Mavericks. Is there any way to change it into GUID partition table format & install Mavericks without losing Windows from my hard disk?

    Open the App Store and upgrade iPhoto to the Mavericks version.
    iWork and iLife for Mac come free with every new Mac purchase. Existing users running Mavericks can update their apps for free from the Mac App Store℠. iWork and iLife for iOS are available for free from the App Store℠ for any new device running iOS 7, and are also available as free updates for existing users. GarageBand for Mac and iOS are free for all OS X Mavericks and iOS 7 users. Additional GarageBand instruments and sounds are available for a one-time in-app purchase of $4.99 for each platform.
    The iWork apps are free with a new iOS device since 1 SEP 2013. They are free with a new Mac since 1 OCT 2013. They are also free with the upgrade to OS X Mavericks 10.9 if you had the previous version installed when you upgraded.

  • Correcting a bad block on ext4 and with GTP partition table

    Hello,
    I ran a SMART offline test today which came back as a bad block:
    # 1  Extended offline    Completed: read failure       30%      8363         3212759896
    This is my first run-in with a bad block, and since these drives are big and relatively new, I want to be proactive and fix any problems as they arise. Here is my setup:
    * I have 2x 2TB HDDs of same make and model, with the device link being /dev/sdc and /dev/sdd. /dev/sdc is the one with the error.
    * These two disks are linked via a Linux RAID 1 array under /dev/md0 which is then mounted on /storage.
    * Both drives have only 1 partition under a GUID Partition Table (GPT)
    I've looked around to try to find info on fixing bad blocks, and I came across this: smartmontools.sourceforge.net/badblockhowto.html
    However, it seems to be out-dated and geared toward tools like fdisk (which I cannot use for GPT) and the ext2/3 filesystems (although, due to backwards compatibility, I'm sure it works with ext4 as well), and a lot of the commands give things like "Couldn't find valid filesystem superblock."
    Can someone point me in the right direction as to how I can fix this issue?
    EDIT:
    My noob is showing. I got the commands above to work, and when I check to see which file is using the bad block it shows this (after all the calculations involved, the block was 401594731):
    debugfs:  icheck 401594731
    Block   Inode number
    401594000       <block not found>
    So I'm assuming that there isn't a file assigned to it (empty space?). But then, when I use dd to read from it, it seems to read just fine:
    sudo dd if=/dev/md0 of=my.block skip=401594731 bs=4096 count=1
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0222587 s, 184 kB/s
    I think it's able to read it since the other disk in the RAID1 array doesn't have the bad block. But I just want to make absolutely sure that there is no file assigned to that block before I nuke it. Given the above information, would it be safe to remove this block from service?
    Last edited by XtrmGmr99 (2012-01-26 01:17:51)

    Yes I think the block is not in use. You can do
    debugfs: testb 401594731
    which will state it clearly ("not in use" vs "marked in use")
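    For completeness, the usual next step from the smartmontools bad-block HOWTO, once the block is confirmed unused, is to overwrite it so the drive reallocates it. A sketch using the numbers from this thread - destructive, so double-check everything first:
    {code}
    # force reallocation of the (unused) bad block by writing zeros over it
    dd if=/dev/zero of=/dev/md0 bs=4096 count=1 seek=401594731
    sync
    {code}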

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, though I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible) and how you are taking the export, please?
    Did you try to take a SQL trace of the export process to see what is going on behind it? This is an introduction if you need one:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
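    A minimal sketch of tracing the export session with event 10046 (shown system-wide for brevity; scoping it to the single exp session is preferable):
    {code}
    -- enable extended SQL trace before starting exp, then switch it off
    ALTER SYSTEM SET EVENTS '10046 trace name context forever, level 8';
    -- ... run the exp job, then:
    ALTER SYSTEM SET EVENTS '10046 trace name context off';
    {code}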

  • Set table level degree for partitioned table

    Hi all,
    Usually, we set a degree of 2 or 4 on big tables, so that the CBO will choose a parallel select for these tables where possible.
    Let's assume a case where table1 joins table2: non-partitioned table1 has 20m rows and degree 2; partitioned table2 has 50m rows and no parallel degree.
    When I checked the execution plan, the CBO uses parallel execution and uses PX BLOCK ITERATOR on table1 as expected. But I don't know whether table2 is selected in parallel too.
    I mean, I am not sure whether the CBO launches slave processes against table2 or just selects table2 as a whole.
    And from your tuning or architecture experience, do you think we should set a degree for a partitioned table, given that a partitioned table can be parallelized based on partitions?
    best regards,
    Leon

    user12064076 wrote:
    And with your tuning or architecture experiences, do you think whether we should set degree for a partitioned table as the partitioned table can be parallelized based on partitions?
    What version of Oracle?
    A site I worked at recently preferred not to hard-code the degree but to let Oracle choose it at runtime; they felt it offered better allocation of system load than hard-coding the values. They were on 10g Release 2.
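    A sketch of the two approaches (table name from the post):
    {code}
    -- let Oracle choose the degree at runtime (DEFAULT parallelism)
    ALTER TABLE table2 PARALLEL;
    -- versus hard-coding a fixed degree
    ALTER TABLE table2 PARALLEL (DEGREE 4);
    {code}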

  • EXPORT at schema level, but exclude some tables within the export

    I have been searching, but had no luck in finding the correct syntax for my situation.
    I'm simply trying to export at the schema level, but I want to omit certain tables from the export.
    exp cltest/cltest01@clprod file=exp_CLPROD092508.dmp log=exp_CLPROD092508.log statistics=none compress=N
    Thanks!

    Hi,
    Think simple first: you can use the TABLES clause.
    Example:
    exp scott/tiger file=empdept.expdat tables=(EMP,DEPT) log=empdept.log
    That works if your schema contains only a small number of tables.
    If you have a large number of tables (hundreds of tables in your schema), an alternative solution that works all around:
    Try to create a new schema and, using CTAS, create copies of the tables you need (skipping the unwanted ones) from the current schema.
    Do an export and, once the job is done, recreate the backup from the new schema
    and import it into the destination DB.
    - Pavan Kumar N
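    A minimal sketch of the CTAS idea, with hypothetical schema and table names:
    {code}
    -- copy only the wanted tables into a scratch schema, then export that schema
    CREATE TABLE newschema.emp  AS SELECT * FROM cltest.emp;
    CREATE TABLE newschema.dept AS SELECT * FROM cltest.dept;
    -- then: exp newschema/pwd@clprod file=exp_subset.dmp log=exp_subset.log
    {code}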

  • Partition Eliminiation on views and joins with other partitioned tables

    I have a bunch of tables that are daily partitioned and others which are not. We constantly join all these tables. I have noticed that partition elimination doesn't happen in most cases, and I want some input or pointers on this.
    Case 1
    We have a view that joins a couple of partitioned tables on the id fields, and the partition key is a timestamp with local time zone.
    TABLEA
    tableaid
    atime
    TABLEB
    tablebid
    tableaid
    btime
    The view basically joins on tableaid, a.tableaid = b.tableaid(+), and a bunch of other non-partitioned tables. atime and btime are the individual partition keys in the tables, and these times do not match up the way the ids do; in other words there is a little correlation, but they can be very different.
    When I run a query against the view providing a time range for btime, I see partition elimination on tableb in the explain plan, with KEY in Pstart/Pstop. But it is a full table scan on tablea. I was hoping there would be some kind of partition elimination here, since it is also partitioned daily on the same datatype, timestamp with local time zone.
    Case 2
    I have a couple of more partitioned tables
    TABLEC
    tablecid
    tablebid
    ctime
    TABLED
    tabledid
    tablebid
    dtime
    As you can see these tables are joined with tablebid and the times here generally correlate to tableb's timestamp as well.
    Sub Case 1
    When I join these tables to the view and give a time range on btime, I see partition elimination happening on tableb but not on tablea or any of the other tables.
    Sub Case 2
    Then I got rid of the view and wrote a query similar to the view, where I join on tableaid (tablea and tableb), then on tablebid (tableb, tablec and tabled) and a few other tables, and executed the query with a time range on btime - and I still see that partition elimination happens only on tableb.
    I thought that if other tables are also partitioned on a similar key, partition elimination should happen on them too? Or what am I missing here that is preventing partition elimination on the other tables?

    Performance is of utmost importance, and partition pruning is going to help with that. I guess that's what I'm trying to achieve.
    To achieve partition elimination on tablec, tabled, etc. I'm doing an outer join with btime, and that seems to work. Also, since after partition elimination I usually don't need a full table scan (the period I will be querying will mostly be small), I also created a local index on the id field that I'm joining on, so that it can do a "TABLE ACCESS BY LOCAL INDEX ROWID"; this way it should perform better than a global index, since the index traversal path is small compared to a global index.
    Of course I still have the problem of tablea not being pruned, since I cannot do an outer join on two fields of the same table (id and time). So I might just include the time criteria again, and perhaps increase the time range a little beyond what the user actually submitted, to try not to miss rows.
    Any suggestions are always welcome.
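    A sketch of that last idea (names from the post; the widened range is illustrative):
    {code}
    -- repeating a padded time predicate on tablea lets the optimizer prune its
    -- partitions too, at the cost of a slightly wider scan window
    SELECT a.tableaid, b.tablebid
    FROM   tablea a, tableb b
    WHERE  a.tableaid = b.tableaid
    AND    b.btime BETWEEN :t1 AND :t2
    AND    a.atime BETWEEN :t1 - 1 AND :t2 + 1;
    {code}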

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run as parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column...
    Thank you for your help.
    Michael

    Yes, I did. I tried with force parallel dml, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in Parallel.
    I tried with another statement :
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems but not the INSERT AS SELECT !
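    For reference, a sketch of the session setup the reply refers to:
    {code}
    -- force parallel DML for the session, then re-check the plan
    ALTER SESSION ENABLE PARALLEL DML;
    ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
    EXPLAIN PLAN FOR DELETE FROM t1;
    SELECT * FROM TABLE(dbms_xplan.display);
    {code}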

  • What is a PARTITION TABLE?

    Product: ORACLE SERVER
    Written: 2004-08-13
    What is a PARTITION TABLE?
    ==========================
    PURPOSE
    Basic concepts of partitioned tables.
    SCOPE
    The Partitioning Option is not supported in 8~10g Standard Edition.
    Explanation
    Let's look at the partitioned tables provided by ORACLE 8.
    1. What is a partitioned table?
    Partitioning means splitting a large object into small, manageable pieces.
    It is possible only for tables and indexes; clusters and snapshots cannot be partitioned.
    Each partition is stored in its own segment.
    In Oracle8, a table is split into partitions based on a key value.
    Each partition operates independently.
    For example, a table partition can be recovered from a transaction issued by a
    DML (insert, update, delete) statement without affecting the other partitions.
    DBA_TAB_PARTITIONS holds the storage information and so on for each partition.
    2. How do you create a partitioned table?
    Create it by specifying the partition key(s) and a range for each partition.
    Partition names may be supplied; if omitted, ORACLE generates them as SYS_Pn.
    Example:
    Create the emp table partitioned on the EMPNO column as the partition key.
    CREATE TABLE emp
    (EMPNO NUMBER(5))
    PARTITION BY RANGE(EMPNO)(
    partition emp_p1 VALUES LESS THAN (2000),
    partition emp_p2 VALUES LESS THAN (4000),
    partition emp_p3 VALUES LESS THAN (MAXVALUE));
    select * from emp partition (emp_p3);
    ACCT_NO PERSON SALES_AMOUNT WEEK_NO
    1000 abc 10 30
    insert into emp partition (emp_p3) values (7000, 'bcd', 10, 30);
    3. Dictionary information for partitioned tables
    . storage parameters
    --> DBA_TAB_PARTITIONS
    . upper partition bound of a partitioned table
    --> select high_value, partition_position from sys.dba_tab_partitions
    where table_name = 'SALES';
    4. What are the restrictions on partitioned tables?
    a) Datatype restrictions
    A partitioned table cannot have LONG or LONG RAW datatypes.
    It also cannot have LOB datatypes (BLOB, CLOB, NCLOB, or BFILE) or object
    types. LOB types are expected to become possible from V8.1.
    b) Clusters cannot be partitioned.
    c) Bitmap restrictions
    Bitmap indexes are possible only as local indexes on a partitioned table;
    global bitmap indexes are not possible.
    d) Physical restrictions
    A partitioned table cannot span several databases.
    It can live in only one instance.
    5. What are Local Prefixed and Local Non-Prefixed indexes?
    A local index is an index on a partitioned table whose entries hold ROWIDs
    pointing to the rows of a single partition only.
    It is usually built on the partition key of the partitioned table.
    This is called equi-partitioning.
    A prefixed index is one whose leading index key(s) correspond to the partition key.
    A non-prefixed index is one whose leading column does not include the
    partition key.
    6. What is a global index?
    A global index can only be prefixed; non-prefixed global indexes are not
    provided.
    A global index holds ROWIDs for the entire table.
    7. How do you use partitions?
    Use the Partition-Extended Table Name,
    that is, "schema.table PARTITION part_name", where schema is the schema owner,
    table is the base table name, PARTITION is an optional keyword,
    and partition_name is the name of the partition.
    A partition-extended table name can be used in the following statements:
    INSERT
    UPDATE
    DELETE
    LOCK TABLE
    SELECT
    Q) Inserting into a partition:
    SQL> insert into sales partition (p8) values (7000, 'bcd', 10, 30);
    Q) Deleting from a partition:
    SQL> delete from sales partition (p8);
    Q) Updating a partition:
    SQL> update sales partition (p8) set sales_amount = 20;
    Q) Selecting from a partition:
    SQL> select * from sales PARTITION (Q4);
    8. What are the restrictions on partition-extended table names?
    . They cannot contain remote schema objects.
    A partition-extended table name cannot contain a dblink, nor a synonym that
    resolves to a table through a dblink.
    If you want to use a remote partition, you can create a view at the remote
    site using the partition-extended table name.
    . Partition-extended table names cannot be used in PL/SQL.
    To use a SQL statement containing a partition-extended table name through
    the DBMS_SQL package, you must use a view.
    . Only base tables are allowed.
    Partition extension is allowed only on base tables; it is not allowed on
    synonyms, views, or other schema objects.
    9. What are the differences between Table-Level and Partition-Level Export/Import?
    A table-level export exports an entire partitioned or non-partitioned table,
    together with its indexes and all other objects dependent on that table.
    That is, all partitions of a partitioned table are exported. (This applies
    to both direct path export and conventional path export.)
    All export modes (full, user, table) support table-level export.
    A partition-level export lets the user export one or more partitions of a
    table.
    Full database mode and user mode do not support partition-level export;
    it is available only in table mode.
    Also, because incremental exports (incremental, cumulative, and complete)
    are possible only in full database mode, partition-level export does not
    support incremental exports.
    A partition-level import cannot import a non-partitioned exported table.
    However, a partitioned table can be imported from a non-partitioned
    exported table with a table-level import.
    That is, a partition-level import is possible only when the exported table
    is partitioned and present in the export file.
    If a partition name in the export file is not valid, import issues a
    warning message.
    In all cases, partitioned data is exported in a form that allows selective
    import.
    When specifying a table name for export or import, use
    TABLES=schema_name.table_name:partition_name.
    A partition-level export makes it possible to export one or more specific
    partitions within a table.
    If no partition name is given, the entire table is used.
    The following is an example of a partition-level export:
    exp system/manager FILE = export.dmp TABLES = (scott.b:px, scott.b:py,
    mary.c, d:qb)
    In this example, scott.b must be a partitioned table, and px and py are two
    of its partitions.
    mary.c may be a partitioned or non-partitioned table, but table d must be
    a partitioned table and qb one of its partitions.
    If a table name, or a partition name of the same table, is used more than
    once, export raises an error. For example, the following partition-level
    export command raises an error because table sc and partition px are
    duplicated:
    exp system/manager FILE = export.dmp TABLES = (sc, sc:px, sc)
    10. How do you convert a partitioned table or view into a non-partitioned table?
    To convert the table, create a dummy table and run the
    ALTER TABLE ... EXCHANGE PARTITION command.
    This command updates the data dictionary very quickly.
    SPLIT PARTITION is useful for handling very large partitioned tables or
    views.
    SQL (see the sketch at the end of this note):
    1. Create a dummy table dummy_t for the partition.
    2. ALTER TABLE t EXCHANGE PARTITION ... WITH TABLE dummy_t
    3. DROP TABLE t
    exp/imp:
    1. Export the table.
    2. Drop the table.
    3. Recreate the table with partitions.
    4. Import the table data.
    11. How do you merge table partitions?
    export/import:
    This is possible with partition-level export and import.
    1. Create a temporary table holding the partition's data.
    2. Drop the partition to be merged.
    3. insert into table (select * from temporary table)
    4. Drop the temp table.
    However, splitting a table partition cannot be done with export/import.
    Example
    Reference Document
    ------------------
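    A minimal sketch of the EXCHANGE PARTITION conversion described in section 10, assuming example names t, p1 and dummy_t:
    {code}
    -- create a non-partitioned table of the same shape, swap the partition's
    -- segment into it, then drop the original partitioned table
    CREATE TABLE dummy_t AS SELECT * FROM t WHERE 1 = 0;
    ALTER TABLE t EXCHANGE PARTITION p1 WITH TABLE dummy_t;
    DROP TABLE t;
    {code}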

    Before we go too far with this, if you manually query with TO_DATE on the variable instead of TO_CHAR on the column, does the query actually use the index?
    The TO_CHAR on the column will definitely stop Oracle from using any index on the column. If the query will use the index if you TO_DATE the variable, as I see it, you have three options. First, fix the application problem that won't let you use TO_DATE from the application. Second, change the application to call a function returning a ref cursor, get the date string as a parameter to the function, and do the TO_DATE in the function.
    Third, you could consider creating a function-based index on TO_CHAR(transaction_date, 'dd-Mon-yy'). This would be the least desirable option, particularly if you would also be selecting records based on a range of transaction_dates, since it loses a lot of information that the optimizer could use in devising an efficient query plan. It could also change your results for a range scan.
    John
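    A sketch of options one and three from the reply (table and column names are hypothetical):
    {code}
    -- option 1: convert the variable, not the column, so a plain index stays usable
    SELECT * FROM txn WHERE transaction_date = TO_DATE(:d, 'dd-Mon-yy');
    -- option 3 (least desirable): index the exact expression the application uses
    CREATE INDEX txn_fbi ON txn (TO_CHAR(transaction_date, 'dd-Mon-yy'));
    {code}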

  • Tablespace about partitioned table

    Guys,
    I've had some issues with partitioned tables. The scenario is like this:
    I had a table which was partitioned. I had to move all the partitions of this table to a new tablespace, which worked fine, and I dropped the old tablespace. But now, when I try to split a partition, the table still refers to the old tablespace name at a higher level. How do I change this?
    I am referring to the USER_DATA on top, which is the default tablespace for the user.
    It looks something like :
    CREATE TABLE XCM_CU19_ASGN_DLR
    (
      XCM_CONSUMER_PK    VARCHAR2(14 BYTE)          NOT NULL,
      COUNTRY_ISO3_C     VARCHAR2(3 BYTE)           NOT NULL,
      CUST_CUSTOMER_R    NUMBER(11)                 NOT NULL,
      CUD_DLR_CTRY_C     VARCHAR2(3 BYTE)           NOT NULL,
      CUD_DLR_C          VARCHAR2(8 BYTE)           NOT NULL,
      CUD_DLR_ROLE_C     INTEGER,
      CUD_DLR_REF_X      VARCHAR2(15 BYTE),
      CUD_DLR_ST_Y       DATE,
      CUD_DLR_END_Y      DATE,
      P_CUD_DLR_SRC_C    INTEGER,
      P_UPDATE_ID_C      VARCHAR2(8 BYTE),
      P_UPDATE_S         DATE,
      XCM_UPDATE_S       DATE,
      IBMSNAP_LOGMARKER  TIMESTAMP(6)
    )
    TABLESPACE USER_DATA
    PCTUSED    40
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    PARTITION BY RANGE (COUNTRY_ISO3_C)
    (
      PARTITION PRT001 VALUES LESS THAN ('CZE')
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   ),
      PARTITION PRT002 VALUES LESS THAN ('DNK')
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   ),
      PARTITION PRT003 VALUES LESS THAN ('FIN')
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   ),
      PARTITION PRT004 VALUES LESS THAN ('GBR')
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   ),
      PARTITION PRT005 VALUES LESS THAN ('GIB')
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   ),
      PARTITION PRT006 VALUES LESS THAN ('ITA')
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   ),
      PARTITION PRT007 VALUES LESS THAN ('USA')
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   ),
      PARTITION PRT008 VALUES LESS THAN (MAXVALUE)
        LOGGING
        NOCOMPRESS
        TABLESPACE XCM04_DATA
        PCTUSED    40
        PCTFREE    10
        INITRANS   10
        MAXTRANS   255
        STORAGE    (
                    INITIAL          20M
                    NEXT             20M
                    MINEXTENTS       1
                    MAXEXTENTS       501
                    PCTINCREASE      0
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   )
    )
    NOCACHE
    PARALLEL ( DEGREE 8 INSTANCES 1 )
    ENABLE ROW MOVEMENT;
    Thanks

    I've done this before but I couldn't find it quickly... hence I thought it was the MOVE TABLESPACE clause. But now I think what I did was:
    ALTER TABLE xxx MODIFY DEFAULT ATTRIBUTES TABLESPACE yyy;
    See
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#i2131000
    Cheers,
    Colin
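    A sketch of what that enables, with names from the thread (the split value is an example):
    {code}
    -- point the table-level default at the new tablespace; partitions created
    -- by a later split then inherit it
    ALTER TABLE XCM_CU19_ASGN_DLR MODIFY DEFAULT ATTRIBUTES TABLESPACE XCM04_DATA;
    ALTER TABLE XCM_CU19_ASGN_DLR SPLIT PARTITION PRT008 AT ('ZAF')
      INTO (PARTITION PRT008A, PARTITION PRT008B);
    {code}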

  • How to rebuild partition table

    Hi, everyone
    Two months ago I broke my partition table by trying to install Windows 8. I had it repaired with TestDisk, but the problem is that GParted now sees my disk as one 300 GB stretch of unpartitioned space; fdisk, however, recognizes the partition table. I was wondering what will happen if I press 'w' in fdisk - will this format the partitions and erase everything, or will it just try to do something similar to what TestDisk did? I want to do this because I now have a 50 GB partition left over from Windows and I want to add it to my / partition. Before these problems I used to do this kind of operation with GParted.
    Thanks

    ewaller wrote:Kind of like tearing up the map.
    That's why you xerox the map (dd ... bs=440 count=1, or at least fdisk -l /dev/... >somefile) before you rip it up.  The latter is all I needed to save my butt when I borked a PT.  As long as you use your head, there is no reason dicking with the partition table particularly warrants a full data backup.
    That said, you can bet your @$$ I'd back up even "reacquirable" data in this situation, unless doing so was more trouble than reacquiring it would be.  "An ounce of prevention" and such.  Unique data, fugeddaboudit.  I have at least three copies of stuff I can't replace and don't want to lose.
    Last edited by alphaniner (2012-05-16 18:20:32)
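    A concrete sketch of those two backups (device name is an example; bs=512 also captures the partition entries, which sit in bytes 446-510 of the MBR):
    {code}
    # save the first sector (boot code plus the MBR partition table)
    dd if=/dev/sda of=mbr.bin bs=512 count=1
    # and keep a human-readable copy of the layout
    fdisk -l /dev/sda > partition_table.txt
    {code}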

  • Ask about DML Handler for Streams at the Schema level ?

    Hi all !
    I use Oracle version 10.2.0.
    I have two DBs: A (at machine A, used as the source database) and B (at machine B - the destination database). Some changes from A will be applied to B.
    At B, I installed the Oracle client to use the EMC (Enterprise Manager Console) tool to generate some scripts, and used them to configure the Streams environment. I configured Streams at the schema level (DML and DDL) => I succeeded! But I have two problems:
    + I wrote a DML handler, called "emp_dml_handler", and want to set it on the EMP table only. So, must I use DBMS_STREAMS_ADM.ADD_TABLE_RULES? (I configured DBMS_STREAMS_ADM.ADD_SCHEMA_RULES), such as:
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => '"HOSE"',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_BOSCHOSE_REGRES',
    queue_name => 'apply_dest_hose',
    include_dml => true,
    include_ddl => true,
    source_database => 'DEVELOP.REGRESS.RDBMS.DEV.US.ORACLE.COM');
    END;
    and after:
    DECLARE
    emp_rule_name_dml VARCHAR2(50);
    emp_rule_name_ddl VARCHAR2(50);
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'HOSE.EMP',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_BOSCHOSE_REGRES',
    queue_name => 'apply_dest_hose',
    include_dml => true,
    include_ddl => true,
    source_database => 'DEVELOP.REGRESS.RDBMS.DEV.US.ORACLE.COM',
    dml_rule_name => emp_rule_name_dml,
    ddl_rule_name => emp_rule_name_ddl);
    DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name => emp_rule_name_dml,
    destination_queue_name => 'apply_dest_hose');
    END;
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'HOSE.EMP',
    object_type => 'TABLE',
    operation_name => 'UPDATE',
    error_handler => false,
    user_procedure => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name => NULL);
    END;
    ... similar for INSERT and DELETE...
    I think that I should only configure Streams at the schema level and exclude the EMP table - am I right?
    + At the source, the EMP table has a primary key. And I configured:
    ALTER TABLE HOSE.EMP ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    ==> So, at the destination, is there some work I must do to configure a substitute key for the EMP table?
    Any ideas about my problems?
    Thanks
    Edited by: changemylife on Sep 24, 2009 10:45 PM

    If you want to discard emp from the schema rule, then just add a negative rule, either on capture or apply.
    What is the purpose of:
    DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name => emp_rule_name_dml,
    destination_queue_name => 'apply_dest_hose');
    It sounds like you are enqueuing into 'apply_dest_hose' all the rows for this table that come from ... 'apply_dest_hose'.
    Next you declare a DML_HANDLER that is attached to nobody :
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'HOSE.EMP',
    object_type => 'TABLE',
    operation_name => 'UPDATE',
    error_handler => false,
    user_procedure => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name => NULL);           <----- nobody rules the world!
    END;
    The sequence of evaluation is normally:
    APPLY_PROCESS (reader)
              |
              | -->  RULE SET
                          |
                          | --> RULE .....
                          | --> RULE
                                     |
                                     | --> evaluate OK then --> exist DML_HANDLER  --> YES --> call DML_HANDLER --> on LCR.execute call coordinator
                                                                                            |
                                                                                            | NO
                                                                                            |                                                                 
                                                                                       Implicit apply (give LCR to coordinator which dispatch to one apply server)    
    Since your dml_handler is attached to a null apply process, it will never be called by anybody, and your LCRs for table emp will be implicitly applied by the apply process.
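    A minimal sketch of the negative rule suggested above (values follow the poster's own configuration):
    {code}
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name     => 'HOSE.EMP',
        streams_type   => 'APPLY',
        streams_name   => 'STRMADMIN_BOSCHOSE_REGRES',
        queue_name     => 'apply_dest_hose',
        include_dml    => true,
        include_ddl    => true,
        inclusion_rule => false);  -- false adds the rule to the negative rule set
    END;
    /
    {code}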
