How to dd Partition Table through First Partition

Hi; I have a flash drive with a single NTFS partition that occupies less than the whole drive (so there's a bunch of empty/unused space at the end of the drive). I'd like to use dd to make an image from the very beginning of the drive (to catch the partition table) through the end of the NTFS partition, ignoring the empty space after it. I tried the commands below, but they didn't seem to work (a copy of the drive had partition table issues):
dd count=1 bs=512 if=/dev/sdb of=partition_table.img
dd bs=8M if=/dev/sdb1 of=sdb1.img
cat sdb1.img >> partition_table.img
rm sdb1.img
mv partition_table.img mydrive.img
Any ideas? Am I failing to capture something between the partition table and the first partition? Or is it possible my partition table is not 512 bytes?

My suggestion is to dd the MBR (dd if=/dev/XXX of=<image file> bs=512 count=1) and use ntfsclone or fsarchiver to make the image.
But if you want to use dd for the whole image:
$ fdisk -l /dev/sda
Disk /dev/sda: 251.1 GB, 251059544064 bytes, 490350672 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4d9ddc04
Device     Boot      Start        End     Blocks  Id  System
/dev/sda1            2048      2099199    1048576  83  Linux
/dev/sda2  *      2099200      2303999     102400  83  Linux
/dev/sda3         2304000    490350671  244023336  8e  Linux LVM
To image everything through the first partition, I would run dd if=/dev/sda of=image bs=512 count=2099200. The count is the partition's end sector plus one, because sectors are numbered from 0, not 1. You can also raise bs for better performance, but you'll have to scale the count down accordingly: with bs=1M, one block is 2048 sectors, so the count becomes 2099200 / 2048 = 1025 (this 2048 is just 1 MiB / 512 bytes, and is completely unrelated to the fact that the partition starts on sector 2048). But do the math for your own drive.
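Putting it together, a minimal sketch of the whole copy (the device name and sector numbers come from the fdisk output above; adjust them for your drive):
# End sector of the last partition to include, taken from `fdisk -l`
END=2099199
# Copy sector 0 (MBR + partition table) through the end of /dev/sda1
sudo dd if=/dev/sda of=image.img bs=512 count=$((END + 1))
# Same copy with a 1 MiB block size; 1 MiB = 2048 sectors, so this form
# only works cleanly when (END + 1) is divisible by 2048
sudo dd if=/dev/sda of=image.img bs=1M count=$(((END + 1) / 2048))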

Similar Messages

  • Alter range partition table to Interval partitioning table.

    Hi DBAs,
    I have a very big range-partitioned table.
    Recently we upgraded our database to 11gR2, which has a feature called interval partitioning.
    Now I want to modify that existing range-partitioned table to use interval partitioning.
    Can we alter a range-partitioned table into an interval-partitioned table?
    I googled for the syntax but didn't find it; can anyone help me out on this?
    Thanks.

    If you ignore the "alter session set NLS_CALENDAR=PERSIAN;" during create/alter, everything else seems to work.
    When you set the "alter session..." during inserts, the rows get inserted into the correct partitions.
    Only thing is when you look at HIGH_VALUE, you need to convert from the default GREGORIAN to PERSIAN.
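    For what it's worth, 11gR2 can do this conversion with a single statement; a minimal sketch, assuming a monthly range key (the table name and interval are illustrative):
    ALTER TABLE big_range_tab SET INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'));
    -- Existing range partitions are kept; new partitions are created on demand.
    SELECT table_name, partitioning_type, interval FROM user_part_tables;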

  • How to make a table be partitioned automatically in 10.1.0.2.0?

    Hi,
    I am using Oracle database 10.1.0.2.0.
    I created a table and partitioned it on a date column.
    Initially I created partitions for the year 2007.
    Now I need to create partitions for 2008, so every year I have to add partitions manually.
    Could anyone help me automate this process?
    Thank you,
    Regards,
    Gowtham sen.

    I don't know how much it would affect performance.
    In my case, I have around 100 tables partitioned on a date column.
    On average I load about 5 crore (50 million) records into a table per load cycle.
    If I used a trigger, it would have to verify for every row whether the value in the column is a new value or not.
    So I am thinking it would hurt.
    Thank you,
    Regards,
    Gowtham Sen.
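    Since 10.1 has no interval partitioning (that arrived in 11g), the usual approach is a scheduled job that adds the next year's partition ahead of time; a minimal sketch (the table and partition names are illustrative):
    ALTER TABLE sales_by_date ADD PARTITION p2008
      VALUES LESS THAN (TO_DATE('2009-01-01', 'YYYY-MM-DD'));
    -- ADD PARTITION only works above the current highest bound; if a
    -- MAXVALUE partition exists, use SPLIT PARTITION on it instead.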

  • Modify HUGE HASH partition table to RANGE partition and HASH subpartition

    I have a table with 130,000,000 rows hash partitioned as below
    ----RANGE PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)(
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE)
    );
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easy if I create a range partition on the YRMO_NBR field and then make the current hash partitioning a sub-partition.
    How do I change the current partition of the table from HASH partition to RANGE partition and a sub-partition (HASH) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSIT PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2
    );
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance

    Sorry for the confusion in the first part where I had given a RANGE PARTITION instead of HASH partition. Pls read as follows;
    I have a table with 130,000,000 rows hash partitioned as below
    ----HASH PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY HASH (C_NBR)
    PARTITIONS 2
    STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR. I think it will be easy if I create a range partition on the YRMO_NBR field and then make the current hash partitioning a sub-partition.
    How do I change the current partition of the table from hash partition to range partition and a sub-partition (hash) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSIT PARTITION-- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2
    );
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance
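    There is no ALTER TABLE path from hash to range/hash composite partitioning; the table has to be rebuilt, either offline (CTAS plus rename) or online with DBMS_REDEFINITION. A minimal online sketch, assuming an interim table TEST_PART_NEW pre-created with the composite layout shown above (run by a suitably privileged user):
    BEGIN
      -- 1) Check the table can be redefined by ROWID (no primary key needed)
      DBMS_REDEFINITION.CAN_REDEF_TABLE(USER, 'TEST_PART',
                                        DBMS_REDEFINITION.CONS_USE_ROWID);
      -- 2) Start copying rows into the composite-partitioned interim table
      DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'TEST_PART', 'TEST_PART_NEW',
                                          options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
      -- 3) Swap the tables: TEST_PART now has the new layout and the data.
      --    Recreate TEST_PART_IX_001 on the interim table before this step
      --    (or clone it with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS).
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'TEST_PART', 'TEST_PART_NEW');
    END;
    /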

  • Is it possible for a partitioned table to create a partition by itself?

    Hi,
    I migrated a table on our production system to a range-partitioned table, partitioned by year.
    But I wondered: after the new year, must I add a new partition for 2011 again?
    For example,
    when a new record comes in for 2011 and there is no partition for 2011 yet, the table should create a new partition for 2011 by itself.
    Must I add a new partition myself every year? That is a time-consuming job.
    Yes, I know about MAXVALUE, but I don't want to use it; I want this done automatically.
    regards,

    Hi,
    I haven't tried EXECUTE IMMEDIATE. It doesn't matter, because I haven't found out how to build the partition name from a variable automatically.
    DB version is 10.2.0.1.
    table script is:
    CREATE TABLE INVOICE_PART1
    (
      ID NUMBER(10) NOT NULL,
      PREPARED DATE NOT NULL,
      TOTAL NUMBER,
      FINAL VARCHAR2(1 BYTE),
      NOTE VARCHAR2(240 BYTE),
      CREATED DATE NOT NULL,
      CREATOR VARCHAR2(8 BYTE) NOT NULL
    )
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 1M
      NEXT 1M
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
    )
    LOGGING
    PARTITION BY RANGE (CREATED)
    (
      PARTITION INV08 VALUES LESS THAN (TO_DATE(' 2010-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING
        NOCOMPRESS
        TABLESPACE PROD
        PCTUSED 40
        PCTFREE 10
        INITRANS 1
        MAXTRANS 255
        STORAGE (
          INITIAL 1M
          NEXT 1M
          MINEXTENTS 1
          MAXEXTENTS 2147483645
          PCTINCREASE 0
          FREELISTS 1
          FREELIST GROUPS 1
          BUFFER_POOL DEFAULT
        ),
      PARTITION INV09 VALUES LESS THAN (TO_DATE(' 2010-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING
        NOCOMPRESS
        TABLESPACE PROD
        PCTUSED 40
        PCTFREE 10
        INITRANS 1
        MAXTRANS 255
        STORAGE (
          INITIAL 1M
          NEXT 1M
          MINEXTENTS 1
          MAXEXTENTS 2147483645
          PCTINCREASE 0
          FREELISTS 1
          FREELIST GROUPS 1
          BUFFER_POOL DEFAULT
        ),
      PARTITION INV10 VALUES LESS THAN (TO_DATE(' 2010-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING
        NOCOMPRESS
        TABLESPACE PROD
        PCTUSED 40
        PCTFREE 10
        INITRANS 1
        MAXTRANS 255
        STORAGE (
          INITIAL 1M
          NEXT 1M
          MINEXTENTS 1
          MAXEXTENTS 2147483645
          PCTINCREASE 0
          FREELISTS 1
          FREELIST GROUPS 1
          BUFFER_POOL DEFAULT
        ),
      PARTITION INV VALUES LESS THAN (MAXVALUE)
        LOGGING
        NOCOMPRESS
        TABLESPACE PROD
        PCTUSED 40
        PCTFREE 10
        INITRANS 1
        MAXTRANS 255
        STORAGE (
          INITIAL 1M
          NEXT 1M
          MINEXTENTS 1
          MAXEXTENTS 2147483645
          PCTINCREASE 0
          FREELISTS 1
          FREELIST GROUPS 1
          BUFFER_POOL DEFAULT
        )
    )
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING
    ENABLE ROW MOVEMENT;
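    Since 10.2 has no interval partitioning (it arrived in 11g), the partition has to be added by a job. A minimal sketch of building the partition name from a date variable with EXECUTE IMMEDIATE, which could be scheduled monthly with DBMS_SCHEDULER (the naming scheme is illustrative):
    DECLARE
      v_month DATE := TRUNC(ADD_MONTHS(SYSDATE, 1), 'MM');      -- first day of next month
      v_pname VARCHAR2(30) := 'INV' || TO_CHAR(v_month, 'MM');  -- e.g. INV12
    BEGIN
      -- Split the new month off the MAXVALUE partition INV
      EXECUTE IMMEDIATE
        'ALTER TABLE INVOICE_PART1 SPLIT PARTITION INV AT (TO_DATE(''' ||
        TO_CHAR(ADD_MONTHS(v_month, 1), 'YYYY-MM-DD') ||
        ''', ''YYYY-MM-DD'')) INTO (PARTITION ' || v_pname || ', PARTITION INV)';
    END;
    /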

  • How to insert into table through reports

    hi,
    I am working on Oracle Reports 10g. I made a report which fetches some data and contains some functions, and I want to insert the values of those functions into a table, updating them if the user runs that report again.
    Like:
    data1 data2 data3
    function1 function2
    The values of functions 1 and 2 should be inserted into a table; if that user already exists in the table, update the row, otherwise insert the values.
    Thanks

    You can use the SRW.DO_SQL built-in procedure. Check the Reports online help from the Reports Builder.
    Rajesh
    FUNCTION CREATETAB RETURN BOOLEAN IS
    BEGIN
      SRW.DO_SQL('CREATE TABLE CHECK (EMPNO NUMBER NOT NULL PRIMARY KEY, SAL NUMBER (10,2)) PCTFREE 5 PCTUSED 75');
      RETURN (TRUE);
    EXCEPTION
      WHEN SRW.DO_SQL_FAILURE THEN
        SRW.MESSAGE(100, 'ERROR WHILE CREATING CHECK TABLE.');
        SRW.MESSAGE(50, 'REPORT WAS STOPPED BEFORE THE RUNTIME PARAMETER FORM.');
        RAISE SRW.PROGRAM_ABORT;
    END;
    Edited by: RajeshAlex on Sep 30, 2008 12:10 PM
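    For the insert-or-update requirement itself, a single MERGE through SRW.DO_SQL keeps it to one statement; a minimal sketch (the table, columns, and the :P_USER / :CF_1 / :CF_2 parameter and formula-column references are illustrative, not from the original post):
    FUNCTION SAVE_RESULTS RETURN BOOLEAN IS
    BEGIN
      SRW.DO_SQL('MERGE INTO report_results r
                    USING (SELECT :P_USER AS run_user FROM dual) s
                    ON (r.run_user = s.run_user)
                    WHEN MATCHED THEN UPDATE SET r.value1 = :CF_1, r.value2 = :CF_2
                    WHEN NOT MATCHED THEN INSERT (run_user, value1, value2)
                    VALUES (:P_USER, :CF_1, :CF_2)');
      RETURN (TRUE);
    EXCEPTION
      WHEN SRW.DO_SQL_FAILURE THEN
        SRW.MESSAGE(100, 'ERROR WHILE SAVING REPORT RESULTS.');
        RAISE SRW.PROGRAM_ABORT;
    END;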

  • How To Sort the Table Through Push Buttons (ASC & Desc)???

    Hi,
    I am facing a problem with the 'Push Buttons' that I have created on my form. I want a push button to sort the table with respect to a specific column: in ascending order on the first push, then in descending order on the second push, then again in ascending order on the third push, and so on.
    Please help me achieve this.
    regards,

    Hello,
    You could try something like this:
    Declare
        LC$Order Varchar2(128) := Get_Block_Property( 'EMP', ORDER_BY ) ;
    Begin
        If Instr( LC$Order, 'DESC' ) > 1 Then
            Set_Block_Property( 'EMP', ORDER_BY, 'EMPNO ASC' ) ;
        Else
            Set_Block_Property( 'EMP', ORDER_BY, 'EMPNO DESC' ) ;
        End if ;
        Execute_Query ;
    End ;
    Francois

  • Solaris Install overwrote MBR and first partition

    I have an ABIT mobo with a VIA chipset, a Sempron 2500 CPU, and a WD 120 GB SATA drive. Not top of the line, but I'm on a budget. I have the SATA drive partitioned for multibooting, and I use the XOSL multibooter. I have 3 OSes already, and several other partitions for backup. I was planning on installing Solaris on another extended partition.
    Kinda slow... kinda slow... hard to see the text on the monitor. Oh, the unofficial installation instructions say I should already have the partition set up, and recommend Ranish Partition Manager. Ranish is my favorite PM, so I quit the install, reboot to DOS, and set up a partition with Ranish. Hmmm. Solaris isn't a file type. What to do, what to do. Later in the installation instructions, it talks about the installer partitioning the hard drive, so maybe I don't have to do it here. I'll leave this partition as FAT32; maybe Solaris will convert it, or I can use it for another backup, or data, or another OS. I exit and reboot to the install.
    Kinda slow, kinda slow... now I don't see the dark text at all. Hmmm, reboot, restart. I get past the first few screens, but then it's just blank. I kick out the disk and reboot to... nothing. Now it won't boot to the hard drive. I reboot from floppy; the hard drive isn't there. Hmmm. Back to Ranish. The first partition (FreeDOS) has no data files in it, and is the wrong size, extending into the second partition's space. The second partition isn't there, and there is unused space between the first and third partitions. Exit Ranish and go into PTS Disk Editor. Luckily I wrote down the hex code for the partition table, so I fix it back the way it should be. What was Solaris doing overwriting my MBR and partition table? I hadn't gotten to the point in the install yet where I would set up a new partition. Reboot, still nothing. Restore the MBR (I saved that too, plus the 62 sectors following). Reboot. Still nothing. Oh yeah, my FAT from partition 1 is gone, and who knows what else. What was Solaris doing overwriting that partition?
    Back to Ranish. Set partition 2 (Windows 2000) as my boot partition. Reboot. Phew! At least that comes up. I'm not too happy with Solaris trashing my MBR, partition table, and first partition. Yeah, yeah, yeah, it won't be too hard to restore. I'll assume my Linux is still working, although I haven't tried it yet. I think I'll wait to install Solaris until I have a new, clean disk, and I'll be sure to disconnect this one until the install is complete. Sorry Sun, your once-solid reputation has bounced down a few steps on this one.

    This post on ranish.com does confirm that Solaris overwrites the MBR. Please let me know what you did to overcome the problem, because I want to install Solaris on one hard drive partition with XP on the other. Thanks,
    Utpal
    Can I boot Solaris 9?
    =====================
    Yes. You can. But you need to keep the following in mind:
    1. Don't use the 'Installation CD'. It is not required, and you might have problems if you use it. Start with Software CD 1.
    2. If your software installation/booting fails with an ACPI error, follow the instructions at:
    http://access1.sun.com/FAQSets/Solarisx86FAQs.html (Q12)
    (I did not get this error.)
    3. After installing CD 1, you need to boot from the hard disk (Solaris 9 will have overwritten your MBR; don't worry about this now, just proceed with the installation). Once it has booted off the disk, after going through a few initializations again, it will prompt you to insert the second CD. Insert CD 2 and complete the installation.
    4. Now you can restore the RPM (Ranish Partition Manager) MBR.
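    As a precaution before this kind of install, the MBR and the sectors up to the first partition can be saved and restored with dd, much as the earlier poster did by hand; a minimal sketch (the device name is illustrative):
    # Save track 0: the MBR (sector 0) plus the 62 sectors that follow it
    dd if=/dev/hda of=track0_backup.bin bs=512 count=63
    # Put it back if an installer overwrites it
    dd if=track0_backup.bin of=/dev/hda bs=512 count=63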

  • Can I use gpt to recreate/unerase a partition table? (Rebuild the GPT/GUID partition table?)  I don't want to do FILE recovery.

    (Yes, I've googled a bunch and read threads like this one already.)
    Can I use gpt or some other app to recreate/unerase a partition table?  That is, how can I rebuild a disk's GPT/GUID partition table?)  I don't want to do FILE recovery.
    What happened: Instead of erasing a single partition off a disk with many partitions, the entire partition table was erased (using Disk Utility, w/o deleting the underlying files).  Somehow the "Erasing a disk deletes all data on all its partitions." warning message was missed.
    I have a copy of the output of df, with the number of blocks in each partition, from just prior to the erasure, so I should be able to recreate the GPT/GUID partition table.  Editing the GPT with a hex editor is not feasible.  Simply recreating the partitions with Disk Utility will overwrite the key filesystem tables on each partition, and I don't want to do that, plus Disk Utility doesn't allow me to specify exact partition sizes anyway.
    Surely there's an app (other than emacs' hexl-mode!) for rebuilding, recreating, or unerasing a partition table when the partition sizes and order are known? I've looked at the advertising for a bunch of recovery software and none of them clearly indicate that they will do what I want.
    I guess I can try using gpt on a copy of the reformatted drive I've made with dd, and see what happens.  But perhaps someone knows of a tool that should do what I need, or knows if gpt is that tool or not.
    There are answers and tools that will do FILE recovery - they search for files and recover the ones that aren't fragmented or deleted. As far as I can tell, they just look for files on the disk, and don't pay much, if any, attention to the filesystem info or directory hierarchy, which in this case is valuable. Of course I could send it in to DriveSavers, or the like. But none of that seems necessary, and the scavenging file recovery apps won't do the job well.
    E.g. some are mentioned here:
    I don't want to do FILE recovery.
    Thanks for any help.
    The links in this post are to pages describing the underlined term, e.g. the man pages for df and gpt.
    The df output includes:
    Filesystem  512-blocks  Used  Available  Capacity  Mounted on
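    On the gpt idea: it can in principle re-add table entries at known offsets without touching partition contents. A minimal sketch, assuming the start blocks and sizes have been worked out from the saved df output and that the work is done on the dd copy at /dev/disk3 (all numbers here are illustrative):
    # Inspect whatever table is currently on the disk
    sudo gpt -r show /dev/disk3
    # Re-create one entry per lost partition at its original location
    # (entries Disk Utility created may need `gpt remove` first)
    sudo gpt add -b 409640 -s 195312500 -t hfs /dev/disk3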

    Aperture has the ability to work with files in their existing location. They are called "referenced masters." When you import images, you should select the "In their current location" in the "Store Files:" drop down box. Have a read of the documentation for full specifics. Unsure how you can resolve your duplication; might be some work but next time have a read of the manual first
    Information for versions is stored in the Aperture database (library file). The masters can be inside the library file itself, or they can be somewhere else.

  • Partition a Non-Partitioned Table in 11.2.0.1

    Hi Friends,
    I am using Oracle 11.2.0.1 Oracle Database.
    I have a table with 10 Million records and it's a Non Partitioned Table.
    1) I would like to partition the table (by range) without creating a new table; I should do it on the existing table itself. (Not sure whether DBMS_REDEFINITION is the only option, or can I use ALTER TABLE?)
    2) Add one partition which will have data for the unspecified range.
    Please let me know the inputs on the above
    Regards,
    DB

    Hi,
    "What is the advantage of using DBMS_REDEFINITION over the normal method (create the partitioned table, grant access, insert records)?"
    You can't just add a partition to a non-partitioned table; you need to recreate the existing table to have it partitioned. The advantage of dbms_redefinition is that it is an online operation to re-create an existing table, and your data remains available during the table recreation.
    "I would like to know how to copy the object privileges, constraints, and indexes from the non-partitioned table (sales) to the partitioned table (sales_part) which I am creating. Will DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS help with this?"
    First you need to tell us what method you are using to partition the existing table. If you are using dbms_redefinition, you really don't need to worry about triggers, indexes, or constraints at all. Just follow any document which explains how to use dbms_redefinition. Dr. Tim has done a lot of work for dummies like us by writing documents; follow this one:
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    "If so, can I use DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS alone for copying the table dependents after I create the partitioned table, or should it be used along with DBMS_REDEFINITION.START_REDEF_TABLE only?"
    See the document mentioned above.
    Salman
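    On the COPY_TABLE_DEPENDENTS question specifically: it belongs between START_REDEF_TABLE and FINISH_REDEF_TABLE in the redefinition flow. A minimal sketch, assuming SALES/SALES_PART as in the post and that SALES_PART was pre-created with the desired range partitioning:
    DECLARE
      num_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'SALES', 'SALES_PART');
      -- Clones indexes, triggers, constraints, and grants onto SALES_PART
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(USER, 'SALES', 'SALES_PART',
        copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
        num_errors   => num_errors);
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'SALES', 'SALES_PART');
    END;
    /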

  • How do I make NTFS format for partition in disk utility ?

    Hi, how would I format an external hard drive partition to NTFS for Windows? There's no option in Disk Utility, only MS-DOS (FAT), ExFAT, Free Space, Mac OS Extended (Journaled), and Mac OS Extended (Journaled, Case-sensitive).
    The other partition would be Mac OS Extended (Journaled). Does it have something to do with the partition scheme? I'm running Mountain Lion.

    anonymous4a wrote:
    ok i wasn't clear,
    That would have helped.
    I want to make a backup of my windows partition and mac partition on my macbook pro retina on to ONE external hard drive.
    OK, so you have OS X and Windows in Boot Camp on your Retina MBP. You want to back up both to one drive, and this is what you should do.
    Get another powered external drive slightly larger than your internal boot drive.
    Use Disk Utility to format this external drive with a GUID partition table and two partitions, each slightly larger than your MacintoshHD and BootCamp partitions:
    the first partition formatted OS X Extended (Journaled) (MacintoshHD2) and the second one exFAT (WindowsBootcamp2). The OS X partition goes first, at the top.
    Download these two programs: Carbon Copy Cloner and Winclone 3.
    Use CCC to clone the OS X MacintoshHD to MacintoshHD2, and use Winclone 3 (runs in OS X) to back up the Windows BootCamp partition to WindowsBootcamp2.
    The OS X clone on the external drive is bootable if you hold the Option key down while booting the computer.
    The Winclone 3 partition on the external drive is not bootable; it's a backup only, as Windows is copy protected and will invalidate itself if it were run from there.
    Warnings:
    Don't use CCC to clone BootCamp; it's not designed for that and won't restore to a runnable Windows.
    There is no software that can do both partitions at the same time and be bootable, restore properly to a different-sized drive, or restore the RecoveryHD partition. Some have tried with other software and had problems with lost drive space.
    Both partitions have to be cloned/backed up separately, as there is some intelligence involved and the user needs the option to adjust partition space.
    Also, back up your files in Windows (and OS X) to a regular external exFAT drive that was formatted on the oldest Windows machine you're going to connect to; OS X doesn't do exFAT quite correctly, so Windows can't always read it. The exFAT partition used by Winclone likely won't be readable by a Windows PC.
    Most commonly used backup methods
    Drives, partitions, formatting w/Mac's + PC's
    Again, you can't run Windows from an external drive; it's copy protected and ties itself to the hardware IDs, so it's essentially worthless to use NTFS on the external WindowsBootcamp2 partition.
    All you need is a partition format that can handle files over 4GB, and exFAT is ideal and free to use. FAT32 can't handle 4GB+ files and is being phased out.
    http://arstechnica.com/information-technology/2013/06/review-is-microsofts-new-data-sharing-system-a-cross-platform-savior/
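    The Disk Utility formatting step above can also be done from Terminal; a minimal sketch with diskutil (the disk identifier and sizes are illustrative, so check `diskutil list` first):
    # One GUID-partitioned external drive: an HFS+ clone target plus an exFAT
    # target; "R" gives the last partition all remaining space
    diskutil partitionDisk disk2 2 GPT JHFS+ MacintoshHD2 500G ExFAT WindowsBootcamp2 R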

  • Best practice for replicating Partitioned table

    Hi SQL Gurus,
    Requesting your help on the design consideration for replicating a partitioned table.
    1. 4 Partitioned tables (1 master table with foreign key constraints to 3 tables) partitioned based on monthly YYYYMM
    2. 1 table has a XML column in it
    3. Monthly partition switch to remove old data; since the tables have foreign key constraints, the constraints are disabled until the switch is complete
    4. 1 month partitioned data is 60 GB
    having said the above, wanted to create a copy of the same tables to a different servers.
    I can think of:
    1. Transactional replication, but I am worried about the XML column, snapshot size, and whether the ALTER ... SWITCH will do the same thing on the subscriber or turn into row-by-row deletes.
    2. Log shipping with standby restored every 15 minutes, but that covers the entire database, and I have another monthly partitioned table worth 250 GB.
    3. Replicating the partitioned table as non-partitioned; in that case, how would the ALTER ... SWITCH work? Is it possible to ignore deletes when setting up the replication?
    4. SSIS or a stored procedure moving data on a daily basis.
    5. Backup and restore on a daily basis, but this will not work once the source partition is removed.
    Ganesh

    Please refer to
    http://msdn.microsoft.com/en-us/library/cc280940.aspx

  • Issue with updating partitioned table

    Hi,
    Has anyone seen this bug with updating partitioned tables?
    It's very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join has multiple join conditions, you're updating the partition column, that column isn't the first column in the primary key, and the table contains a bit field. We've tried changing just one of these conditions at a time, and the bug disappears.
    We've tested this on 15.5 and 15.7 SP122, and the error occurs in both.
    Here's the test case. It does the same operation on a partitioned table and a non-partitioned table, but the partitioned table fails with "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately, when it happens on a replicated table, it takes down the rep server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
            changed         bit
        )
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
      )
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
      )
    LOCK DATAROWS
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    wrt to replication - it is a bit unclear, as not enough information has been stated to point out what happened. I also am not sure that your DBAs are accurately telling you what happened - and they may have made the problem worse by not knowing themselves what to do; e.g. 'losing' the log points to the fact that someone doesn't know what they should do. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
    wrt to ASE versions, I suspect that if there are any differences, they may have to do with endian-ness and not the version of ASE itself. There may be other factors... but I would suggest the best thing would be to open a separate message/case on it.
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • Datapump skipping partitioned tables in the database

    I have run expdp on Oracle 10.2.0.4.0 on the AIX 5.6 platform. The export runs well, exporting rows in the database, but when it comes to partitioned tables it exports no rows for any of them. When I run a normal exp/imp, the partitioned tables are exported with all their rows.
    I used the following commands:
    expdp system/****** dumpfile=export_data.dmp directory=DATA_PUMP_DIR full=y logfile=export_dump.log
    Output for expdp on partitioned table:
    . . exported "SCOTT"."DEPT":"DEPT_2003_P1" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P10" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P11" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P12" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P2" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P3" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P4" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P5" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P6" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P7" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P8" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P9" 0 KB 0 rows
    And for exp:
    exp system/****** file=export_dump.dmp full=y log=export_log1.log
    Result from the export log for partitioned tables:
    . . exporting partition DEPT_2005_P1 881080 rows exported
    . . exporting partition DEPT_2005_P2 1347780 rows exported
    . . exporting partition DEPT_2005_P3 2002962 rows exported
    . . exporting partition DEPT_2005_P4 2318227 rows exported
    . . exporting partition DEPT_2005_P5 3122371 rows exported
    . . exporting partition DEPT_2005_P6 3916020 rows exported
    . . exporting partition DEPT_2005_P7 4217100 rows exported
    . . exporting partition DEPT_2005_P8 4125915 rows exported
    . . exporting partition DEPT_2005_P9 1913970 rows exported
    . . exporting partition DEPT_2005_P10 1100156 rows exported
    . . exporting partition DEPT_2005_P11 786516 rows exported
    . . exporting partition DEPT_2005_P12 822976 rows exported
    I am not sure about this behaviour from Data Pump. My database is more than 800GB, and we want to migrate it from AIX to Linux.
    Thanks

    Sorry, I just copied and pasted some extracts from my exp and expdp logs.
    For testing purposes I tried to run a Data Pump export of only one partitioned table in the database, and it goes through; but when I do the same in a full Data Pump export, these partitioned tables are exported with no rows.
    Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 August, 2011 12:18:47
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** dumpfile=DEPT.dmp tables=scott.dept logfile=dept1.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 48.50 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/RLS_POLICY
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SCOTT"."DEPT":"DEPT_2009_P6" 1.452 GB 7377736 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P7" 1.363 GB 6935687 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P6" 1.304 GB 6656096 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P7" 1.410 GB 7300618 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P7" 1.296 GB 6641073 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P6" 1.328 GB 6863885 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P6" 1.158 GB 6568075 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P5" 1.141 GB 5801822 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P5" 1.162 GB 6027466 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P7" 1.100 GB 6214680 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P6" 1.106 GB 5762303 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P5" 1.133 GB 5859492 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P5" 1.001 GB 5664315 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P5" 1.023 GB 5229356 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P8" 1.078 GB 5549666 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P8" 940.3 MB 5171379 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P8" 989.0 MB 4920276 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P8" 918.6 MB 4553523 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P6" 821.0 MB 5220879 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P4" 766.6 MB 3832262 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P8" 747.9 MB 4753538 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P7" 741.8 MB 4708242 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P4" 734.2 MB 3713567 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P7" 661.4 MB 4217100 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P8" 647.1 MB 4125915 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P4" 677.8 MB 3428887 rows
    I also tried to run a normal schema-by-schema export with the usual exp system/password command and got my dump file, which is about 300GB. When I run the imp system/password command and specify fromuser=<system > and touser=<schemas_in_the_dumpfile> separated by commas, it just comes up with this message:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
    Import terminated successfully without warnings.
    No tables are exported.
    If I instead specify imp system/password file=dept_export.dmp full=y log=dept_imp.log with the same dump file, it imports data from the dump file into my database.
    I am not sure what could be wrong with my dump file or my imp command and its parameters.
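    For reference, a schema-level import names the owning schemas on both sides of fromuser/touser; a minimal sketch, assuming SCOTT owns the tables in the dump (the schema name is illustrative):
    imp system/****** file=export_dump.dmp fromuser=scott touser=scott log=import_schema.log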

  • Tuning SQL | Partition Table | MJC

    All good hearted people -
    Problem: this SQL runs forever and returns nothing when the stats are stale. If I collect table-level stats (dbms_stats) on these partitioned tables, it runs as normal again (< 2 minutes).
    I see a Merge Join Cartesian (MJC) in the explain plan when it runs badly. After the stats are gathered, the MJC disappears from the plan and things go back to normal.
    Also, if I convert one of those partitions into a regular table (amms partition 2010-03-16) and join it to the other partitioned table's (cust) partition, this works fine.
    Note: after every load we run partition-level stats on these tables (not table-level stats).
    My question is: why am I getting an MJC, and how do I solve this issue?
    <code>
    select aln.acct_no as acct_no, aln.as_of_dt, max(acm.appno) as appno, count( * )
    from amr.amms aln, acr.cust acm        -- both tables are range partitioned by date
    where acm.acctno = aln.acct_no
    and acm.acctno > 0
    and acm.as_of_dt = date '2010-03-16'   -- partition key on cust table, < 2M rows
    and aln.as_of_dt = date '2010-03-12'   -- partition key on amms table, < 2M rows
    group by aln.acct_no, aln.as_of_dt
    having count( * ) = 1
    </code>
    Env: Oracle 10g | 10.2.0.4 | ASM | 2 node RAC | Linux x86 | Archivelog | Partition | Optimizer Choose |

    and acm.as_of_dt = date '2010-03-16'
    and aln.as_of_dt = date '2010-03-12'
    Not valid syntax!
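    On the stats side, the symptom (cured by table-level stats) usually means the global statistics are stale while only the partition statistics are fresh, so the optimizer's cardinality estimates collapse and it picks the Cartesian join. A minimal sketch of refreshing both levels after a load (owner/table names from the post):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'AMR',
        tabname     => 'AMMS',
        granularity => 'GLOBAL AND PARTITION',  -- refresh global stats too
        cascade     => TRUE);                   -- include index stats
    END;
    /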
