Correcting a bad block on ext4 with a GPT partition table

Hello,
I ran a SMART extended offline test today, which came back with a read failure at a bad block:
# 1  Extended offline    Completed: read failure       30%      8363         3212759896
This is my first run-in with a bad block, and since these drives are big and relatively new, I want to be proactive and fix any problems as they arise. Here is my setup:
* I have 2x 2TB HDDs of the same make and model, with device nodes /dev/sdc and /dev/sdd. /dev/sdc is the one with the error.
* These two disks are linked via a Linux RAID 1 array under /dev/md0 which is then mounted on /storage.
* Both drives have only 1 partition under a GUID Partition Table (GPT)
I've looked around to try to find info on fixing bad blocks, and I came across this: smartmontools.sourceforge.net/badblockhowto.html
However, it seems to be outdated and geared toward tools like fdisk (which I cannot use for GPT) and the ext2/3 filesystems (although, due to backwards compatibility, I'm sure it applies to ext4 as well), and a lot of the commands give errors like "Couldn't find valid filesystem superblock."
Can someone point me in the right direction as to how I can fix this issue?
EDIT:
My noob is showing. I got the commands above to work, and when I check to see which file is using the bad block, it shows this (after all the calculations involved, the filesystem block was 401594731):
debugfs:  icheck 401594731
Block   Inode number
401594000       <block not found>
So I'm assuming that there isn't a file assigned to it (empty space?). But then, when I use dd to read from it, it seems to read just fine:
sudo dd if=/dev/md0 of=my.block skip=401594731 bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0222587 s, 184 kB/s
I think it's able to read it since the other disk in the RAID1 array doesn't have the bad block. But I just want to make absolutely sure that there is no file assigned to that block before I nuke it. Given the above information, would it be safe to remove this block from service?
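For anyone checking my math, the conversion from the SMART LBA to the ext4 block is just the arithmetic from the HOWTO. A sketch, assuming the partition starts at sector 2048 (verify with "parted /dev/sdc unit s print") and a 4096-byte filesystem block size:
# (LBA from the SMART log - partition start sector) * 512 bytes per sector / 4096 bytes per block
echo $(( (3212759896 - 2048) * 512 / 4096 ))
# prints 401594731
One caveat: with newer mdadm metadata formats (1.1/1.2) the array data starts at an offset inside the partition, so the block number as seen through /dev/md0 shifts by that amount; "mdadm --examine /dev/sdc1" shows the data offset.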
Last edited by XtrmGmr99 (2012-01-26 01:17:51)

Yes, I think the block is not in use. You can do
debugfs: testb 401594731
which will state it clearly ("not in use" vs "marked in use").
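If testb says "not in use", the usual HOWTO fix is to write to the sector so the drive remaps it. A rough sketch, not a recipe; double-check every number first, since a wrong seek or LBA destroys good data:
# Either have md rewrite the mirror; unreadable sectors are
# rebuilt from the good disk as the pass finds them:
echo repair | sudo tee /sys/block/md0/md/sync_action
# ...or overwrite the raw LBA from the SMART log on the bad disk
# (hdparm really does require that long confirmation flag):
sudo hdparm --write-sector 3212759896 --yes-i-know-what-i-am-doing /dev/sdc
# Afterwards, re-run the self-test and watch the reallocated count:
sudo smartctl -t long /dev/sdc
sudo smartctl -A /dev/sdc | grep -i realloc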

Similar Messages

  • How to find block free space and PCTUSED FOR THE TABLE

    Hi all,
    Due to performance issues in my database, my management asked me to reset the PCTUSED and PCTFREE values, and my doubts are:
    1) How to find the current PCTUSED and PCTFREE values.
    2) How to find block free space and PCTUSED for the table.
    Please help me out regarding this.
    Regards,
    Vamsi.

    1) Version is 10.2.0.4
    2) Output of query:
    TABLESPACE    EXTENT_MANAGEMENT  ALLOCATION_TYPE  SEGMENT_SPACE_MANAGEMENT
    SYSTEM        LOCAL              SYSTEM           MANUAL
    UNDOTBS1      LOCAL              SYSTEM           MANUAL
    SYSAUX        LOCAL              SYSTEM           AUTO
    TEMP          LOCAL              UNIFORM          MANUAL
    USERS         LOCAL              SYSTEM           AUTO
    UNDOTBS2      LOCAL              SYSTEM           MANUAL
    INS           LOCAL              SYSTEM           AUTO
    CONFTBS       LOCAL              SYSTEM           AUTO
    REINS         LOCAL              SYSTEM           AUTO
    ANALYST       LOCAL              SYSTEM           AUTO
    BI            LOCAL              SYSTEM           AUTO
    INTRFC        LOCAL              SYSTEM           AUTO
    COGNOS        LOCAL              SYSTEM           AUTO
    TS_INDX       LOCAL              SYSTEM           AUTO
    TS_CHOLAWEB   LOCAL              SYSTEM           AUTO
    TS_DASBOARD   LOCAL              SYSTEM           AUTO
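    For question 1, the current values live in the data dictionary. A minimal sketch (login and table name are placeholders):
    sqlplus -s scott/tiger <<'EOF'
    -- PCT_USED comes back NULL for tables in ASSM tablespaces
    -- (SEGMENT_SPACE_MANAGEMENT = AUTO in the listing above),
    -- because ASSM ignores PCTUSED entirely.
    SELECT table_name, pct_free, pct_used
    FROM   user_tables
    WHERE  table_name = 'MYTABLE';
    EOF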

  • Issue with updating partitioned table

    Hi,
    Has anyone seen this bug with updating partitioned tables?
    It's very esoteric - it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join has multiple joins, you're updating the partitioned column (which isn't the first column in the primary key), and the table contains a bit field. We've tried changing just one of these features and the bug disappears.
    We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
    Here's the test case - it does the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately when it happens on a replicated table - it takes down rep server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
            changed         bit
        )
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
      )
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    Wrt replication - it is a bit unclear, as not enough information has been given to point out what happened. I am also not sure that your DBAs are accurately telling you what happened - they may have made the problem worse by not knowing themselves what to do; e.g. 'losing' the log points to the fact that someone doesn't know what they should be doing. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
    Wrt ASE versions, I suspect that if there are any differences, it may have to do with endian-ness and not the version of ASE itself. There may be other factors..... but I would suggest the best thing would be to open a separate message/case on it.
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • I have a bad block. What and how do I save before reinitializing.

    Tech Tools tells me that I have a bad block and need to reinitialize the HD. I have been told that all file information will be erased, while programs/applications will not be harmed, so I need to back up my User Profiles (my and my wife's). I have 2 questions:
    1. What else besides my user profile should I save (or, to put it another way, what files are NOT in my user profile, and where would I find them)?
    2. Once I have reinitialized the HD (supposedly the disc utility should guide me through this), will simply dragging saved user profile (from the thumb drive) onto the HD automatically reinstall all my settings? Must I laboriously redo each program's preferences?

    In Disk Utility>Security options>Zero out data
    Boot from the install disk > select language > DON'T select Continue > instead go to Disk Utility in the MenuBar at top.
    http://support.apple.com/kb/TA24002
    Don't know if this would help, but for the Mail issue, you could try "Rebuild" in Mailbox from Menu > (select the mailbox) > Rebuild
    The AW problem might or might not be related to corruption from the bad blocks that weren't mapped out by zeroing out, or the data that was already corrupted earlier from those bad blocks. After you zero out, you could try re-installing AW. Realize, zeroing out will wipe your drive. If you do this, be ready to re-install everything.
    Look over what Limnos wrote earlier. Zeroing and re-installing, clean as much as possible may prove to be the real solution.
    Maybe the original damage was only minor, so, before you do this, you could first try repairing the drive (from Disc Utility booted from the disc, "Repair Disc.") Then, if everything is successful, when you are _booted back normally_, repair Permissions. Don't repair them booted from the disc. This might or might not resolve the problem with AW and Mail. At least, if these are the only issues you are experiencing, give this a try before taking the drastic step of a clean install, which involves a lot of work. You can always do this later if you keep uncovering new issues.
    See how this goes first and post back with the results. If you take the zero out path, I think you will need to re-format the drive.
    EDIT one more thing. You definitely don't want to zero the drive until you have some kind of clone/backup from which to move everything back, or you will lose all your data: mail, documents, music, etc.
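    If you prefer the command line over Disk Utility, the zeroing step can also be done from Terminal. A sketch; the disk identifier is a placeholder, and this erases the entire drive:
    diskutil list                       # find the right disk identifier first
    diskutil secureErase 0 /dev/disk0   # level 0 = single-pass zero-out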
    Message was edited by: WZZZ

  • Join 2 tables with each other and with a third table

    I'm using Discoverer 10.1.2.0.2, and I have two tables that are joined together, but each of them is also joined with a third table.
    For example, I have two tables X and Y that are joined to each other, where X is the master and Y is the detail, but I also have a third table Z to which both are joined, where Z is the master and each of them (in its relevant join) is the detail.
    My question is: if I build a report containing the three tables, will it be a problem?
    I mean, will the data and the connections between them be OK?
    I have already built a few reports and they work fine, but I want to know if anyone knows about problems that might occur in this situation, because Y is sometimes a detail in its join to X, but at other times it can be something else (because of the join with Z that they both have).


  • Partition elimination on views and joins with other partitioned tables

    I have a bunch of tables that are daily partitioned and others which are not. We constantly join all these tables. I have noticed that partition elimination doesn't happen in most cases, and I want some input or pointers on these.
    Case 1
    We have a view that joins a couple of partitioned tables on their id fields; the partition key is a timestamp with local time zone.
    TABLEA
    tableaid
    atime
    TABLEB
    tablebid
    tableaid
    btime
    The view basically joins on tableaid (a.tableaid = b.tableaid(+)) and a bunch of other non-partitioned tables. atime and btime are the individual partition keys in the tables, and these times do not match up the way the ids do; in other words, there is a little bit of correlation, but they can be very different.
    When I run a query against the view providing a time range for btime, I see partition elimination on tableb in the explain plan, with KEY in Pstart/Pstop. But it's a full table scan on tablea. I was hoping there would be some kind of partition elimination here, since it's also partitioned daily on the same datatype, timestamp with local time zone.
    Case 2
    I have a couple of more partitioned tables
    TABLEC
    tablecid
    tablebid
    ctime
    TABLED
    tabledid
    tablebid
    dtime
    As you can see these tables are joined with tablebid and the times here generally correlate to tableb's timestamp as well.
    Sub Case 1
    When I join these tables to the view and give a time range on btime, I see partition elimination happening on tableb but not on tablea or any of the other tables.
    Sub Case 2
    Then I got rid of the view and wrote a query that is similar to the view, where I join on tableaid (tablea and tableb), then on tablebid (tableb, tablec and tabled) and a few other tables, and execute the query with some time range on btime, and I still see that partition elimination happens only on tableb.
    I thought that if other tables are also partitioned on a similar key, partition elimination should happen? Or what am I missing here that is preventing partition elimination on the other tables?

    Performance is of utmost importance, and partition pruning is going to help with that. I guess that's what I'm trying to achieve.
    To achieve partition elimination on tablec, d, etc., I'm doing an outer join with btime, and that seems to work. Also, since most of the time after the partition elimination I don't need a full table scan (the period I will be querying will usually be small), I also created a local index on the id field that I'm using to join, so that it can do a "TABLE ACCESS BY LOCAL INDEX ROWID". This way it should perform better than a global index, since the index traversal path should be small when compared to the global index.
    Of course, I still have a problem with tablea not being pruned, since I cannot do an outer join on two fields in the same table (id and time). So I might just include the time criteria again, and maybe increase the time range a little compared to what the actual user submitted, to try not to miss rows.
    Any suggestions are always welcome.
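    A sketch of the workaround described above, repeating the time predicate on each partitioned table with a widened range (table names, columns, and dates are placeholders):
    sqlplus -s user/pass <<'EOF'
    -- btime is the range the user supplies; atime only roughly
    -- correlates, so pad its range by a day on each side. The extra
    -- predicate lets the optimizer prune TABLEA's daily partitions too.
    SELECT *
    FROM   tablea a, tableb b
    WHERE  a.tableaid = b.tableaid
    AND    b.btime BETWEEN TIMESTAMP '2011-01-10 00:00:00' AND TIMESTAMP '2011-01-11 00:00:00'
    AND    a.atime BETWEEN TIMESTAMP '2011-01-09 00:00:00' AND TIMESTAMP '2011-01-12 00:00:00';
    EOF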

  • I want to install OS X Mavericks on my MacBook Pro, which does not have a GUID partition table

    I want to install OS X Mavericks on my MacBook Pro, which had both Mountain Lion and Windows 8.1. Its disk is not in GUID Partition Table format, so I couldn't install Mavericks. Is there any way to change it to GUID Partition Table format and install Mavericks without losing Windows from my hard disk?

    Open the App Store and upgrade iPhoto to the Mavericks version.
    iWork and iLife for Mac come free with every new Mac purchase. Existing users running Mavericks can update their apps for free from the Mac App Store℠. iWork and iLife for iOS are available for free from the App Store℠ for any new device running iOS 7, and are also available as free updates for existing users. GarageBand for Mac and iOS are free for all OS X Mavericks and iOS 7 users. Additional GarageBand instruments and sounds are available for a one-time in-app purchase of $4.99 for each platform.
    The iWork apps are free with a new iOS device since 1 SEP 2013. They are free with a new Mac since 1 OCT 2013. They are also free with the upgrade to OS X Mavericks 10.9 if you had the previous version installed when you upgraded.
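    Back on the original partition-table question: you can at least confirm from Terminal which scheme the disk currently uses (a sketch; disk0 is an assumption, check diskutil list first):
    diskutil info disk0 | grep -i scheme   # GUID_partition_scheme vs FDisk_partition_scheme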

  • SQL Loader, direct path and indexes on partition tables

    Hi,
    I have a big partitioned table and I have two indexes on it.
    When I use SQL Loader to upload data to my table using OPTIONS (DIRECT=TRUE) UNRECOVERABLE, it takes almost half an hour to load just one record!!
    When I remove OPTIONS (DIRECT=TRUE), it takes just two seconds to upload the same one record.
    Am I missing anything? Can I use direct path load on indexed partitioned tables and have a reasonable load time?
    A scheduled external job loads almost 100,000 records into this table every hour, and I am trying to make the SQL*Loader load as fast as possible.
    Any help would be appreciated,
    Alan

    Hi Alan,
    How big is this table and what sort of indexes are they?
    An index update using SQL*Loader unrecoverable direct path load is achieved by an isolated sort followed by a nologging merge of the old index and the new mini-index into a new index segment (this according to seminar notes by Jonathan Lewis). This will conceivably take a long time for a large table / large index.
    Performance improvements? Are you loading all records into a new partition?
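    For comparison, a sketch of both invocations (credentials, control file, and names are placeholders); if the hourly feed always lands in one partition, direct path into just that partition keeps the index merge small:
    sqlldr userid=scott/tiger control=load.ctl log=conv.log               # conventional path: row inserts, indexes maintained incrementally
    sqlldr userid=scott/tiger control=load.ctl log=dir.log direct=true    # direct path: index merge deferred to the end of the load
    # in load.ctl, the load can target a single partition, e.g.:
    #   LOAD DATA INFILE 'feed.dat' APPEND INTO TABLE bigtab PARTITION (p_today) ...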
    Cheers,
    Colin

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, though I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible), and how you are taking the export, please?
    Did you try taking a SQL trace of the export process to see what is going on behind the scenes? This is an introduction, if you need one:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
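    To pull the DDL the previous poster asked for, something like this works (a sketch; login, owner and table name are placeholders):
    sqlplus -s system/manager <<'EOF'
    SET LONG 1000000 PAGESIZE 0 LINESIZE 200
    SELECT dbms_metadata.get_ddl('TABLE', 'BIGTABLE', 'SCOTT') FROM dual;
    EOF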

  • Schema level with particular partition tables

    Hi All,
    I need to export all objects (i.e. the schema-level option), but I need to exclude particular partitions of a table.
    i.e. I need to EXCLUDE particular partition data from the schema-level backup.
    Kindly suggest how I can achieve the above.
    Thanks & Regards
    Sami
    Edited by: Sami on Jul 6, 2012 4:41 PM

    Hi All,
    I have used the following option to export the schema with partitioned tables (password masked):
    expdp YYYYY/********@devchn schemas=YYYYY EXCLUDE=TABLE:"IN('ACCOUNT_STATEMENT_HISTORY','CUSTOMER_IMAGE','XAPI_ACTIVITY_HISTORY','GL_ACCOUNT_SUMMARY$AUD','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART1','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART2','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART3','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART4','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART5','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART6','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART7','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST1','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST2','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST3','TXN_JOURNAL:TRX_JOURN_PART1','TXN_JOURNAL:TRX_JOURN_PART2')" directory=DUMPDIR1 dumpfile=MSB_06-July-2012.dmp logfile=exp_MSB_06-July-2012.log
    but it does not work..
    Log file:
    . . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART3"       1.801 GB 6650371 rows
    . . exported "YYYYY."."EVENT_JOURNAL"                       1.533 GB 15930785 rows
    . . exported "YYYYY."."LOAN_ACCOUNT$AUD"                    1.287 GB 6339368 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART6"  1.212 GB 5860272 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART5"  1.102 GB 5363721 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST3"  1.055 GB 5530280 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART4"  929.4 MB 4513889 rows
    . . exported "YYYYY."."DP_ACCT_INTEREST_HISTORY"            925.0 MB 7002553 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART2"  909.8 MB 4441940 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART3"  768.3 MB 3786671 rows
    . . exported "YYYYY."."ACCOUNT$AUD"                         709.8 MB 4348526 rows
    . . exported "YYYYY."."EVENT_CHARGE_JOURNAL"                663.9 MB 5303756 rows
    . . exported "YYYYY."."DP_ACCT_CYCLE_STAT_HIST"             655.4 MB 4389715 rows
    . . exported "YYYYY."."DP_ACCT_PERIOD_CYCLE_STAT_HIST"      569.1 MB 3733176 rows
    . . exported "YYYYY."."DP_ACCT_CHARGE_CYCLE_STAT_HIST"      535.4 MB 3712447 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART7"  473.3 MB 2238240 rows
    . . exported "YYYYY."."WF_WORK_ITEM_HISTORY"                474.4 MB 1887956 rows
    . . exported "YYYYY."."OFFLINE_OUTBOUND_TXN_LOG"            478.4 MB   32803 rows
    . . exported "YYYYY."."OPERATIONAL_SERVICE_ERROR_LOG"       291.8 MB   55333 rows
    . . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART1"       350.3 MB 1352267 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_STAT"                313.6 MB  414942 rows
    . . exported "YYYYY."."CORRESPONDENCE_QUEUE_BK"             295.6 MB 1816383 rows
    . . exported "YYYYY."."EXT_TXN_JOURNAL"                     255.6 MB 1234290 rows
    . . exported "YYYYY."."PERSONAL_CUSTOMER$AUD"               244.7 MB 1018705 rows
    . . exported "YYYYY."."GL_ACCOUNT_STAT"                     228.3 MB  873915 rows
    . . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART2"       228.8 MB  855180 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST1"  210.5 MB 1119932 rows
    . . exported "YYYYY."."USER_ROLE_ALERT"                     180.9 MB 3059052 rows
    . . exported "YYYYY."."CUSTOMER$AUD"                        172.7 MB 1005897 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST2"  162.3 MB  837967 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_SUMMARY"             160.7 MB  414942 rows
    . . exported "YYYYY."."CUSTOMER_IMAGE_HISTORY"              142.0 MB   10085 rows
    . . exported "YYYYY."."SYSUSER$AUD"                         137.9 MB  986069 rows
    . . exported "YYYYY."."TXN_BATCH_ITEM$AUD"                  143.9 MB  893961 rows
    . . exported "YYYYY."."ALERT"                               130.8 MB  652573 rows
    . . exported "YYYYY."."EXT_DP_ACCOUNT_SUMMARY"              132.3 MB  336115 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART1"  132.7 MB  681899 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_INTEREST"            113.0 MB  835126 rows
    . . exported "YYYYY."."EXT_DP_ACCOUNT_INTEREST"             112.3 MB  625117 rows
    . . exported "YYYYY."."GL_ACCOUNT_SUMMARY"                  103.5 MB  873885 rows
    . . exported "YYYYY."."DP_ACCT_INTEREST_TIER_HISTORY"       102.4 MB 1413589 rows
    . . exported "YYYYY."."GL_ACCOUNT_MONTHLY_STAT"             98.52 MB  852631 rows
    . . exported "YYYYY."."GL_ACCOUNT_QUARTERLY_STAT"           98.45 MB  852630 rows
    . . exported "YYYYY."."GL_ACCOUNT_YEARLY_STAT"              98.47 MB  852630 rows
    . . exported "YYYYY."."LN_ACCT_REPMNT_EVENT"                91.53 MB  902496 rows
    . . exported "YYYYY."."WF_WORK_ITEM_CHKLST_RESP"            83.03 MB  538855 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT$AUD"                 79.53 MB  450801 rows
    . . exported "YYYYY."."ACCOUNT_CHEQUE_INVENTORY"            73.38 MB  910879 rows
    . . exported "YYYYY."."EXT_DP_ACCOUNT_INTEREST_TIER"        71.44 MB  670870 rows
    . . exported "YYYYY."."PENDING_TXN_JOURNAL"                 44.14 MB    9578 rows
    . . exported "YYYYY."."CUSTOMER"                            67.22 MB  405785 rows
    . . exported "YYYYY."."TXN_BATCH_ITEM"                      66.98 MB  458442 rows
    . . exported "YYYYY."."ACCOUNT"                             58.82 MB  443064 rows
    . . exported "YYYYY."."ACCOUNT_CYCLIC_CHARGE"               56.00 MB  405963 rows
    . . exported "YYYYY."."EXT_CUSTOMER"                        57.16 MB  321364 rows
    . . exported "YYYYY."."EXT_LN_ACCT_REPMNT_EVENT"            61.50 MB  437790 rows
    . . exported "YYYYY."."ORGANISATION_CUSTOMER$AUD"           49.84 MB  241014 rows
    . . exported "YYYYY."."OFFLINE_ASYNCH_QUEUE"                52.14 MB    3562 rows
    . . exported "YYYYY."."OPERATIONAL_SVCE_MAN_RUN_HIST"       47.91 MB  766864 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_INTEREST_TIER"       46.44 MB  670938 rows
    . . exported "YYYYY."."LN_ACCT_PERIOD_CYCLE_STAT_HIST"      42.09 MB  260703 rows
    . . exported "YYYYY."."ADDRESS"                             41.39 MB  411753 rows
    . . exported "YYYYY."."EXT_ACCOUNT_RELATIONSHIP"            41.98 MB  305353 rowsEXCLUDE option is not working for partition tables.. But its working fine for other tables
    examples
    EXCLUDE=TABLE:"IN('ACCOUNT_STATEMENT_HISTORY','CUSTOMER_IMAGE','XAPI_ACTIVITY_HISTORY','GL_ACCOUNT_SUMMARY$AUD'
    {code}
    the above part is working fine..
    But.
    {code}
    EXCLUDE=TABLE:"IN('GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART5','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART6')the above exclude option is not working.. data's from the table are exported into the dump..
    Thanks & Regards
    Sami.
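    As far as I know, schema-mode EXCLUDE=TABLE only filters whole table names, which is why the TABLE:PARTITION entries are silently ignored. A sketch of a two-pass workaround (names taken from the log above; a parfile avoids shell-quoting problems):
    cat > exclude.par <<'EOF'
    schemas=YYYYY
    EXCLUDE=TABLE:"IN('GL_ACCOUNT_HISTORY','DEPOSIT_ACCOUNT_HISTORY','TXN_JOURNAL')"
    directory=DUMPDIR1
    dumpfile=msb_rest.dmp
    EOF
    expdp YYYYY/********@devchn parfile=exclude.par
    # table mode does accept table:partition, so export the wanted
    # partitions separately:
    expdp YYYYY/********@devchn tables=YYYYY.GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART1 directory=DUMPDIR1 dumpfile=msb_glhist_p1.dmp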

  • MacBook Pro 13'' hard disk makes noises when writing and reading data, but Techtool scan shows no bad blocks on the disk; is that normal?

    I bought my MacBook Pro (13 inch, i7 processor, 750 GB storage) on 20th November 2012. But I found that when copying data onto or off the Mac, the hard disk makes noises, like it's biting something. I thought there were bad blocks on the disk, but the Techtool scan found no bad blocks and gave a passed conclusion.
    I went to the Apple store for testing, and the repairmen told me there were two choices available for me: 1) replace the hard disk, or 2) go back to the store I bought the machine from for a new one. I figured that since the machine was only a few days old, it had better not be disassembled, so I went back for a new one. Unfortunately, the new one makes more noise than my old one. I don't know why notebooks from big-name brands like Apple have such problems. Have you had this experience?

    All of the HDDs, be they in my MBPs or enclosures, are barely audible. I would say that you deserve no less. I suggest that you do not leave until you are satisfied with a near-silent HDD.
    Why you got two in a row is a puzzle, but then some people beat the odds and win the lottery.  In your case the results are not exactly positive.  All HDDs eventually fail, and some fail sooner than others.  At least you should start with a quiet one.
    Good luck.
    Ciao.

  • My Arch won't boot anymore, something about bad blocks

    It was working fine. When I booted it up this morning, it said it found some bad blocks on some of my Linux partitions. How do I scan the surface of my drives with Arch Linux? Is that even possible with Linux? This is my USB drive that has the bad blocks.

    Quicken2k wrote: Boot with ro 1? I'm new to Arch and need a bit more info. Are you telling me to boot up in read-only mode? How?
    No worries! Simply, boot your computer and when you get to GRUB, select your Arch install and hit 'e'. Now you can edit the 'kernel' line. Append 1 (one) to the end of the line. Hit enter again, and hit 'b' to boot into single user mode. Now you can login as root, and perform whatever fsck's and debugging. Be sure to report back.
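    Once you're at the single-user root shell, a sketch of the scan itself (replace /dev/sdb1 with your USB drive's partition; check lsblk or fdisk -l, and make sure it's unmounted):
    badblocks -sv /dev/sdb1    # read-only surface scan, shows progress
    fsck -f /dev/sdb1          # force a full filesystem check/repair
    fsck -c /dev/sdb1          # for ext2/3/4: run badblocks and mark the bad blocks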

  • CD-ROM has bad block, event 7

    I keep getting an event log error: CDROM has a bad block, Error 7, and an Error 51 paging error. Windows 7 freezes up. Is my CD bad? Is there any way to restore the data on it?

    Hi,
    I would like to know if the same issue occurs with other CDs.
    Usually this error is the result of a failing CD-ROM drive, or faulty media. You can try the following methods to troubleshoot the issue.
    Reinstall the device driver
    1. Restart your computer.
    2. Keep pressing F8 and then select Safe Mode.
    3. When booting in Safe Mode, go to Device Manager.
    4. Expand the "DVD/CD-ROM drives" item.
    5. Right click your CD-ROM device and choose Uninstall.
    6. Click OK to proceed.
    7. Restart your computer. The computer will automatically detect the
    device and install the driver.
    Remove the upper filter and lower filter values from the Registry
    1. Click the Start Button, type "regedit" (without quotation marks) in the Start Search box and then press ENTER.
    2. Navigate to the following key:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}
    Note: This is the CD-ROM/DVD global class key.
    3. Right click the {4D36E965-E325-11CE-BFC1-08002BE10318} entry, choose "Export", select Desktop in the Save in box and type backup in File Name. Click Save.
    Note: The backup file is on the Desktop and named backup.reg. We can simply restore the registry by double-clicking the backup.reg file.
    4. Highlight this key {4D36E965-E325-11CE-BFC1-08002BE10318}, and on the right pane, delete the UpperFilters and LowerFilters entries if these strings are present.
    5. Restart your computer and check if the problem disappears.
    If the problem persists after trying all of the above methods, but the CD can be read on another computer, the CD-ROM drive could be physically damaged. You need to contact the manufacturer to fix the hardware issue.
    Best Regards,
    Niki
    Please remember to click "Mark as Answer" on the post that helps you, and to click "Unmark as Answer" if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • How to partition tables and indexes in this scenario?

    So our situation is pretty simple. We have 3 tables.
    A, B and C
    the model is A->>B->>C
    Currently A, B and C are range partitioned on a key, created_date; however, it's typical that only C is ever qualified with created_date. There are foreign keys from B -> A and C -> B.
    We have many queries where the data is identified by state, which is indexed (currently non-partitioned) on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again, these are non-partitioned indexes at this time.
    It is typical that we qualify A on either account or user or both. There are indexes (non-partitioned) on these.
    We have a problem now because many of the queries use leading wildcards, i.e. account like '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard, but this isn't always possible.
    We are wondering how we can benefit from partitioning and/or sub-partitioning table A, since it's partitioned on created_date but we rarely qualify by that.
    We are also wondering where and how we can benefit from either a global partitioned index or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
    I am also wondering whether pushing the state column from A (that's used to qualify A) down to C would have any advantage.
    C is the table that we currently qualify with the partition key, so I figure you could also push down the state from A that's used to qualify the set of C's we want, based on the set of B's we want, based on the set of A's, through qualifying on columns within A.
    If we push down some of those values to C and simply use C when filtering, I'm wondering what the plans will look like compared to having to work all the way up from the bottom to the top before it begins qualifying A.
    Edited by: steffi on Jan 14, 2011 11:36 PM

    > We are wondering how we can benefit from partitioning and/or sub-partitioning table A, since it's partitioned on created_date but rarely qualify by that.
    Very good question. Why did you partition on it? You will never have INSERTs on these partitions, but maybe deletes and updates? The only advantage (I can think of) would be to set these partitions in a read-only tablespace to ease backup... but that's a weird reason.
    > We have many queries where the data is identified by state, which is indexed (currently non-partitioned) on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again, these are non-partitioned indexes at this time.
    Of course. Why should they be partitioned by created_date?
    > It is typical that we qualify A on either account or user or both. There are indexes (non-partitioned) on these. We have a problem now because many of the queries use leading wildcards, i.e. account like '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard, but this isn't always possible.
    I would suspect a full index scan, rather. Isn't it?
    > We are also wondering where and how we can benefit from either a global partitioned index or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
    As A is not accessed by any partition key, why should C and B profit? You should look to partition by the key you are using for access. But you are looking to tune your SQLs where the access is like '%ACCOUNT' on A. Then, when there is a match, Oracle joins via your index and nested loop (right?) to B and C.
    > I am also wondering whether pushing the state from A that's used to qualify A down to C would have any advantage.
    Why should it? It just makes the table and indexes larger => more IO.
    > C is the table that we currently qualify with the partition key, so I figure you could also push down the state from A that's used to qualify the set of C's we want, based on the set of B's we want, based on the set of A's, through qualifying on columns within A.
    If the access from A to C were .. AND A.CREATE_DATE = C.CREATE_DATE AND c.key LIKE '%what I want%' (which does not qualify for a FK ;-) ), then, as that could result in a partition scan, you could "profit". But I'm sure that's not your model.
    > If we push down some of those values to C and simply use C when filtering, I'm wondering what the plans will look like compared to having to work all the way up from the bottom to the top before it begins qualifying A.
    So you want to denormalize A, B, C into one table? With the same access, like '%ACCOUNT', you would get a full scan on an index as before, just the objects would be larger due to redundancy and harder to maintain. In the end you would have a bad and slower design.
    Maybe you should explain what the underlying problem is.
    A full index scan cannot be avoided, but it can be made faster by e.g. parallel query, and then the join to B and C should be a "snip" if you just identify a small subset of rows in these tables.
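    A sketch of the parallel-query suggestion (the degree and names are arbitrary assumptions):
    sqlplus -s user/pass <<'EOF'
    -- The leading wildcard forces a full scan either way; parallel
    -- slaves at least spread the I/O across several processes.
    SELECT /*+ PARALLEL(a 8) */ *
    FROM   a
    WHERE  a.account LIKE '%ACCOUNT';
    EOF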

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run as parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column....
    Thank you for your help.
    Michael

    Yes, I did it. I tried with force parallel dml, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in Parallel.
    I tried with another statement :
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems, but not the INSERT AS SELECT!
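    A quick way to tell whether the DML itself (and not just the scan) went parallel is the check already used earlier in the thread: "DML Parallelized" in v$pq_sesstat stays at 0 when only the query part ran in parallel. A sketch (login and degree are placeholders):
    sqlplus -s user/pass <<'EOF'
    ALTER SESSION ENABLE PARALLEL DML;
    DELETE /*+ PARALLEL(t1 8) */ FROM t1;
    SELECT statistic, last_query
    FROM   v$pq_sesstat
    WHERE  statistic IN ('Queries Parallelized', 'DML Parallelized');
    ROLLBACK;
    EOF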
