How to logically corrupt a block

Sir,
I know different ways to check whether a block is corrupted, but I can only test them if some block actually is corrupted.
How would you make a block corrupt? I would be very grateful if you could give an
example of deliberately corrupting a block.
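
A minimal sketch of one way to corrupt a block deliberately, for a disposable test database only; the tablespace name, file path and object names below are illustrative assumptions, not taken from this thread. Overwriting bytes in a datafile with dd produces what Oracle classifies as physical corruption; logical corruption is usually simulated with NOLOGGING operations instead (see the threads below).

SQL> CREATE TABLESPACE corrupt_test DATAFILE '/u01/oradata/TEST/corrupt_test01.dbf' SIZE 10M;
SQL> CREATE TABLE scott.t_corrupt TABLESPACE corrupt_test AS SELECT * FROM all_objects WHERE ROWNUM <= 1000;
SQL> SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS blk FROM scott.t_corrupt WHERE ROWNUM = 1;  -- pick a block that holds table data
SQL> ALTER SYSTEM CHECKPOINT;
SQL> ALTER TABLESPACE corrupt_test OFFLINE;
$ dd if=/dev/urandom of=/u01/oradata/TEST/corrupt_test01.dbf bs=8192 seek=<blk> count=1 conv=notrunc   # assumes an 8 KB block size
SQL> ALTER TABLESPACE corrupt_test ONLINE;
SQL> SELECT COUNT(*) FROM scott.t_corrupt;   -- should now fail with ORA-01578

A dbv run or an RMAN validation against the same file should then report the damaged block as well.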

Hi Sheila M
I need to find a way to extract lists of emails from Junk mail folder and add them en masse to blocked senders, without having to add them to my blocked senders one by one.
I have had the same email address for too long to change it, but I receive huge amounts of junk mail. I have so far barred nearly 2000 senders and have 1100 sitting in my junk folder from the last week. I barred them one by one, using the RULES function... I simply cannot keep doing this.
I have MacBook Pro running Mavericks, plus iPad plus iPhone 5.
The problem is aggravated by the fact that Apple has a weakness in Mail: I may bar thousands of senders on my Mac, but the filter does not sync the barred addresses to my iPad or iPhone, so if I use Mail on either, I get many thousands of emails at once and have to mark them one by one to delete. It's ridiculous.
To solve this problem, I have decided to block all the senders in webmail.
However I still have the same problem with extracting multiple addresses from my junk mail folder in order to copy them into blocked senders list.
Genius Bar said it can't be done. I think they're wrong. There simply has to be (a) a location on my Mac where I can find a list of all those I have already barred; (b) a way to extract email addresses from groups of received mail.
I'd be grateful for some advice, please.

Similar Messages

  • Logical corruption of block

    Dear Experts
    Can you please help me understand logical block corruption in detail?
    Thanks
    Asif Husain Khan

    I wrote a short note about it on my blog; you may want to read it:
    http://blog.aristadba.com/?p=109
    HTH
    Aman....
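
    A hedged sketch of what a logical check looks for, with made-up object names: ANALYZE ... VALIDATE STRUCTURE reads every block of the segment and, with CASCADE, cross-checks table rows against index entries, so a logically inconsistent block surfaces as an error rather than passing the physical checksum test.
    SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;
    -- "Table analyzed" means no inconsistency was found; ORA-01499 (table/index
    -- cross-reference failure) during the scan would indicate logical corruption.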

  • Diff between logical and physical block corruption

    What is the difference between physical and logical block corruption?
    The DBVERIFY utility and the ANALYZE command are used to check for logical block corruption, not physical corruption; am I correct?
    When I get
    ORA-01578: ORACLE data block corrupted (file # 9, block # 13)
    ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
    ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
    ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'
    how can I confirm whether this is logical or physical block corruption?
    Please throw some light on this.
    kumaresh

    The following link may help you:
    http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1012.htm
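
    To tell the two apart in practice, a hedged sketch (the datafile numbers are the ones from the errors above): run an RMAN validation with CHECK LOGICAL and then look at V$DATABASE_BLOCK_CORRUPTION, whose CORRUPTION_TYPE column separates physically damaged blocks (e.g. FRACTURED, CHECKSUM, ALL ZERO) from blocks whose contents are internally inconsistent (logical corruption).
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATAFILE 2, 9;
    SQL>  SELECT file#, block#, blocks, corruption_type
          FROM   v$database_block_corruption;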

  • Logically corrupted blocks in standby

    Hi
    Assume I have a primary database and standby database.
    Accidentally, some of the objects (indexes and tables) are in NOLOGGING mode in the primary database.
    Force logging is not set.
    When I scan the datafiles on the standby, I realize that some datafiles are logically corrupted because of this.
    How can I get rid of these corrupted blocks?
    If I rebuild the indexes with the LOGGING option and recreate the tables as LOGGING,
    will that solve the problem? Or is there any other suggestion?
    Many thanks

    Your primary should run in force logging mode (ALTER DATABASE FORCE LOGGING); then the object-level setting is ignored for direct path operations. You can apply an incremental backup to the standby to catch up (or just recreate the standby, which might be as quick, depending on volumes).
    Niall Litchfield
    http://www.orawin.info/
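
    A hedged sketch of the approach Niall describes; the index name, SCN and backup path are placeholders, not from the thread:
    SQL>  SELECT force_logging FROM v$database;            -- on the primary; should return YES
    SQL>  ALTER DATABASE FORCE LOGGING;
    SQL>  ALTER INDEX app_owner.big_idx REBUILD LOGGING;   -- regenerates the index through the redo stream
    RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/backup/stby_catchup_%U';
    -- the incremental backup is then cataloged and applied on the standby to refresh the invalidated blocks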

  • How to identify corrupted block in Particular tablespace?

    Hi,
    I am getting a lot of block corruption errors from one particular tablespace's datafile. I am not able to use the DBVERIFY utility because the datafiles were not created with a .dbf extension.
    How do I identify the corrupted blocks?

    Detecting and Repairing Data Block Corruption
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96521/repair.htm (9i)
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/repair.htm (10g)
    Find more information in “Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g”, Metalink Note 28814.1.
    http://dbataj.blogspot.com/2008/04/oracle-database-block-corruption.html
    http://sysdba.wordpress.com/2006/04/05/how-to-check-for-and-repair-block-corruption-with-rman-in-oracle-9i-and-oracle-10g/
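
    Worth noting, as a hedged aside: DBVERIFY does not require a .dbf extension for file-system datafiles; any path works as long as the block size is supplied. The path and tablespace name below are placeholders:
    $ dbv file=/u02/oradata/PROD/app_data_01.ora blocksize=8192 logfile=dbv_app_data_01.log
    RMAN> BACKUP VALIDATE CHECK LOGICAL TABLESPACE app_data;   -- alternative that records any corrupt blocks it finds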

  • How to overcome corrupt redo log block header

    Friends, can you tell me of any utilities or packages with which I can clean up my redo log? I am encountering this error during recovery of my database.
    Nand

    Jafar,
    SQL> recover database using backup controlfile;
    ORA-00279: change 166327733 generated at 04/03/2008 12:06:36 needed for thread
    1
    ORA-00289: suggestion : C:\COLD\ARCHIVES\ARC17966_0616830416.001
    ORA-00280: change 166327733 for thread 1 is in sequence #17966
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    C:\STREDO09.LOG
    ORA-00283: recovery session canceled due to errors
    ORA-00354: corrupt redo log block header
    ORA-00353: log corruption near block 4044 change 166329643 time 04/03/2008
    12:46:45
    ORA-00334: archived log: 'C:\STREDO09.LOG'
    ORA-01112: media recovery not started
    The stredo09.log is the standby redo log. My production server crashed. Unfortunately the physical standby was also down before the main server crashed (power failure). I am not able to open the production server.
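
    If the needed redo itself is corrupt and cannot be re-fetched from the primary or another archive destination, the usual (data-losing) fallback is incomplete recovery up to the last usable log followed by OPEN RESETLOGS. This is only a sketch of the general approach, not specific advice for this crash; Oracle Support should be involved before opening a production database this way.
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
         -- apply archived logs until the corrupt one is suggested, then enter CANCEL
    SQL> ALTER DATABASE OPEN RESETLOGS;
         -- all changes after the cancellation point are lost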

  • Questions on Logical corruption

    Hello all,
    My DB versions range from 10g to 11.2.0.3 on various operating systems. We are in the process of deploying RMAN on our systems, and I am having a hard time testing and getting a grip on the whole logical-corruption topic. From what I understand (please correct me if I am wrong):
    1. I can add the CHECK LOGICAL syntax to my backup command (and that will check for both physical and logical corruption). But how much overhead does it have? It seems to be anywhere from 14-20% extra backup time.
    2. Leaving MAXCORRUPT at its default (which I believe is 0), if there is a physical corruption my backup will fail and I should get an email/alert saying the backup broke.
    3. Would this be the same for logical corruption too? Would RMAN report logical corruption right away, as it does for physical corruption, or do I have to query V$DATABASE_BLOCK_CORRUPTION after the backup is done to figure out whether I have logical corruption?
    4. How would one test logical corruption (besides a NOLOGGING operation, as our DBs have force logging turned on)?
    5. Is it good practice to include CHECK LOGICAL in your daily backup? (I have no problem with it if the DBs are small, but some of ours are close to 50 TB and I think CHECK LOGICAL is going to increase the backup time significantly.)
    6. If RMAN cannot repair logical corruption, why would I want to do the CHECK LOGICAL (besides knowing I have a problem that the end users have to fix by reloading the data, assuming it is a table and not an index that is corrupt)?
    7. Are there any best practices for checking logical corruption in databases of 50+ TB?
    I have searched here and on Google but could not find any way of reproducing logical corruption (maybe there is none), so I wanted to ask the community about it.
    Thank you in advance for your time. 

    General info:
    http://www.oracle.com/technetwork/database/focus-areas/availability/maa-datacorruption-bestpractices-396464.pdf
    You might want to google "fractured block" for information about it outside RMAN.  You can simulate that by writing a C program to flip some bits, although technically that would be physical corruption.  Also see Dealing with Oracle Database Block Corruption in 11g | The Oracle Instructor.
    One way to simulate it is to use NOLOGGING operations and then try to recover (this is why force logging is used, so google corruption force logging).  Here's an example: Block corruption after RMAN restore and recovery !!! | Practical Oracl Hey, no simulate, that's for realz!
    Somewhere in the recovery docs it explains... aw, I lost my train of thought; you might get better answers with shorter questions, or one question per thread, on forums like this.  Oh yeah, somewhere in the docs it explains that RMAN doesn't report the error right away, because later in the recovery stream it may decide the block is newly formatted and there wasn't really a problem.
    This really depends on how much data is changing and how.  If you do many NOLOGGING operations or run a complicated standby, you can run into this more.  There's a trade-off between verifying everything and backup windows; site requirements control everything.  That said, I've found only paranoid DBAs check enough; IT managers often say "that will never happen."  Actually, even paranoid DBAs don't check enough; the vagaries of manual labor and flaky equipment can overshadow anything.
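
    For question 4, a hedged sketch of the NOLOGGING-based simulation mentioned above, suitable only for a test database where force logging can be switched off temporarily; every object name, file number and path is invented for illustration:
    SQL>  ALTER DATABASE NO FORCE LOGGING;
    SQL>  CREATE TABLE scott.t_nolog NOLOGGING AS SELECT * FROM all_objects WHERE 1 = 0;
    RMAN> BACKUP DATAFILE 7 FORMAT '/backup/df7_%U';        -- assume t_nolog lives in file 7
    SQL>  INSERT /*+ APPEND */ INTO scott.t_nolog SELECT * FROM all_objects;
    SQL>  COMMIT;
    SQL>  ALTER DATABASE DATAFILE 7 OFFLINE;
    RMAN> RESTORE DATAFILE 7;
    RMAN> RECOVER DATAFILE 7;
    SQL>  ALTER DATABASE DATAFILE 7 ONLINE;
    SQL>  SELECT COUNT(*) FROM scott.t_nolog;               -- the unlogged blocks now raise ORA-01578 / ORA-26040
    Afterwards, a BACKUP VALIDATE CHECK LOGICAL DATABASE run can be used to see how (and whether) these blocks are reported for your version.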

  • Logical corruption in datafile

    What is logical corruption?
    How can this occur in a datafile? Is it related to, or caused by, the disk?
    How do I avoid it?
    Is it possible to check for this at regular intervals with some job or script? Any idea what command to use and how? Will DBVERIFY do it?
    Any good reading/URLs are most welcome.
    Thank you very much.

    What's the DB version and OS? Where did you read the term "logical corruption" applied to datafiles? AFAIK, datafiles get physically corrupted only; logical corruption happens within the blocks, for example an index entry pointing to a null ROWID. I am not sure that I have come across any situation or reference where this corruption is mentioned for files as well. To check for it, the best possible tool is RMAN, which can do the job with some simple commands.
    HTH
    Aman....
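
    On the "regular interval with some job or script" part, a hedged sketch of a cron-driven check (all paths are placeholders); the RMAN command is the same one discussed elsewhere in these threads:
    # crontab entry: run the check at 02:00 every night
    0 2 * * * /home/oracle/scripts/check_corruption.sh >> /home/oracle/logs/check_corruption.log 2>&1
    # check_corruption.sh (sketch)
    #!/bin/sh
    . /home/oracle/.profile
    rman target / <<EOF
    BACKUP VALIDATE CHECK LOGICAL DATABASE;
    EOF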

  • Backup and Logical Corruption

    Hello,
    I am running a backup and checking for any logical corruption -
    RMAN> backup check logical database;
    Starting backup at 03-MAR-10
    allocated channel: ORA_SBT_TAPE_1
    channel ORA_SBT_TAPE_1: SID=135 device type=SBT_TAPE
    channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.1.0
    allocated channel: ORA_SBT_TAPE_2
    channel ORA_SBT_TAPE_2: SID=137 device type=SBT_TAPE
    channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 5.5.1.0
    allocated channel: ORA_SBT_TAPE_3
    channel ORA_SBT_TAPE_3: SID=138 device type=SBT_TAPE
    channel ORA_SBT_TAPE_3: Data Protection for Oracle: version 5.5.1.0
    channel ORA_SBT_TAPE_1: starting full datafile backup set
    channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
    input datafile file number=00014 name=/oracle1/data01/TESTDB/TESTDB_compress_test_01.dbf
    input datafile file number=00006 name=/oracle/TESTDB/data01/TESTDB_shau_01.dbf
    input datafile file number=00015 name=/oracle/product/11.1/dbs/ILM_TOOLKIT_IML_TEST_TAB_A.f
    channel ORA_SBT_TAPE_1: starting piece 1 at 03-MAR-10
    channel ORA_SBT_TAPE_2: starting full datafile backup set
    channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/oracle/TESTDB/data02/TESTDB_undo_01.dbf
    input datafile file number=00013 name=/oracle/TESTDB/data01/TESTDB_roop_01.dbf
    input datafile file number=00012 name=/oracle/TESTDB/data01/TESTDB_example_01.dbf
    input datafile file number=00005 name=/oracle/TESTDB/data01/TESTDB_sysaud_tab_1m_01.dbf
    channel ORA_SBT_TAPE_2: starting piece 1 at 03-MAR-10
    channel ORA_SBT_TAPE_3: starting full datafile backup set
    channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
    input datafile file number=00004 name=/oracle/TESTDB/data01/TESTDB_users_01.dbf
    input datafile file number=00001 name=/oracle/TESTDB/data01/TESTDB_system_01.dbf
    input datafile file number=00002 name=/oracle/TESTDB/data01/TESTDB_sysaux_01.dbf
    input datafile file number=00025 name=/oracle/export_files/TESTDB_users_02.dbf
    channel ORA_SBT_TAPE_3: starting piece 1 at 03-MAR-10
    channel ORA_SBT_TAPE_3: finished piece 1 at 03-MAR-10
    piece handle=5ul7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
    channel ORA_SBT_TAPE_3: backup set complete, elapsed time: 00:05:15
    channel ORA_SBT_TAPE_2: finished piece 1 at 03-MAR-10
    piece handle=5tl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
    channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:06:56
    channel ORA_SBT_TAPE_1: finished piece 1 at 03-MAR-10
    piece handle=5sl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
    channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:08:16
    Finished backup at 03-MAR-10
    Starting Control File and SPFILE Autobackup at 03-MAR-10
    piece handle=c-2109934325-20100303-0c comment=API Version 2.0,MMS Version 5.5.1.0
    Finished Control File and SPFILE Autobackup at 03-MAR-10
    Question: By looking at the output, how can I tell that RMAN did a logical check for corruption? This output looks the same as a simple backup without the logical corruption check. Please advise how to verify this.
    Thanks!

    Hi,
    I think you won't see any summary of this; it is reported only when corruption is found.
    There is also one related setting that can be incorporated here - see this example:
    Example 2-25 Specifying Corruption Tolerance for Datafile Backups
    This example assumes a database that contains 5 datafiles. It uses the SET MAXCORRUPT command to indicate that no more than 1 corruption should be tolerated in each datafile. Because the CHECK LOGICAL option is specified on the BACKUP command, RMAN checks for both physical and logical corruption.
    RUN
    {
      SET MAXCORRUPT FOR DATAFILE 1,2,3,4,5 TO 1;
      BACKUP CHECK LOGICAL
        DATABASE;
    }
    use this to see clear output:
    -- Check for physical corruption of all database files.
         VALIDATE DATABASE;
    -- Check for physical and logical corruption of a tablespace.
         VALIDATE CHECK LOGICAL TABLESPACE USERS;
    e.g.
    List of Datafiles
    =================
    File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
    ---- ------ -------------- ------------ --------------- --------
    1    FAILED 0              3536         57600           637711
      File Name: /disk1/oradata/prod/system01.dbf
      Block Type Blocks Failing Blocks Processed
      ---------- -------------- ----------------
      Data       1              41876
      Index      0              7721
      Other      0              4467
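
    As a hedged addition: one way to confirm afterwards whether the CHECK LOGICAL pass flagged anything is to query the corruption views; no rows returned means no corruption was recorded.
    SQL> SELECT file#, block#, blocks, marked_corrupt
         FROM   v$backup_corruption;                       -- rows recorded by BACKUP ... CHECK LOGICAL
    SQL> SELECT file#, block#, blocks
         FROM   v$database_block_corruption;               -- currently known corrupt blocks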

  • Ocrcheck shows "Logical corruption check failed"

    Hi, I have a strange issue, that I am not sure how to recover from...
    In a random 'ocrcheck' we found the above 'logical corruption'. In the CRS_HOME/log/nodename/client/ I found the previous ocrcheck was done a month earlier and was successful. So, something in the last month caused a logical corruption. The cluster is functioning ok currently.
    So, I tried doing an ocrdump on some backups we have and I am receiving the following error -
    #ocrdump -backupfile backup00.ocr <<< any backup I try for the past month
    PROT-306: Failed to retrieve cluster registry data
    This error occurs even on the backup file taken just prior to the successful ocrcheck from a month earlier. The log for this ocrdump shows -
    cat ocrdump_6494.log
    Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
    2010-08-18 12:57:17.024: [ OCRDUMP][2813008768]ocrdump starts...
    2010-08-18 12:57:17.038: [  OCROSD][2813008768]utread:3: Problem reading buffer 7473000 buflen 4096 retval 0 phy_offset 15982592 retry 0
    2010-08-18 12:57:17.038: [  OCROSD][2813008768]utread:4: Problem reading the buffer errno 2 errstring No such file or directory
    2010-08-18 12:57:17.038: [  OCRRAW][2813008768]gst: Dev/Page/Block [0/3870/3927] is CORRUPT (header)
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]rbkp:2: could not read the free list
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]gst:could not read fcl page 1
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]rbkp:2: could not read the free list
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]gst:could not read fcl page 2
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]fkce:2: problem reading the tnode 131072
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]propropen: Failed in finding key comp entry [26]
    2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failed to open key handle for key name [SYSTEM] [PROC-26: Error while accessing the physical storage]
    2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failure when trying to traverse ROOTKEY [SYSTEM]
    2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Exiting [status=success]...
    NOTE: an 'ocrdump' of the active ocr does work and creates the ocrdumpfile
    The corruption in the ocr seems to be two keynames pointing to the same block.
    Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
    2010-08-18 13:22:54.095: [OCRCHECK][285084544]ocrcheck starts...
    2010-08-18 13:22:55.447: [OCRCHECK][285084544]protchcheck: OCR status : total = [262120], used = [15496], avail = [246624]
    2010-08-18 13:22:55.545: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.css.diskfile2], and keyname [SYSTEM.css.diskfile1.FILENAME] point to same block_number [3928]
    2010-08-18 13:22:55.732: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.OCR.MANUALBACKUP.ITEMS.0], and keyname [SYSTEM.css.diskfile1] point to same block_number [3927]
    2010-08-18 13:23:03.159: [OCRCHECK][285084544]Exiting [status=success]...
    Since one of the keynames refers to the votedisk, that is not appearing correctly on a query -
    crsctl query css votedisk
    0. 0 /oracrsfiles/voting_disk_01
    1. 0
    2. 0 backup_20100818_103455.ocr <<<<this value changes if I issue a command that writes something to the ocr, in this case a manual backup.
    My DBA is opening an SR, but I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?
    Also, is anyone familiar with the 'ocrconfig -repair' as a possible solution?
    Although this is a development cluster (two nodes), rebuilding would be a disaster ;)
    Any help or thoughts would be much appreciated!

    Hi buddy,
    My DBA is opening an SR
    Well... with corruption problems, there is no doubt it's better to work with the support team.
    but I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?
    No, that is not the idea... if your backup is not good, it's not safe to restore it. ;)
    Also, is anyone familiar with the 'ocrconfig -repair' as a possible solution?
    This is for repairing nodes that were down while some change to the configuration (replacing the OCR, for example) was executed, so I guess it's not your case.
    Good Luck!
    Cerreia
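
    For reference, a hedged sketch of the commands usually involved when falling back to an OCR backup; they are run as root, affect the whole cluster, and should only be used under Support's guidance. The backup path is a placeholder.
    # ocrconfig -showbackup                                  # list automatic and manual OCR backups
    # crsctl stop crs                                        # on every node, before restoring
    # ocrconfig -restore /u01/app/crs/cdata/crs/backup00.ocr
    # crsctl start crs
    # ocrcheck                                               # the restored registry should now pass its logical check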

  • Logical corruption found in the sysaux tablespace

    Dear All:
    Lately we see a logical corruption error when running the DBVERIFY command, which shows block corruption. It is always in the SYSAUX tablespace. The database is 11g and the platform is Linux.
    We get an error like "error backing up file 2 block xxxx: logical corruption", and it appears in the alert.log from the automated maintenance jobs (such as the SQL Tuning Advisor) running during the maintenance window.
    As far as I know, we can't drop or rename the SYSAUX tablespace. There is a STARTUP MIGRATE option to drop SYSAUX, but it does not work due to the presence of domain indexes. You can run RMAN block media recovery, but it ends up not fixing it, since RMAN backups are more about physical consistency than about maintaining logical integrity.
    Any help, advise, suggestion will be highly appreciated.

    If you leave this corruption there, you are likely to face a big issue that will compromise database availability sooner or later. SYSAUX is a critical tablespace, so you must proceed with caution.
    Make sure you have a valid backup and don't do anything unless you are sure about what you are doing and have a fallback procedure.
    If you still have a valid backup, you can use RMAN to perform block-level recovery; this will help you fix the block. Otherwise, try to restore and recover SYSAUX. If you cannot fix the block by refreshing the SYSAUX tablespace, then I suggest you create a new database and use the Transportable Tablespace technique to migrate all tablespaces from your current database to the new one and get rid of this database.
    ~ Madrid
    http://hrivera99.blogspot.com
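
    A hedged sketch of the block media recovery step mentioned above, in 11g syntax; the block number is left as the placeholder from the error message, and a usable backup of file 2 must exist:
    RMAN> RECOVER DATAFILE 2 BLOCK xxxx;                     -- substitute the real block number
    SQL>  SELECT * FROM v$database_block_corruption;         -- the block should disappear from this list once repaired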

  • Corrupting the block to continue recovery in physical standby

    Hi,
    I would just like to ask how I can mark a block as corrupt so that recovery can continue on the physical standby.
    DB Version: 11.1.0.7
    Database Type: Data Warehouse
    The setup we have is a primary database and a standby database. We are not using Data Guard; our standby is another physical copy of production which acts as a standby and is kept in sync by a script that runs from time to time to apply the archived logs coming from production (it is not configured to sync using ARCH or LGWR and the corresponding configuration).
    Now the standby database is out of sync due to errors encountered while trying to apply an archived log; the error is below:
    Fri Feb 11 05:50:59 2011
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Media Recovery Log /u01/archive/<sid>/1_50741_651679913.arch
    Fri Feb 11 05:52:06 2011
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0, kdr9ir2rst0()+326]
    Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc (incident=631460):
    ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
    Incident details in: /u01/app/oracle/diag/rdbms/<sid>/<sid>/incident/incdir_631460/<sid>pr0028085_i631460.trc
    Fri Feb 11 05:52:10 2011
    Trace dumping is performing id=[cdmp_20110211055210]
    Fri Feb 11 05:52:14 2011
    Sweep Incident[631460]: completed
    Fri Feb 11 05:52:17 2011
    Slave exiting with ORA-10562 exception
    Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc:
    ORA-10562: Error occurred while applying redo to data block (file# 36, block# 1576118)
    ORA-10564: tablespace <tablespace name>
    ORA-01110: data file 36: '/u02/oradata/<sid>/<datafile>.dbf'
    ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 14877145
    ORA-00607: Internal error occurred while making a change to a data block
    ORA-00602: internal programming exception
    ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
    Based on the error log it seems we are hitting a bug documented on Metalink (document IDs 460169.1 and 882851.1).
    My question is: the datafile# is given, the block# is known, and the data object is also identified. I have verified that the object is not that important. Is there a way to mark the block as corrupted so that recovery can continue? Then I would just drop the table in production, so the drop also happens on the standby and the corrupted block is gone too. Is this feasible?
    If it is not, can you suggest what I can do next so that the physical standby can sync with production again, aside from rebuilding the standby?
    Please note that I also ran dbv against the file to confirm whether anything is marked as corrupted, and the result for that datafile is also good:
    dbv file=/u02/oradata/<sid>/<datafile>_19.dbf logfile=dbv_file_36.log blocksize=16384
    oracle@<server>:[~] $ cat dbv_file_36.log
    DBVERIFY: Release 11.1.0.7.0 - Production on Sun Feb 13 04:35:28 2011
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    DBVERIFY - Verification starting : FILE = /u02/oradata/<sid>/<datafile>_19.dbf
    DBVERIFY - Verification complete
    Total Pages Examined : 3840000
    Total Pages Processed (Data) : 700644
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 417545
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 88910
    Total Pages Processed (Seg) : 0
    Total Pages Failing (Seg) : 0
    Total Pages Empty : 2632901
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Total Pages Encrypted : 0
    Highest block SCN : 3811184883 (1.3811184883)
    Any help is really appreciated. I hope to hear feedback from you.
    Thanks

    damorgan, I understand your opinion.
    I am new to the organization and just inherited a data warehouse database without an RMAN backup. I am still setting up the RMAN backup, which is why I can't use RMAN to resolve the issue. The only thing I have is the physical standby, and it is not a standby that syncs automatically using Data Guard or a standard standby setup. I am just looking for a solution that is applicable to the current situation.
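
    For reference, a hedged sketch of the documented escape hatch for this scenario: manual recovery can be told to tolerate a limited number of corrupt blocks with the ALLOW n CORRUPTION clause, which marks the affected block corrupt and lets the apply continue. Data in that block is lost, so it only makes sense when the object is expendable, as described above; whether it helps with an apply that dies on an internal error (as here) is something Support would have to confirm.
    SQL> RECOVER AUTOMATIC STANDBY DATABASE ALLOW 1 CORRUPTION;
    -- apply the problem archived log; the damaged block is marked corrupt and recovery moves on
    SQL> RECOVER AUTOMATIC STANDBY DATABASE;
    -- then resume normal recovery without the clause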

  • Reduce Logical IO [db block gets/consistent gets]

    Hi,
    Still I'm unsure about the Logical IO (db block gets + consistent gets).
    I want to reduce 'consistent gets' for this query
    SQL> set autotrace traceonly
    SQL> select * from cm_per_phone_vw;
    905 rows selected.
    Execution Plan
    Plan hash value: 524433310
    | Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |              |   868 | 38192 |     8   (0)| 00:00:01 |
    |   1 |  SORT GROUP BY NOSORT        |              |   868 | 38192 |     8   (0)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| CI_PER_PHONE |  1238 | 54472 |     8   (0)| 00:00:01 |
    |   3 |    INDEX FULL SCAN           | CM172C0      |  1238 |       |     1   (0)| 00:00:01 |
    Statistics
              8  recursive calls
              0  db block gets
            922  consistent gets
              4  physical reads
              0  redo size
          39151  bytes sent via SQL*Net to client
           1045  bytes received via SQL*Net from client
             62  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              905  rows processed
    Following is the view it's accessing:
    CREATE OR REPLACE VIEW CM_PER_PHONE_VW
    AS
    SELECT
      per_id
      , MAX(DECODE(TRIM(phone_type_cd), 'MOB', phone)) AS MOB
      , MAX(DECODE(TRIM(phone_type_cd), 'HOME', phone)) AS HOME
      , MAX(DECODE(TRIM(phone_type_cd), 'BUSN', TRIM(phone) || ' ' || TRIM(extension))) AS BUSN
      , MAX(DECODE(TRIM(phone_type_cd), 'FAX', phone)) AS FAX
      , MAX(DECODE(TRIM(phone_type_cd), 'INT', phone)) AS INT
    FROM
      ci_per_phone
    GROUP BY
      per_id
    I have the following indexes on table ci_per_phone:
    INDEX_NAME                     COLUMN_NAME                    COLUMN_POSITION
    XM172P0                        PER_ID                                       1
    XM172P0                        SEQ_NUM                                      2
    XM172S1                        PHONE                                        1
    CM172C0                        PER_ID                                       1
    I tried creating indexes on PER_ID and PHONE_TYPE_CD, but the consistent gets only go down to 920 from 922.
    Just out of curiosity, how can I reduce this further?
    Secondly, is there any explanation of the 'Operation' entries in the plan, e.g. TABLE ACCESS BY INDEX ROWID?
    Please advise.
    Luckys.

    Further, I'm having a problem with another query, which uses a view:
    CREATE OR REPLACE VIEW CM_PER_CHAR_VW
    AS
    SELECT
    /*+ full (a) */
      a.acct_id
      , MAX(DECODE(a.char_type_cd, 'ACCTYPE', a.char_val)) acct_type
      , MAX(DECODE(a.char_type_cd, 'PRVBLCYC', a.adhoc_char_val)) prev_bill_cyc
    FROM
      ci_acct_char a
    WHERE
      a.effdt =
        (SELECT
          MAX(a1.effdt)
        FROM
          ci_acct_char a1
        WHERE a1.acct_id = a.acct_id
        AND a1.char_type_cd = a.char_type_cd)
    GROUP BY
      a.acct_id
    I'm not able to reduce the consistent gets, and a FILTER even appears in the plan.
    I've analyzed the table as well as the index on the table.
    cisadm@CCBDEV> select * from cm_acct_char_vw;
    2649 rows selected.
    Execution Plan
    Plan hash value: 132362271
    | Id  | Operation              | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |              |    27 |  4536 |    14   (8)| 00:00:01 |
    |   1 |  HASH GROUP BY         |              |    27 |  4536 |    14   (8)| 00:00:01 |
    |   2 |   VIEW                 |              |    27 |  4536 |    14   (8)| 00:00:01 |
    |*  3 |    FILTER              |              |       |       |            |          |
    |   4 |     HASH GROUP BY      |              |    27 |  2916 |    14   (8)| 00:00:01 |
    |   5 |      NESTED LOOPS      |              |  2686 |   283K|    13   (0)| 00:00:01 |
    |   6 |       TABLE ACCESS FULL| CI_ACCT_CHAR |  2686 |   157K|    12   (0)| 00:00:01 |
    |*  7 |       INDEX RANGE SCAN | XM064P0      |     1 |    48 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("A"."EFFDT"=MAX("A1"."EFFDT"))
       7 - access("A1"."ACCT_ID"="A"."ACCT_ID" AND
                  "A1"."CHAR_TYPE_CD"="A"."CHAR_TYPE_CD")
    Statistics
              0  recursive calls
              0  db block gets
           2754  consistent gets
              0  physical reads
              0  redo size
          76517  bytes sent via SQL*Net to client
           2321  bytes received via SQL*Net from client
            178  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             2649  rows processed
    Here's the tkprof output:
    select *
    from
    cm_acct_char_vw
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch      178      0.07       0.05          0       2754          0        2649
    total      181      0.07       0.05          0       2754          0        2649
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 63  (CISADM)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: CHOOSE
          0   HASH (GROUP BY)
          0    VIEW
          0     FILTER
          0      HASH (GROUP BY)
          0       NESTED LOOPS
          0        TABLE ACCESS   MODE: ANALYZED (FULL) OF 'CI_ACCT_CHAR'
                       (TABLE)
          0        INDEX   MODE: ANALYZED (RANGE SCAN) OF 'XM064P0'
                       (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                     179        0.00          0.00
      SQL*Net message from client                   179        0.00          0.08
    ********************************************************************************
    I have a similar query for another table, where there are 1110 rows, but in its explain plan no FILTER appears in the predicate information:
    Predicate Information (identified by operation id):
       2 - access("P"."EFFDT"="VW_COL_1" AND "PER_ID"="P"."PER_ID" AND
                  "CHAR_TYPE_CD"="P"."CHAR_TYPE_CD")Both the queries have somewhat similar views.
    I've got 2 questions,
    Is there a way I can reduce the consistent gets( I've tried with/without HINTS),
    secondly whats the predicate access shows as 'VW_COL_1'
    please advice.
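
    Not from the thread, but offered as a hedged sketch for the first question: a composite index that covers every column the view touches lets the optimizer satisfy the query from the index alone and skip the TABLE ACCESS BY INDEX ROWID step, which is where most of the consistent gets in this plan come from. Column names follow the view definition above; whether it actually helps depends on row counts, nullability and clustering.
    CREATE INDEX cm172c1 ON ci_per_phone (per_id, phone_type_cd, phone, extension);
    SET AUTOTRACE TRACEONLY
    SELECT * FROM cm_per_phone_vw;   -- re-check consistent gets with the new index in place
    As for the second question, 'VW_COL_1' is the optimizer's generated alias for the MAX(a1.effdt) value when it unnests the correlated subquery into an internal view; seeing it in the access predicate means that transformation (rather than the FILTER) was used.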

  • Difference between physical block corruption and soft block corruption

    Someone asked me a question about block corruption, so I suggested a solution and looked at the block dump, which was as follows.
    (Dump 1) The block dump I misdiagnosed (I judged it to be soft corruption, but it was
    physical corruption caused by a failed disk in a RAID 5 array):
    *** 2007-03-06 09:54:33.103
    Start dump data blocks tsn: 6 file#: 6 minblk 102032 maxblk 102032
    buffer tsn: 6 rdba: 0x01818e90 (6/102032)
    scn: 0x056c.f689329c seq: 0x01 flg: 0x06 tail: 0xa0cf0000
    frmt: 0x02 chkval: 0x7675 type: 0x06=trans data
    Hex dump of corrupt header 2 = BROKEN
    That is what the dump showed.
    However, I had already experienced physical block corruption before; I had taken the dump below to check it, and after going through several documents I reached the following conclusion.
    (Dump 2) A dump confirmed as physical corruption
    - In the line below, scn: 0x0000.00000000 seq: 0xff flg: 0x00 tail: 0x000006ff,
    the SCN is 0x0000 and the seq value is not UB1MAXVAL-1, so I concluded it was
    physical block corruption.
    *** SESSION ID:(25.59041) 2005-12-12 11:41:13.132
    Start dump data blocks tsn: 7 file#: 8 minblk 169994 maxblk 169994
    buffer tsn: 7 rdba: 0x0202980a (8/169994)
    scn: 0x0000.00000000 seq: 0xff flg: 0x00 tail: 0x000006ff
    frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
    Block header dump: 0x0202980a
    Object id on Block? Y
    seg/obj: 0xef9 csc: 0x00.8c13d2b itc: 2 flg: O typ: 2 - INDEX
    fsl: 2 fnx: 0x2024b0f ver: 0x01
    Itl Xid Uba Flag Lck Scn/Fsc
    0x01 xid: 0x0001.052.0001888b uba: 0x00c07725.2482.02 C--- 0 scn 0x0000.083a0f80
    0x02 xid: 0x0007.038.0001a196 uba: 0x00c07574.1f28.0b ---- 232 fsc 0x0f57.00000000
    My question:
    I would like to be able to tell from a block dump whether it is physical block corruption or soft (logical) block corruption.
    It seems that what I believed until now was wrong.

    Something looked odd, so I logged in directly and investigated.
    The alert error code is:
    Errors in file /app/oracle/product/10.2.0/admin/TEST_T_ktrp4vpe/bdump/test_t_smon_12714.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01578: ORACLE data block corrupted (file # 2, block # 2714)
    ORA-01110: data file 2: '/test_tdata/SYSTEM04.dbf'
    Looking this up on Metalink:
    Title: FAQ: Physical Corruption
    Document ID: Note 403747.1  Type: FAQ
    (a) ORA-01578 - This error explains physical structural damage with a particular block.
    (b) ORA-08103 - This error is a logical corruption error for a particular data block.
    (c) ORA-00600 [2662] - This error is related to block corruption , and occurs due to a higher SCN than of database SCN.
    -- What is a bit odd is that a document titled "Physical Corruption" classifies
    both physical corruption and logical corruption.
    This falls under item (a), physical structural damage, so it naturally looks like physical corruption. I cannot tell from the dump contents, but by the alert error code it
    looks like physical corruption.
    Can the alert error code and the dump contents disagree with each other?
    Post edited by:
    darkturtle
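
    For completeness, a hedged note on how block dumps like the two above are produced; the file and block numbers are the ones quoted in Dump 1:
    SQL> ALTER SYSTEM DUMP DATAFILE 6 BLOCK 102032;
    -- the formatted block dump is written to a trace file in the user/diag trace directory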

  • RMAN-05501: RMAN-11003 ORA-00353: log corruption near block 2048 change

    Hi Gurus,
    I posted a few days ago about an issue I hit while recreating my Data Guard setup.
    The main issue was that while duplicating the target from the active database I got these errors during the recovery process.
    The restore process went fine and RMAN copied the datafiles without trouble, but it stopped at the moment of starting the recovery process on the auxiliary DB.
    Yesterday I gave it one last try.
    I followed the same procedure, the one described in all the Oracle docs, Google results and so on... it's not a secret, I guess.
    Then I got the same issue, same errors.
    I read something about archived logs and block corruption and so on, and I tried many things (registering the log, etc.), and then I read something about cataloging the logfile,
    and that's what I did.
    But I was connected only to the target DB.
    contents of Memory Script:
    set until scn 1638816629;
    recover
    standby
    clone database
    delete archivelog
    executing Memory Script
    executing command: SET until clause
    Starting recover at 14-MAY-13
    starting media recovery
    archived log for thread 1 with sequence 32196 is already on disk as file /archives/CMOVP/stby/1_32196_810397891.arc
    archived log for thread 1 with sequence 32197 is already on disk as file /archives/CMOVP/stby/1_32197_810397891.arc
    archived log for thread 1 with sequence 32198 is already on disk as file /archives/CMOVP/stby/1_32198_810397891.arc
    archived log for thread 1 with sequence 32199 is already on disk as file /archives/CMOVP/stby/1_32199_810397891.arc
    archived log for thread 1 with sequence 32200 is already on disk as file /archives/CMOVP/stby/1_32200_810397891.arc
    archived log for thread 1 with sequence 32201 is already on disk as file /archives/CMOVP/stby/1_32201_810397891.arc
    archived log for thread 1 with sequence 32202 is already on disk as file /archives/CMOVP/stby/1_32202_810397891.arc
    archived log for thread 1 with sequence 32203 is already on disk as file /archives/CMOVP/stby/1_32203_810397891.arc
    archived log for thread 1 with sequence 32204 is already on disk as file /archives/CMOVP/stby/1_32204_810397891.arc
    archived log for thread 1 with sequence 32205 is already on disk as file /archives/CMOVP/stby/1_32205_810397891.arc
    archived log for thread 1 with sequence 32206 is already on disk as file /archives/CMOVP/stby/1_32206_810397891.arc
    archived log for thread 1 with sequence 32207 is already on disk as file /archives/CMOVP/stby/1_32207_810397891.arc
    archived log for thread 1 with sequence 32208 is already on disk as file /archives/CMOVP/stby/1_32208_810397891.arc
    archived log for thread 1 with sequence 32209 is already on disk as file /archives/CMOVP/stby/1_32209_810397891.arc
    archived log for thread 1 with sequence 32210 is already on disk as file /archives/CMOVP/stby/1_32210_810397891.arc
    archived log for thread 1 with sequence 32211 is already on disk as file /archives/CMOVP/stby/1_32211_810397891.arc
    archived log for thread 1 with sequence 32212 is already on disk as file /archives/CMOVP/stby/1_32212_810397891.arc
    archived log for thread 1 with sequence 32213 is already on disk as file /archives/CMOVP/stby/1_32213_810397891.arc
    archived log for thread 1 with sequence 32214 is already on disk as file /archives/CMOVP/stby/1_32214_810397891.arc
    archived log for thread 1 with sequence 32215 is already on disk as file /archives/CMOVP/stby/1_32215_810397891.arc
    archived log for thread 1 with sequence 32216 is already on disk as file /archives/CMOVP/stby/1_32216_810397891.arc
    archived log for thread 1 with sequence 32217 is already on disk as file /archives/CMOVP/stby/1_32217_810397891.arc
    archived log for thread 1 with sequence 32218 is already on disk as file /archives/CMOVP/stby/1_32218_810397891.arc
    archived log for thread 1 with sequence 32219 is already on disk as file /archives/CMOVP/stby/1_32219_810397891.arc
    archived log for thread 1 with sequence 32220 is already on disk as file /archives/CMOVP/stby/1_32220_810397891.arc
    archived log for thread 1 with sequence 32221 is already on disk as file /archives/CMOVP/stby/1_32221_810397891.arc
    archived log for thread 1 with sequence 32222 is already on disk as file /archives/CMOVP/stby/1_32222_810397891.arc
    archived log for thread 1 with sequence 32223 is already on disk as file /archives/CMOVP/stby/1_32223_810397891.arc
    archived log file name=/archives/CMOVP/stby/1_32196_810397891.arc thread=1 sequence=32196
    released channel: prm1
    released channel: stby1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 05/14/2013 01:11:33
    RMAN-05501: aborting duplication of target database
    RMAN-03015: error occurred in stored script Memory Script
    ORA-00283: recovery session canceled due to errors
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/archives/CMOVP/stby/1_32196_810397891.arc'
    ORA-00283: recovery session canceled due to errors
    ORA-00354: corrupt redo log block header
    ORA-00353: log corruption near block 2048 change 1638686297 time 05/13/2013 22:42:03
    ORA-00334: archived log: '/archives/CMOVP/stby/1_32196_810397891.arc'
    ################# What I did: ################################
    rman target /
    RMAN>catalog archivelog '/archives/CMOVP/stby/1_32196_810397891.arc';
    Then I connected to the target and auxiliary again: rman target / catalog rman/rman@rman auxiliary
    and I ran the last contents of the failing memory script: RMAN> run
    set until scn 1638816629;
    recover
    standby
    clone database
    delete archivelog
    And the DB started the recovery process and my standby completed the recovery very well, with the message "Recovery finished/terminated/completed".
    Then I could configure Data Guard.
    And I checked the process and the log apply was on and running fine, no gaps, perfect!
    How?! Just by cataloging a "supposedly corrupted" archived log!
    If you have any ideas, it would be great to understand this.
    Rgds
    Carlos

    Hi,
    Can you change the standby database archive destination from /archives/CMOVP/stby/ to another disk?
    I think the problem is with your disk.
    Mahir
    P.S. I remember your earlier thread, too.
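
    As a hedged aside on checking whether an archived log really is unreadable before re-cataloging it (the sequence number is the one from the errors above):
    RMAN> BACKUP VALIDATE ARCHIVELOG SEQUENCE 32196;
    -- reads the log and reports corrupt blocks without writing a backup piece
    RMAN> CROSSCHECK ARCHIVELOG ALL;
    -- reconciles the controlfile/catalog records with what is actually on disk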
