Old corrupted blocks in nologging mode.

Dear All,
I am facing a block corruption issue. When I query v$database_block_corruption it reports a total of 511 corrupted blocks; one row of the result is shown below. These are old corruptions and I do not have a backup that covers them.
But when I validate the backup it shows no errors. How can I resolve this?
FILE#     BLOCK#  BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
   11    3638851       6          5.4254E+12  NOLOGGING

RMAN BACKUP VALIDATE checks whether any active database blocks are corrupted. To my knowledge the corrupted blocks you have are not being used by the database for any read/write, which is why RMAN backup validation does not report them. DBVERIFY is the older tool; if RMAN did not find the corruption, DBVERIFY will generally not report anything either. So as long as RMAN does not report any block corruption, you should not face a database outage.
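If you want to double-check, a minimal sketch (assuming 10g or later) is to run a logical validation and then re-query the corruption view:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
SQL> SELECT file#, block#, blocks, corruption_change#, corruption_type
     FROM v$database_block_corruption
     ORDER BY file#, block#;

Note that blocks marked NOLOGGING cannot be repaired from redo or from backups taken after the direct-path load that produced them; the usual approach is to map them to their segments (see the dba_extents query further down this page), rebuild or recreate those segments, and then consider ALTER DATABASE FORCE LOGGING to prevent a recurrence.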

Similar Messages

  • ORA-19566: exceeded limit of 999 corrupt blocks for file

    Hi All,
    I am new to Oracle RMAN & RAC Administration. Looking for your support to solve the below issue.
    We have two disk groups - +ETDATA and +ETFLASH - in our 3-node RAC environment, in which RMAN is configured on node 2 to take backups. We do not have an RMAN catalog; RMAN fetches its information from the control file.
    Recently, the backup failed with the error ORA-19566: exceeded limit of 999 corrupt blocks for file +ETFLASH/datafile/users.6187.802328091.
    We found that datafiles are present in both disk groups, and from the control file information we learned that the datafiles in +ETDATA are currently in use while +ETFLASH holds old datafiles.
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name LABWRKT are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/CONTROLFILE/snapcf_LABWRKT.f';
    The above configuration shows that the SNAPSHOT CONTROLFILE is pointing to +ETFLASH. So I changed the configuration so that the SNAPSHOT CONTROLFILE points to '+ETDATA/controlfile/snapcf_labwrkt.f'. At the end of the backup the snapshot file was created in +ETDATA, and I was expecting it to be a copy of the control file in use, which has the datafiles located in +ETDATA. But the backup was still pointing to the old datafiles in +ETFLASH. Since we don't have an RMAN catalog, a resync is not possible either.
    When I ran it manually, it was successful without any error and pointed to the existing datafiles.
    RMAN> backup database plus archivelog all;
    I hope the issue will get resolved if RMAN points only to the datafiles present in +ETDATA. If I am correct, please let me know how I can make that happen. Also, please explain why the newly created snapshot file does not reflect the existing control file information.
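    One thing worth checking (a sketch only - it assumes the old +ETFLASH files are registered in the control file as RMAN datafile copies, which the post does not confirm): list the copies RMAN knows about and, if the stale ones show up there, uncatalog them so the backup no longer touches them.
    RMAN> LIST COPY OF DATABASE;
    RMAN> CHANGE DATAFILECOPY '+ETFLASH/datafile/users.6187.802328091' UNCATALOG;
    RMAN> REPORT SCHEMA;
    REPORT SCHEMA should then list only the +ETDATA files as the current datafiles of LABWRKT.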

  • Corrupt block detected in control file

    Hi All,
    I have a scenario where I have set up active/standby RACs and successfully have archived redo logs being applied to the standby - everything was OK.
    Versions: Oracle 11g R2 on RHEL 5
    Scenario 1:
    Redo log application on the standby works perfectly when I do not create our software application tables (using SQL scripts) on the primary until AFTER the Data Guard/RAC steps are completed successfully.
    Scenario 2:
    Redo log application does not work when I run our SQL scripts BEFORE taking the RMAN backup of the primary used to duplicate the standby.
    Everything comes up on the standby after the RMAN duplicate and archive logs get transferred, but now they do not get applied.
    I see ORA-00227: corrupt block detected in control file: (block 1, # blocks 1) in the alert log when I put the standby in recovery mode.
    My theory is that somehow our SQL scripts are breaking my RMAN backups when I run them before creating the RMAN backup of the primary to load on the standby. I just need someone to advise whether this is a possibility from their experience; if so, I will contact Oracle Support to investigate further. This is my first time working on RAC, DG, etc.
    Thanks

    Hi All,
    I've tried to upgrade Oracle to 11.2.0.2 to fix this issue - which I can no longer remember!
    I managed to complete the upgrade on the standby node (after having to reinstall due to a hostname change).
    Now, trying the active node, I see the following error during the grid upgrade when I execute rootupgrade.sh:
    Now product-specific root actions will be performed.
    Using configuration parameter file: /opt/app/11.2.0/grid2/crs/install/crsconfig_params
    Creating trace directory
    Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
    The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
    Apply the patches for these bugs in the 11.2.0.1 crs home and then run rootupgrade.sh
    /opt/app/11.2.0/grid2/perl/bin/perl -I/opt/app/11.2.0/grid2/perl/lib -I/opt/app/11.2.0/grid2/crs/install /opt/app/11.2.0/grid2/crs/install/rootcrs.pl execution failed
    I have to download the patch for bug 9413827 from MOS, somehow apply it to the old 11.2.0.1 grid home, and then run rootupgrade.sh.

  • FullOffline Backup - ORA-19566: exceeded limit of 0 corrupt blocks for file

    Dear SAP gurus,
    I am getting an error from the DBA Planning Calendar every time the job for "Full Offline backup" is run, and as you can see from the log, it is always on the same file "oracle/SHD/sapdata4/sr3_16/sr3.data16".
    The oracle error is the following:
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    I found SAP Note 969192 - RMAN Backup of SYSTEM tablespace terminates with ORA-19566,
    but it does not apply because it covers the SYSTEM tablespace and not PSAPSR3.
    Please find below the log:
    BR0051I BRBACKUP 7.00 (46)
    BR0055I Start of database backup: begomwsv.ffd 2011-08-17 10.01.37
    BR0484I BRBACKUP log file: /oracle/SHD/sapbackup/begomwsv.ffd
    BR0477I Oracle pfile /oracle/SHD/102_64/dbs/initSHD.ora created from spfile /oracle/SHD/102_64/dbs/spfileSHD.ora
    BR0101I Parameters
    Name                           Value
    oracle_sid                     SHD
    oracle_home                    /oracle/SHD/102_64
    oracle_profile                 /oracle/SHD/102_64/dbs/initSHD.ora
    sapdata_home                   /oracle/SHD
    sap_profile                    /oracle/SHD/102_64/dbs/initSHD.sap
    backup_mode                    FULL
    backup_type                    offline_force
    backup_dev_type                disk
    backup_root_dir                /mnt/backup/oracle/SHD
    compress                       no
    disk_copy_cmd                  rman
    cpio_disk_flags                -pdcu
    exec_parallel                  0
    rman_compress                  no
    system_info                    shdadm/orashd eccdev01 Linux 2.6.16.60-0.87.1-smp #1 SMP Wed May 11 11:48:12 UTC 2011 x86_64
    oracle_info                    SHD 10.2.0.4.0 8192 17654 1114483454 eccdev01 UTF8 UTF8
    sap_info                       700 SAPSR3 0002LK0003SHD0011Y01548735220015Maintenance_ORA
    make_info                      linuxx86_64 OCI_102 Jan 29 2010
    command_line                   brbackup -u / -jid FLLOF20110817100136 -c force -t offline_force -m full -p initSHD.sap
    BR0116I ARCHIVE LOG LIST before backup for database instance SHD
    Parameter                      Value
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            /oracle/SHD/oraarch/SHDarch
    Archive format                 %t_%s_%r.dbf
    Oldest online log sequence     17651
    Next log sequence to archive   17654
    Current log sequence           17654            SCN: 1114483454
    Database block size            8192             Thread: 1
    Current system change number   1114501246       ResetId: 664011854
    BR0118I Tablespaces and data files
    BR0202I Saving /oracle/SHD/sapdata3/sr3_15/sr3.data15
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data15 ...
    #FILE..... /oracle/SHD/sapdata3/sr3_15/sr3.data15
    #SAVED.... /mnt/backup/oracle/SHD/begomwsv/sr3.data15  #1/15
    BR0280I BRBACKUP time stamp: 2011-08-17 10.28.42
    BR0063I 15 of 48 files processed - 44100.117 of 121180.346 MB done
    BR0204I Percentage done: 36.39%, estimated end time: 11:15
    BR0001I ******************________________________________
    BR0202I Saving /oracle/SHD/sapdata4/sr3_16/sr3.data16
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data16 ...
    BR0278E Command output of 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog':
    Recovery Manager: Release 10.2.0.4.0 - Production on Wed Aug 17 10:28:42 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    RMAN>
    RMAN> connect target *
    connected to target database: SHD (DBID=1683093070, not open)
    using target database control file instead of recovery catalog
    RMAN> *end-of-file*
    RMAN>
    host command complete
    RMAN> 2> 3> 4> 5> 6>
    allocated channel: dsk
    channel dsk: sid=223 devtype=DISK
    executing command: SET NOCFAU
    Starting backup at 17-AUG-11
    channel dsk: starting datafile copy
    input datafile fno=00019 name=/oracle/SHD/sapdata4/sr3_16/sr3.data16
    released channel: dsk
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on dsk channel at 08/17/2011 10:30:30
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    RMAN>
    Recovery Manager complete.
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0279E Return code from 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog': 1
    BR0536E RMAN call for database instance SHD failed
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0506E Full database backup (level 0) using RMAN failed
    BR0222E Copying /oracle/SHD/sapdata4/sr3_16/sr3.data16 to/from /mnt/backup/oracle/SHD/begomwsv failed due to previous errors
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0307I Shutting down database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0308I Shutdown of database instance SHD successful
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0304I Starting and opening database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.47
    BR0305I Start and open of database instance SHD successful
    Do you guys have any idea on how to solve this issue??
    Thanks in advance, Marc

    Hi,
    I am getting an error from the DBA Planning Calendar every time the job ...
    So when was your last successful backup of this datafile? Check whether it is still available.
    If that was some time ago, and you may currently be without any backup, try a backup without RMAN at once,
    so that you have at least something to work with in case you get additional errors right now.
    Then you need to find out which object is affected. You are on the right track already. You need the statement
    that goes to dba_extents to check which object the block belongs to (see the example query after this reply).
    Has the DB been recovered recently, so that the block might belong to an index created with nologging?
    (This could be the case on BW systems.)
    If the last good backup of that file is still available, and so are the redo logs from that backup up to the current time, you could try to recover that file. But I'd do this only after a good backup without RMAN, and without destroying the original file.
    If the last good backup was an RMAN backup, you can do a verify restore of that datafile in advance, to check that the corruption is really not inside the file to be restored.
    Check out the -w (verify) option of brrestore first, to understand how it works.
    (I am not sure whether this is already available in version 7.00; maybe you need to switch to 7.10 or 7.20.)
    brrestore -c -m /oracle/SHD/sapdata4/sr3_16/sr3.data16  -b xxxxxxxx.ffr -w only_rmv
    You should do a dbv check of that file as well, to see whether it gives more information, i.e. whether more blocks are
    affected. RMAN stops right after the first corruption, but usually you have a couple of those in a row, especially if they are
    zeroed ones. (This one would also work with version 7.00 brtools.)
    brbackup -c -u / -t online -m /oracle/SHD/sapdata4/sr3_16/sr3.data16 -w only_dbv
    Good luck.
    Volker
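    For reference, a minimal sketch of the dba_extents lookup Volker refers to (substitute &file_no and &block_no with the file and block number from the ORA-19566 or DBVERIFY output):
    SQL> SELECT owner, segment_name, segment_type, partition_name
         FROM dba_extents
         WHERE file_id = &file_no
           AND &block_no BETWEEN block_id AND block_id + blocks - 1;
    If this returns no rows, the block most likely lies in free space (check dba_free_space with the same predicates).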

  • Logically corrupted blocks in standby

    Hi
    Assume I have a primary database and standby database.
    Accidentally, some of the objects (indexes and tables) are in nologging mode in the primary database.
    Force logging is not set.
    When I scan the datafiles on the standby I realize that some datafiles are logically corrupted because of this.
    How can I get rid of these corrupted blocks?
    If I rebuild the indexes with the logging option and recreate the tables as logging,
    will it solve the problem? Or is there any other suggestion?
    Many thanks

    Your primary should run in force logging mode (ALTER DATABASE FORCE LOGGING); the object-level setting is then ignored for direct path operations. For the existing damage, you can apply an incremental backup to the standby to catch up (or just recreate the standby, which might be as quick, depending on volumes).
    Niall Litchfield
    http://www.orawin.info/
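    A sketch of the two steps Niall describes, using the usual roll-forward-by-incremental procedure (11g syntax; the SCN, backup location, and commands shown are placeholders/examples to adapt to your own environment):
    -- on the primary, prevent future NOLOGGING corruption
    SQL> ALTER DATABASE FORCE LOGGING;
    -- on the standby, stop managed recovery and find the SCN to roll forward from
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> SELECT MIN(checkpoint_change#) FROM v$datafile_header;
    -- on the primary, take an incremental backup starting at that SCN
    RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/std_roll_%U';
    -- copy the pieces to the standby, then on the standby
    RMAN> CATALOG START WITH '/tmp/std_roll';
    RMAN> RECOVER DATABASE NOREDO;
    -- finally restart managed recovery on the standby
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;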

  • Oracle Corrupted Blocks .

    Hi ,
    We are using R/3 4.7 with Oracle 9.2. Now I want to check whether any corrupted blocks are present on our server - can you tell me how I can check? I want to check for corrupted blocks while the database is online. Shall I use DBVERIFY?
    Regards
    Palani Selvan

    Palani,
    Yes, you can use DBVERIFY. You can run it from DB13 --> Verify Database.
    Check below note for details
    Note 23345 - Consistency check of ORACLE database
    SAP Note 354293 - DBVerify reports corrupt block in freespace area
    Note 365481 - Block corruptions
    Note 923919 - Advanced Oracle block checking features
    Just for your information: in a BW system, if you get a warning like "DBV-00200: Block, dba 97446680, already marked corrupted", it occurs because secondary indexes of InfoCube fact
    tables are set up in NOLOGGING mode as a standard.
    See below notes for more information                                                      
    Note 442763 - Avoid NOLOGGING during the index structure (Oracle
    Note 547464 - Nologging Option when creating indexes
    Note 849485 - Reconstruction of the NOLOGGING indexes after recovery
    Re: SAP_INFOCUBE_INDEXES_REPAIR
    Hope it helps.
    Thanks,
    Sushil
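    For reference, a minimal DBVERIFY call from the command line looks like this (the datafile path is only an example; blocksize must match your database block size, 8192 in most SAP systems):
    dbv file=/oracle/<SID>/sapdata1/sr3_1/sr3.data1 blocksize=8192 logfile=dbv_sr3_data1.log
    DBVERIFY reads the datafile directly, so it can be run while the database is online.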

  • Corrupt block methods

    Hi
    1-)
    If the database is in noarchivelog mode I cannot issue:
    RMAN>
    BACKUP VALIDATE DATABASE;
    Since this command just checks whether there are any corrupted blocks and does not take a backup, why does it want archivelog mode?
    2-) If I want to search for corrupted blocks, is there any difference between using
    RMAN> BACKUP VALIDATE DATABASE;
    or
    dbverify
    or
    analyze ... validate structure ;
    P.S.: noarchivelog mode.

    I read the link.
    The thing is, I don't want to specify all the datafiles one by one.
    I want to scan all datafiles automatically. What if I have more than 700 datafiles?
    Is that possible with dbv?
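    You don't have to type the dbv calls by hand. A common trick (a sketch, assuming the dbv defaults are acceptable) is to let the database generate one command line per datafile and spool the output to a script:
    SQL> SELECT 'dbv file=' || name || ' blocksize=' || block_size || ' logfile=dbv_' || file# || '.log'
         FROM v$datafile;
    On 11g you could instead run RMAN> VALIDATE DATABASE; which scans every datafile in one command, populates v$database_block_corruption, and is not a backup command.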

  • Need to find the corrupted blocks.

    Hi,
    I am having block corruption (NOLOGGING) in an 11g database. I want to find out whether the corrupted blocks belong to indexes or to specific tables...

    Thanx..
    I am running the query below.
    SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
    , greatest(e.block_id, c.block#) corr_start_block#
    , least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#
    , least(e.block_id+e.blocks-1, c.block#+c.blocks-1)
    - greatest(e.block_id, c.block#) + 1 blocks_corrupted
    , null description
    FROM dba_extents e, v$database_block_corruption c
    WHERE e.file_id = c.file#
    AND e.block_id <= c.block# + c.blocks - 1
    AND e.block_id + e.blocks - 1 >= c.block#
    UNION
    SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file#
    , header_block corr_start_block#
    , header_block corr_end_block#
    , 1 blocks_corrupted
    , 'Segment Header' description
    FROM dba_segments s, v$database_block_corruption c
    WHERE s.header_file = c.file#
    AND s.header_block between c.block# and c.block# + c.blocks - 1
    UNION
    SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
    , greatest(f.block_id, c.block#) corr_start_block#
    , least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
    , least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
    - greatest(f.block_id, c.block#) + 1 blocks_corrupted
    , 'Free Block' description
    FROM dba_free_space f, v$database_block_corruption c
    WHERE f.file_id = c.file#
    AND f.block_id <= c.block# + c.blocks - 1
    AND f.block_id + f.blocks - 1 >= c.block#
    order by file#, corr_start_block#;
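    If the rows returned turn out to be indexes (typical for NOLOGGING corruption), the usual fix is simply to rebuild them; the owner and index name below are placeholders taken from the query output:
    SQL> ALTER INDEX <owner>.<index_name> REBUILD;
    For corrupted table blocks there is no such shortcut: the data in those blocks is gone, so the table (or partition) has to be recreated or reloaded, or the blocks skipped with DBMS_REPAIR.SKIP_CORRUPT_BLOCKS. Rows that map only to dba_free_space can generally be ignored.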

  • HT3743 Unlocked 3G blocked in recovery mode

    I have unlocked my old 3G phone and have used a T-Mobile SIM card successfully for over a year, but now the phone won't work and is blocked in recovery mode after I responded to an Apple update message.  Suggestions?

    Unlocked how? If it was not officially unlocked via the carrier, the update has most likely locked it back to the original carrier.

  • Corrupt blocks ???

    During the backup this error appeared - what could be wrong?
    RMAN> RUN{
    2> SHUTDOWN IMMEDIATE
    3> STARTUP MOUNT
    4> ALLOCATE CHANNEL disk1 TYPE DISK FORMAT 'D:\ORACLE - BKP\DF_%d_%s_%t_%U.DBF';
    5> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    6> BACKUP DATABASE PLUS ARCHIVELOG TAG 'BACKUP FULL';
    7> ALTER DATABASE OPEN;
    8> }
    using target database control file instead of recovery catalog
    database closed
    database dismounted
    Oracle instance shut down
    connected to target database (not started)
    Oracle instance started
    database mounted
    Total System Global Area     135338868 bytes
    Fixed Size                      453492 bytes
    Variable Size                109051904 bytes
    Database Buffers              25165824 bytes
    Redo Buffers                    667648 bytes
    allocated channel: disk1
    channel disk1: sid=9 devtype=DISK
    old RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters are successfully stored
    Starting backup at 04/04/06
    specification does not match any archive log in the recovery catalog
    Finished backup at 04/04/06
    Starting backup at 04/04/06
    channel disk1: starting full datafile backupset
    channel disk1: specifying datafile(s) in backupset
    input datafile fno=00001 name=C:\ORACLE\ORADATA\ESTUDO\SYSTEM01.DBF
    input datafile fno=00002 name=C:\ORACLE\ORADATA\ESTUDO\UNDOTBS01.DBF
    input datafile fno=00007 name=C:\ORACLE\ORADATA\ESTUDO\XDB01.DBF
    input datafile fno=00004 name=C:\ORACLE\ORADATA\ESTUDO\INDX01.DBF
    input datafile fno=00006 name=C:\ORACLE\ORADATA\ESTUDO\USERS01.DBF
    input datafile fno=00003 name=C:\ORACLE\ORADATA\ESTUDO\DRSYS01.DBF
    input datafile fno=00005 name=C:\ORACLE\ORADATA\ESTUDO\TOOLS01.DBF
    channel disk1: starting piece 1 at 04/04/06
    released channel: disk1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of backup command at 04/04/2006 13:32:36
    ORA-19566: exceeded limit of 0 corrupt blocks for file C:\ORACLE\ORADATA\ESTUDO\INDX01.DBF

    Alex, this error reports that you have block corruption in your database, and RMAN cannot back up corrupt blocks. You can make RMAN complete the backup with the workaround below, but you should still resolve the corruption itself; as the file name suggests this is an index datafile, you should be able to get rid of the corrupt block fairly easily.
    To tell RMAN to permit corrupt blocks to be backed up, you must use the
    SET MAXCORRUPT command (note that this goes inside the RUN block):
    SET MAXCORRUPT FOR DATAFILE 'C:\ORACLE\ORADATA\ESTUDO\INDX01.DBF' TO n;
    where n is the number of corrupt blocks which will be allowed in the backup.
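    A sketch of how that fits into the original RUN block ("10" is an arbitrary limit; raise it if more corrupt blocks turn up - this only lets the backup complete, it does not repair anything):
    RUN {
      SHUTDOWN IMMEDIATE;
      STARTUP MOUNT;
      ALLOCATE CHANNEL disk1 TYPE DISK FORMAT 'D:\ORACLE - BKP\DF_%d_%s_%t_%U.DBF';
      SET MAXCORRUPT FOR DATAFILE 'C:\ORACLE\ORADATA\ESTUDO\INDX01.DBF' TO 10;
      BACKUP DATABASE PLUS ARCHIVELOG TAG 'BACKUP FULL';
      ALTER DATABASE OPEN;
    }
    Once the backup completes, use the dba_extents query shown earlier on this page to identify the index that owns the corrupt block and rebuild it.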

  • Alter index ... rebuild was failing - getting corrupt blocks

    Database: 9.2.0.5
    OS: HPUX-ia64
    Hello,
    I rebuilt an index reported by UpdateStats in DB13 and got ORA-01114: IO error writing block to file 2045 (block # 319154) and
    ORA-27072: skgfdisp: I/O error in my sqlplus session.
    -> I ran: alter index SAPR3."xxx~0" rebuild online parallel 4 nologging;
    -> The system was up and running.
    After that, DBVerify marked some blocks as corrupt, here is one example:
    BR0398E DBVERIFY detected corrupted blocks in /oracle/XXX/sapdata22/btabi_95/btabi.data95
    We checked all corrupted blocks - all of them are in free space.
    So we fixed that by creating a table with a next extent sized to cover the corrupted blocks.
    I think we did not have enough space in the tablespace where the index is, okay ... and what we also found: the PSAPTEMP datafiles were created as SPARSE files!
    -> PSAPTEMP datafile: 10 GB in the system - 2.5 GB maximum at OS level, no more space.
    But my question is: why am I getting corrupt blocks when an "alter index ... rebuild ..." fails?
    Thank you for your support.
    Regards,
    Markus

    Thank you for the answer.
    I can't find any "cor" entries in the last DBVerify log.
    I think (hope) we don't have any more corrupted blocks in the system.
    We got a response from SAP telling us to update to 9.2.0.8.
    My customer wants to upgrade to Oracle 10 in March/April 2009, so he asked me whether we need the update to 9.2.0.8 ASAP or whether we can wait for the Oracle 10 upgrade.
    But the important question from my customer and me is: why did we get corrupt blocks after the failed alter index rebuild?
    Regards,
    Markus

  • HT6154 Need help!! Got a new phone, was able to download info from the old phone and set up iCloud but now i want to use the old phone as an 'iPod' just for music, when i get into iTunes it now says the old phone is in restoration mode and to restore iPho

    Do I restore the old phone? What should I do?

    it now says the old phone is in restoration mode and to restore iPhone.
    Hmmm...
    That's a doozy. I'll have to think on that for a bit.
    Quite perplexing indeed. A quandary you could say.
    Not really sure what you could try...
    Have you tried anything yet?

  • My macbook pro hard drive went bad and i bought a new one and installed it. Now i have the old corrupted hard drive in my hand and i am looking to recover my files from it, any suggestions please? too bad i never backed them up.


    After you have installed the OSX on the new HDD in your MBP, install the old HDD in an enclosure and connect it to your MBP via USB.  Then try to drag and drop  your data to the new HDD. 
    If this proves unsuccessful, you may look for data recovery software on the Internet.  There will be free trials to see whether it will work or not.  If the trial suggests that it will work, then you will have to purchase the software.
    The last resort is a professional data recovery service that will offer NO guarantees and charge a lot of money.
    As you now can appreciate, backups eliminate such predicaments.
    Ciao.

  • ORACLE 8.0.5 on SuSE 5.3 and 6.0 - Corrupt Block

    I do some heavy loading (Designer 2000) and I get similar
    errors on two different computers with mirrored disks - different
    systems - on each one. So I'd like to exclude hardware problems.
    It's experimental, so I do not run archives.
    Designer on W95 crashes quite often, but this should never lead
    to corrupted data blocks.
    Linux was hanging too, and the disk cache might not have been
    written to disk - could this be a reason?
    Corrupt block relative dba: 0x01003aa8 file=4. blocknum=15016.
    Fractured block found during buffer read
    Data in bad block - type:6. format:2. rdba:0x01003aa8
    last change scn:0x0000.00014914 seq:0x3 flg:0x00
    consistancy value in tail 0x496b0605
    check value in block header: 0x0, check value not calculated
    spare1:0x0, spare2:0x0, spare2:0x0
    I would be happy to get some feedback.

    Please check /var/log/messages first for any Linux errors. It is
    likely that if Linux crashes and cannot sync to disk, there
    might be some corruption problems. For this reason lots of people
    would like to see raw device support, but apparently Linus is not
    willing for some reason...
    I assume some hardware-related problem, though.
    Marcus

  • I've gotten 60 or 70 messages that my iCloud doesn't work and to select preferences.  The message says I will get an email to verify but I never get an email.  I've checked old email, blocked email, and have low filters on emails.  Any ideas?

    I've gotten 60 or 70 messages that my iCloud doesn't work and to select preferences.  The message says I will get an email to verify, but I never get an email.  I've checked old email, blocked email, and have low filters on emails.  Any ideas?

    Hey CarolinQc,
    Thanks for the question. The symptoms you are experiencing can result from an Apple ID address that has been changed prior to signing out of all of the services. Before you change the email address associated with your Apple ID, you’ll want to sign out of all services to avoid this. As a fix, you can temporarily change your Apple ID back to the old address:
    iOS 7: If you're asked for the password to your previous Apple ID when signing out of iCloud
    http://support.apple.com/kb/TS5223
    Change your Apple ID temporarily
    If signing out and back in to iMessage or FaceTime didn't help, try these steps:
    1. Change your Apple ID to the Apple ID you used previously. You shouldn't need to verify the email address.
    2. Go to Settings > iCloud. Complete these steps only if the Find My [Device] setting is turned on:
              - Scroll down and tap Delete Account, then tap Delete to confirm.
              - Tap “Keep on My [Device]” or “Delete from My [Device].” In either case, your data remains in iCloud and will be updated on your device when you sign in to iCloud again.
              - Enter the password for your previous Apple ID.
    3. Change your Apple ID to the new email address that you want to use. You'll need to verify the email address.
    4. Return to Settings > iCloud and sign in with your new Apple ID.
    Additional Information:
    Apple ID: What to do after you change your Apple ID
    http://support.apple.com/kb/HT5796
    Thanks,
    Matt M.
