RECOVERY OF THREAD 1 STUCK AT BLOCK 54 OF FILE 2

Hi all,
I have a problem. My database is Oracle9i Release 2. When I try to start it up, it hangs at 'shutdown is initializing..', so I forced it down with SHUTDOWN ABORT. When I retry the startup, it reports 'RECOVERY OF THREAD 1 STUCK AT BLOCK 54 OF FILE 2'. I found a similar forum thread and applied the commands below:
SQL> recover file 2 block 54
ORA-00905: missing keyword
Then I tried:
SQL> alter database recover datafile 2 blok 54;
alter database recover datafile 2 blok 54
ERROR at line 1:
ORA-00905: missing keyword
FYI, I don't have any backups because this DB is only on my personal laptop, and it is not running in ARCHIVELOG mode. Finally I just ran:
SQL> recover database
and the result was as follows:
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [8388662], [1], [135],
[7548], [16], [], []
ORA-10567: Redo is inconsistent with data block (file# 2, block# 54)
ORA-10564: tablespace UNDOTBS1
ORA-01110: data file 2: 'C:\ORACLE\ORADATA\CGCDEVP\UNDOTBS01.DBF'
ORA-10560: block type 'KTU UNDO BLOCK'
can anybody help me?
Thanks all
Best regards,
Nonie

ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [8388662], [1], [135],[7548], [16], [], []
ORA-10567: Redo is inconsistent with data block (file# 2, block# 54)
You got an ORA-00600 error, which goes by a special name: "Oracle bug".
Did you try the solution in this thread: Recovery problem?
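Since the corrupt block is in the UNDOTBS1 datafile, there is no backup, and the database runs in NOARCHIVELOG mode, a last-resort path that is often posted for this exact situation is to take the undo datafile offline, open with manual undo management, and rebuild the undo tablespace. The sketch below is only a hedged outline, not a supported procedure: the pfile name, the new tablespace name, datafile name and size are assumptions, it can lose uncommitted transactions, and it may still fail (for example with ORA-01548 if active undo segments remain), so try it only on a throwaway database like this laptop instance, after taking a cold copy of the whole ORADATA directory.
First edit the pfile (initCGCDEVP.ora is an assumed name) and set undo_management = MANUAL, then:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE DATAFILE 'C:\ORACLE\ORADATA\CGCDEVP\UNDOTBS01.DBF' OFFLINE DROP;
SQL> ALTER DATABASE OPEN;
SQL> DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;
SQL> CREATE UNDO TABLESPACE UNDOTBS2
     DATAFILE 'C:\ORACLE\ORADATA\CGCDEVP\UNDOTBS02.DBF' SIZE 200M;
Afterwards set undo_management back to AUTO and undo_tablespace to UNDOTBS2, and restart the instance.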

Similar Messages

  • Recovery of thread 1 stuck at Block 113713 of file 643

    Hi,
    A day ago we had a hardware problem with our Solaris machine and our 11g (11.1.0.7.0) database.
    Our server crashed with:
    ORA-00376: file 643 cannot be read at this time
    The above error was repeated for another file.
    We checked the file status and there was a problem accessing the file (an I/O error when we ran the ls command).
    Then we managed to restore that mount point and started the database, but it failed to open with the error below.
    RECOVERY OF THREAD 1 STUCK AT BLOCK 113713 OF FILE 643
    Thu Jul 21 16:45:41 2011
    RECOVERY OF THREAD 1 STUCK AT BLOCK 42896 OF FILE 644
    We checked V$RECOVER_FILE and no file was mentioned. We checked V$DATAFILE and all files had STATUS = ONLINE and ENABLED = READ WRITE. Then we manually recovered the failed datafiles with:
    ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs03.dbf'
    ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs04.dbf'
    It succeeded with 'Media Recovery Complete' and we were then able to open the database.
    Even though V$RECOVER_FILE and V$DATAFILE did not flag the two problematic datafiles, why did we have to do media recovery?
    Any ideas?
    Regards!
    Edited by: Nitin Joshi on Jul 22, 2011 10:23 AM

    Yes, Hemant.
    We were all puzzled by that scenario.
    Posting the (incomplete) alert log stack from when we tried to open the database.
    MMAN started with pid=28, OS id=1277
    Thu Jul 21 16:38:21 2011
    DBW0 started with pid=2, OS id=1279
    Thu Jul 21 16:38:21 2011
    DBW1 started with pid=3, OS id=1281
    Thu Jul 21 16:38:21 2011
    DBW2 started with pid=32, OS id=1283
    Thu Jul 21 16:38:22 2011
    DBW3 started with pid=5, OS id=1285
    Thu Jul 21 16:38:22 2011
    LGWR started with pid=7, OS id=1287
    Thu Jul 21 16:38:22 2011
    CKPT started with pid=36, OS id=1289
    Thu Jul 21 16:38:22 2011
    SMON started with pid=40, OS id=1291
    Thu Jul 21 16:38:22 2011
    RECO started with pid=44, OS id=1293
    Thu Jul 21 16:38:22 2011
    MMON started with pid=48, OS id=1295
    Thu Jul 21 16:38:22 2011
    MMNL started with pid=52, OS id=1297
    DISM started, OS id=1299
    ORACLE_BASE from environment = /data87/ora11g/app/oracle
    Thu Jul 21 16:38:27 2011
    ALTER DATABASE   MOUNT
    Setting recovery target incarnation to 1
    Successful mount of redo thread 1, with mount id 2558427203
    Database mounted in Exclusive Mode
    Lost write protection disabled
    Completed: ALTER DATABASE   MOUNT
    Thu Jul 21 16:38:31 2011
    ALTER DATABASE OPEN
    Beginning crash recovery of 1 threads
    parallel recovery started with 15 processes
    Started redo scan
    Completed redo scan
    13775 redo blocks read, 631 data blocks need recovery
    Started redo application at
    Thread 1: logseq 112616, block 157510
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 112616 Reading mem 0
      Mem# 0: /data1/irsdata/rlog1irs.dbf
      Mem# 1: /data3/irsdata/rlog11irs.dbf
    Thu Jul 21 16:38:34 2011
    RECOVERY OF THREAD 1 STUCK AT BLOCK 113713 OF FILE 643
    Thu Jul 21 16:38:34 2011
    RECOVERY OF THREAD 1 STUCK AT BLOCK 42896 OF FILE 644
    Completed redo application of 1.07MB
    Thu Jul 21 16:38:48 2011
    Non critical error ORA-48913 caught while writing to trace file "/data87/ora11g/app/diag/rdbms/irs/irs/trace/irs_p000_1307.trc"
    Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached
    Writing to the above trace file is disabled for now on...
    Thu Jul 21 16:38:48 2011
    Non critical error ORA-48913 caught while writing to trace file "/data87/ora11g/app/diag/rdbms/irs/irs/trace/irs_p011_1329.trc"
    Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached
    Writing to the above trace file is disabled for now on...
    Thu Jul 21 16:39:03 2011
    Then, once we issued the command below, the recovery worked. Here is the alert log for the recovery command:
    ALTER DATABASE   MOUNT
    Setting recovery target incarnation to 1
    Successful mount of redo thread 1, with mount id 2558426450
    Database mounted in Exclusive Mode
    Lost write protection disabled
    Completed: ALTER DATABASE   MOUNT
    Thu Jul 21 17:20:18 2011
    ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs03.dbf' 
    Media Recovery Start
    Fast Parallel Media Recovery NOT enabled
    parallel recovery started with 15 processes
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 112616 Reading mem 0
      Mem# 0: /data1/irsdata/rlog1irs.dbf
      Mem# 1: /data3/irsdata/rlog11irs.dbf
    Media Recovery Complete (irs)
    Completed: ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs03.dbf' 
    Thu Jul 21 17:21:00 2011
    ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs04.dbf' 
    Media Recovery Start
    Fast Parallel Media Recovery NOT enabled
    parallel recovery started with 15 processes
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 112616 Reading mem 0
      Mem# 0: /data1/irsdata/rlog1irs.dbf
      Mem# 1: /data3/irsdata/rlog11irs.dbf
    Media Recovery Complete (irs)
    Completed: ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs04.dbf'
    It seems it applied all the data from the online redo log. Shouldn't that have been automatic, without requiring manual media recovery?
    Regards!
    Edited by: Nitin Joshi on Jul 22, 2011 10:46 AM
    some typos
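    For anyone hitting the same 'stuck at block' message, a hedged way to see why crash recovery needs help even when V$RECOVER_FILE is empty is to compare the datafile headers against the control file, and then recover only the flagged files from the mounted instance. This is just a sketch; the file numbers come from the errors above and the column list is illustrative:
    SQL> SELECT file#, status, error, fuzzy, checkpoint_change#
         FROM   v$datafile_header
         WHERE  file# IN (643, 644);
    SQL> SELECT file#, online_status, error, change#
         FROM   v$recover_file;
    -- if a header shows an error or lags the others, recover that file while mounted
    SQL> ALTER DATABASE RECOVER DATAFILE '/data/irsdata/undodbs03.dbf';
    A header-level problem (for example FUZZY = YES or a stale checkpoint after the I/O errors) can explain why media recovery was still required even though the control-file views looked clean.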

  • ORA-01172: recovery of thread 1 stuck at block 1340 of file 2

    Database version: Oracle 10g Release 10.2
    OS: Windows XP SP2
    Scenario: trying to open the database.
    Error:
    SQL> startup open;
    ORACLE instance started.
    Total System Global Area 272629760 bytes
    Fixed Size 1248476 bytes
    Variable Size 100664100 bytes
    Database Buffers 163577856 bytes
    Redo Buffers 7139328 bytes
    Database mounted.
    ORA-01172: recovery of thread 1 stuck at block 1340 of file 2
    ORA-01151: use media recovery to recover block, restore backup if needed
    Please advise.

    SQL> select name,status,enabled from v$datafile where file#=2;
    NAME
    STATUS ENABLED
    D:\ORADATA\ORCL\UNDOTBS01.DBF
    ONLINE READ WRITE
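    A hedged sketch of the media recovery that ORA-01151 is asking for, using the datafile shown in the query above. Run it with the database mounted; in a crash-recovery case like this the online redo logs normally supply the needed changes, so no restore from backup is required unless the block is physically damaged:
    SQL> STARTUP MOUNT;
    SQL> RECOVER DATAFILE 'D:\ORADATA\ORCL\UNDOTBS01.DBF';
    SQL> ALTER DATABASE OPEN;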

  • ORA-01172: Recovery of thread 1 stuck at...

    My db got corrupted, and I'm having the following issue when opening it:
    ORA-01172: recovery of thread 1 stuck at block 31823 of file 2
    ORA-01151: use media recovery to recover block, restore backup if needed
    I did the following, but am now stuck. I accidentally entered a newline when I was supposed to specify the log file (I guess?). Any help would be great. Thanks!
    SQL> recover database until cancel;
    ORA-00279: change 12390169168251 generated at 06/26/2012 09:41:48 needed for
    thread 1
    ORA-00289: suggestion :
    C:\APP\GRExxxxxxx\FLASH_RECOVERY_AREA\MYDB\ARCHIVELOG\2012_06_27\O1_MF_1_1
    27_%U_.ARC
    ORA-00280: change 12390169168251 for thread 1 is in sequence #127
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log
    'C:\APP\GRExxxxxxx\FLASH_RECOVERY_AREA\MYDB\ARCHIVELOG\2012_06_27\O1_MF_1_
    127_%U_.ARC'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-10879: error signaled in parallel recovery slave
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'C:\APP\GRExxxxxxx\ORADATA\MYDB\SYSTEM01.DBF'

    Hi,
    Specify the logs it requests to complete the recovery (it might ask for an online redo log, depending on your crash/SCN). If you have no archives left to apply, cancel the recovery and open the database with RESETLOGS.
    - Pavan Kumar N
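    A hedged sketch of what that looks like in SQL*Plus. When the suggested archived log does not exist, you can paste the path of an online redo log group at the 'Specify log' prompt, or type CANCEL once nothing more applies and then open with RESETLOGS. The redo log path below is an assumption; use the members listed in V$LOGFILE:
    SQL> SELECT member FROM v$logfile;
    SQL> RECOVER DATABASE UNTIL CANCEL;
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    C:\APP\GRExxxxxxx\ORADATA\MYDB\REDO01.LOG
    ...
    CANCEL
    SQL> ALTER DATABASE OPEN RESETLOGS;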

  • Recovery is repairing media corrupt block x of file x in standby alert log

    Hi,
    Oracle version: 8.1.7.0.0
    OS version: Solaris 5.9
    We have an Oracle 8i primary and a standby database. I am getting these errors in the standby alert log file:
    Thu Aug 28 22:48:12 2008
    Media Recovery Log /oratranslog/arch_1_1827391.arc
    Thu Aug 28 22:50:42 2008
    Media Recovery Log /oratranslog/arch_1_1827392.arc
    bash-2.05$ tail -f alert_pindb.log
    Recovery is repairing media corrupt block 991886 of file 179
    Recovery is repairing media corrupt block 70257 of file 184
    Recovery is repairing media corrupt block 70258 of file 184
    Recovery is repairing media corrupt block 70259 of file 184
    Recovery is repairing media corrupt block 70260 of file 184
    Recovery is repairing media corrupt block 70261 of file 184
    Thu Aug 28 22:48:12 2008
    Media Recovery Log /oratranslog/arch_1_1827391.arc
    Thu Aug 28 22:50:42 2008
    Media Recovery Log /oratranslog/arch_1_1827392.arc
    Recovery is repairing media corrupt block 500027 of file 181
    Recovery is repairing media corrupt block 500028 of file 181
    Recovery is repairing media corrupt block 500029 of file 181
    Recovery is repairing media corrupt block 500030 of file 181
    Recovery is repairing media corrupt block 500031 of file 181
    Recovery is repairing media corrupt block 991837 of file 179
    Recovery is repairing media corrupt block 991838 of file 179
    How can I resolve this?
    Thanks
    Prakash
    Edited by: user612485 on Aug 28, 2008 10:53 AM

    Dear Satish Kandi,
    We recently created an index on one table with the NOLOGGING option; I think that is why I am getting this error.
    If I run the dbv utility on the files shown in the alert log file, I get the following results:
    bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx055.dbf blocksize=4096
    DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:18:27 2008
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx053.dbf
    Block Checking: DBA = 751593895, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 751593896, Block Type =
    .DBVERIFY - Verification complete
    Total Pages Examined : 1048576
    Total Pages Processed (Data) : 0
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 1036952
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 7342
    Total Pages Empty : 4282
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx053.dbf blocksize=4096
    DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:23:12 2008
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx054.dbf
    Block Checking: DBA = 759492966, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 759492967, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 759492968, Block Type =
    .DBVERIFY - Verification complete
    Total Pages Examined : 1048576
    Total Pages Processed (Data) : 0
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 585068
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 8709
    Total Pages Empty : 454799
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx054.dbf blocksize=4096
    DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:32:28 2008
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx055.dbf
    Block Checking: DBA = 771822208, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 771822209, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 771822210, Block Type =
    .DBVERIFY - Verification complete
    Total Pages Examined : 1048576
    Total Pages Processed (Data) : 0
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 157125
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 4203
    Total Pages Empty : 887248
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    My doubts are:
    1. If I drop the index and recreate it with the LOGGING option, will this error stop appearing in the alert log file?
    2. If I activate the standby database in the future, will it open without any errors?
    Thanks
    Prakash
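    A hedged sketch of how one might confirm, on the primary, which segments those corrupt blocks belong to and recreate the affected NOLOGGING indexes so that new corruption stops being shipped to the standby. The DBA value comes from the dbv output above; the substitution variables and the index name are placeholders:
    -- decode a DBA reported by dbv into file# and block#
    SQL> SELECT dbms_utility.data_block_address_file(751593895)  AS file#,
                dbms_utility.data_block_address_block(751593895) AS block#
         FROM   dual;
    -- map file#/block# to a segment
    SQL> SELECT owner, segment_name, segment_type
         FROM   dba_extents
         WHERE  file_id = &file_no
         AND    &block_no BETWEEN block_id AND block_id + blocks - 1;
    -- rebuild the index with logging so future redo covers it
    SQL> ALTER INDEX <owner>.<index_name> REBUILD LOGGING;
    Note that rebuilding only prevents new NOLOGGING changes from arriving as corrupt blocks; blocks already marked corrupt on the standby generally require refreshing the affected datafiles from the primary.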

  • FullOffline Backup - ORA-19566: exceeded limit of 0 corrupt blocks for file

    Dear SAP gurus,
    I am getting an error from the DBA Planning Calendar every time the "Full Offline backup" job runs, and as you can see from the log it is always on the same file: /oracle/SHD/sapdata4/sr3_16/sr3.data16.
    The oracle error is the following:
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    I found SAP Note 969192 - RMAN Backup of SYSTEM tablespace terminates with ORA-19566,
    but it does not apply because it is about the SYSTEM tablespace, not PSAPSR3.
    Please find below the log:
    BR0051I BRBACKUP 7.00 (46)
    BR0055I Start of database backup: begomwsv.ffd 2011-08-17 10.01.37
    BR0484I BRBACKUP log file: /oracle/SHD/sapbackup/begomwsv.ffd
    BR0477I Oracle pfile /oracle/SHD/102_64/dbs/initSHD.ora created from spfile /oracle/SHD/102_64/dbs/spfileSHD.ora
    BR0101I Parameters
    Name                           Value
    oracle_sid                     SHD
    oracle_home                    /oracle/SHD/102_64
    oracle_profile                 /oracle/SHD/102_64/dbs/initSHD.ora
    sapdata_home                   /oracle/SHD
    sap_profile                    /oracle/SHD/102_64/dbs/initSHD.sap
    backup_mode                    FULL
    backup_type                    offline_force
    backup_dev_type                disk
    backup_root_dir                /mnt/backup/oracle/SHD
    compress                       no
    disk_copy_cmd                  rman
    cpio_disk_flags                -pdcu
    exec_parallel                  0
    rman_compress                  no
    system_info                    shdadm/orashd eccdev01 Linux 2.6.16.60-0.87.1-smp #1 SMP Wed May 11 11:48:12 UTC 2011 x86_64
    oracle_info                    SHD 10.2.0.4.0 8192 17654 1114483454 eccdev01 UTF8 UTF8
    sap_info                       700 SAPSR3 0002LK0003SHD0011Y01548735220015Maintenance_ORA
    make_info                      linuxx86_64 OCI_102 Jan 29 2010
    command_line                   brbackup -u / -jid FLLOF20110817100136 -c force -t offline_force -m full -p initSHD.sap
    BR0116I ARCHIVE LOG LIST before backup for database instance SHD
    Parameter                      Value
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            /oracle/SHD/oraarch/SHDarch
    Archive format                 %t_%s_%r.dbf
    Oldest online log sequence     17651
    Next log sequence to archive   17654
    Current log sequence           17654            SCN: 1114483454
    Database block size            8192             Thread: 1
    Current system change number   1114501246       ResetId: 664011854
    BR0118I Tablespaces and data files
    BR0202I Saving /oracle/SHD/sapdata3/sr3_15/sr3.data15
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data15 ...
    #FILE..... /oracle/SHD/sapdata3/sr3_15/sr3.data15
    #SAVED.... /mnt/backup/oracle/SHD/begomwsv/sr3.data15  #1/15
    BR0280I BRBACKUP time stamp: 2011-08-17 10.28.42
    BR0063I 15 of 48 files processed - 44100.117 of 121180.346 MB done
    BR0204I Percentage done: 36.39%, estimated end time: 11:15
    BR0001I ******************________________________________
    BR0202I Saving /oracle/SHD/sapdata4/sr3_16/sr3.data16
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data16 ...
    BR0278E Command output of 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog':
    Recovery Manager: Release 10.2.0.4.0 - Production on Wed Aug 17 10:28:42 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    RMAN>
    RMAN> connect target *
    connected to target database: SHD (DBID=1683093070, not open)
    using target database control file instead of recovery catalog
    RMAN> *end-of-file*
    RMAN>
    host command complete
    RMAN> 2> 3> 4> 5> 6>
    allocated channel: dsk
    channel dsk: sid=223 devtype=DISK
    executing command: SET NOCFAU
    Starting backup at 17-AUG-11
    channel dsk: starting datafile copy
    input datafile fno=00019 name=/oracle/SHD/sapdata4/sr3_16/sr3.data16
    released channel: dsk
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on dsk channel at 08/17/2011 10:30:30
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    RMAN>
    Recovery Manager complete.
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0279E Return code from 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog': 1
    BR0536E RMAN call for database instance SHD failed
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0506E Full database backup (level 0) using RMAN failed
    BR0222E Copying /oracle/SHD/sapdata4/sr3_16/sr3.data16 to/from /mnt/backup/oracle/SHD/begomwsv failed due to previous errors
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0307I Shutting down database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0308I Shutdown of database instance SHD successful
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0304I Starting and opening database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.47
    BR0305I Start and open of database instance SHD successful
    Do you have any idea how to solve this issue?
    Thanks in advance, Marc

    Hi,
    I am getting an error from the DBA Planning Calendar every time the job ...
    So when was your last successful backup of this datafile? Check whether it is still available.
    If that was some time ago and you are currently without any backup, take a backup without RMAN at once,
    so you at least have something to work with in case you get additional errors right now.
    Then you need to find out which object is affected. You are on the right track already: you need the statement
    that queries dba_extents to check which object the block belongs to.
    Has the DB been recovered recently, so that the block might belong to an index created with NOLOGGING?
    (This can be the case on BW systems.)
    If the last good backup of that file is still available, and the redo logs from that backup up to the current time are as well, you could try to recover the file. But I would do this only after taking a good non-RMAN backup, and without destroying the original file.
    If the last good backup was an RMAN backup, you can do a verify restore of that datafile in advance, to check that the corruption is not already inside the file to be restored.
    Check out the -w (verify) option of brrestore first, to understand how it works.
    (I am not sure whether this is already available in version 7.00; you may need to switch to 7.10 or 7.20.)
    brrestore -c -m /oracle/SHD/sapdata4/sr3_16/sr3.data16  -b xxxxxxxx.ffr -w only_rmv
    You should do a dbv check of that file as well, to see whether it gives more information, i.e. whether more blocks are
    affected. RMAN stops right after the first corruption, but usually there are several of them in a row, especially if they are
    zeroed ones. (This also works with the version 7.00 brtools.)
    brbackup -c -u / -t online -m /oracle/SHD/sapdata4/sr3_16/sr3.data16 -w only_dbv
    Good luck.
    Volker
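    A hedged sketch of the dba_extents lookup Volker mentions. On 10.2, RMAN can first be asked to validate the file and record the corrupt blocks it finds; the file number 19 is taken from the RMAN log above, and the substitution variable is a placeholder:
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATAFILE 19;
    SQL> SELECT file#, block#, blocks, corruption_type
         FROM   v$database_block_corruption;
    SQL> SELECT owner, segment_name, segment_type
         FROM   dba_extents
         WHERE  file_id = 19
         AND    &corrupt_block BETWEEN block_id AND block_id + blocks - 1;
    If the segment turns out to be an index, dropping and recreating it (with LOGGING) is usually the simplest way out; if it belongs to a table, a restore/recover of the file or block is needed.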

  • Oracle Application Server 10g thread stuck issue.

    We are running
    Oracle Application Server 10g [10.1.3.1 + 10.1.3.4 patch] along with Oracle HTTP Server 2.0.
    There is a stuck-thread issue [some application threads take a long time] due to which the application server becomes unresponsive, and sometimes it re-inits the OC4J instance.
    => Is there any setting to increase the stuck-thread time in OAS?
    => Is there any setting to stop the automatic re-init of the OC4J instance?
    => Does the automatic re-init of OC4J get logged in opmn.log as well as in service.log?
    Any advice on this would be really helpful.
    Thanks in advance...
    ManojC

    Look in the httpd.conf file for anything thread-related; there you will find a simple description of the directives.
    Greetings.

  • ORA-19566: exceeded limit of 999 corrupt blocks for file

    Hi All,
    I am new to Oracle RMAN and RAC administration. I am looking for your support to solve the issue below.
    We have two disk groups, +ETDATA and +ETFLASH, in our 3-node RAC environment, and RMAN is configured on node 2 to take the backups. We do not have an RMAN catalog; RMAN fetches its information from the control file.
    Recently the backup failed with the error ORA-19566: exceeded limit of 999 corrupt blocks for file +ETFLASH/datafile/users.6187.802328091.
    We found that datafiles are present in both disk groups, and from the control file info we learned that the datafiles in +ETDATA are currently in use while +ETFLASH holds old datafiles.
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name LABWRKT are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/CONTROLFILE/snapcf_LABWRKT.f';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/controlfile/snapcf_labwrkt.f';
    The above configuration shows that the SNAPSHOT CONTROLFILE points to +ETFLASH, so I changed the configuration so that it points to '+ETDATA/controlfile/snapcf_labwrkt.f'. At the end of the backup the snapshot file was created in +ETDATA, and I expected it to be a copy of the control file in use, whose datafiles are located in +ETDATA. But the backup still pointed to the old datafiles in +ETFLASH. Since we don't have an RMAN catalog, a resync is not possible either.
    When I ran the backup manually, it completed successfully without any error and pointed to the existing datafiles:
    RMAN> backup database plus archivelog all;
    I expect the issue will be resolved if RMAN points only to the datafiles present in +ETDATA. If that is correct, please let me know how I can make it happen. Also, please explain why the newly created snapshot file does not reflect the existing control file info.
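    A hedged way to confirm which datafiles the control file (and therefore RMAN) actually references, before worrying about the snapshot controlfile location. This is only a sketch; if the +ETFLASH files turn out to be stale image copies that are no longer needed, a crosscheck lets RMAN mark them expired so scheduled backups stop touching them:
    RMAN> REPORT SCHEMA;
    SQL> SELECT file#, name, status FROM v$datafile;
    RMAN> CROSSCHECK COPY;
    RMAN> DELETE EXPIRED COPY;
    Only delete copies after confirming they are not part of a backup strategy you still rely on.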


  • Thread stuck in socketConnect() while opening JDBC connection

    We are suffering very strange behaviour on one of our servers.
    Sometimes a thread gets stuck for several minutes while obtaining a JDBC connection; this is the stack trace down to our method:
    "[STUCK] ExecuteThread: '82' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE native
         java.net.PlainSocketImpl.socketConnect(Native Method)
         java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
         java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
         java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
         java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
         java.net.Socket.connect(Socket.java:529)
         oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:150)
         oracle.net.nt.ConnOption.connect(ConnOption.java:130)
         oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:353)
         oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:422)
         oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:686)
         oracle.net.ns.NSProtocol.connect(NSProtocol.java:246)
         oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1056)
         oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:308)
         oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:538)
         oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:228)
         oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
         oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
         oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(OracleDataSource.java:280)
         oracle.jdbc.xa.client.OracleXADataSource.getPooledConnection(OracleXADataSource.java:482)
         oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:156)
         oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:101)
         weblogic.jdbc.common.internal.XAConnectionEnvFactory.makeConnection(XAConnectionEnvFactory.java:477)
         weblogic.jdbc.common.internal.XAConnectionEnvFactory.createResource(XAConnectionEnvFactory.java:177)
         weblogic.common.resourcepool.ResourcePoolImpl.makeResources(ResourcePoolImpl.java:1249)
         weblogic.common.resourcepool.ResourcePoolImpl.makeResources(ResourcePoolImpl.java:1166)
         weblogic.common.resourcepool.ResourcePoolImpl.reserveResourceInternal(ResourcePoolImpl.java:450)
         weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:342)
         weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:419)
         weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:324)
         weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(ConnectionPoolManager.java:94)
         weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(ConnectionPoolManager.java:63)
         weblogic.jdbc.jta.DataSource.getXAConnectionFromPool(DataSource.java:1677)
         weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1445)
         weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:446)
         weblogic.jdbc.jta.DataSource.connect(DataSource.java:403)
         weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:364)
         org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:126)
         org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:94)
         org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162)
         org.eclipse.persistence.internal.databaseaccess.DatasourceAccessor.connectInternal(DatasourceAccessor.java:327)
         org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.connectInternal(DatabaseAccessor.java:291)
         org.eclipse.persistence.internal.databaseaccess.DatasourceAccessor.reconnect(DatasourceAccessor.java:558)
         org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.reconnect(DatabaseAccessor.java:1433)
         org.eclipse.persistence.internal.databaseaccess.DatasourceAccessor.incrementCallCount(DatasourceAccessor.java:302)
         org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:570)
         org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:526)
         org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:529)
         org.eclipse.persistence.internal.sessions.IsolatedClientSession.executeCall(IsolatedClientSession.java:133)
         org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:206)
         org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:192)
         org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectOneRow(DatasourceCallQueryMechanism.java:664)
         org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectOneRowFromTable(ExpressionQueryMechanism.java:2582)
         org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectOneRow(ExpressionQueryMechanism.java:2553)
         org.eclipse.persistence.queries.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:439)
         org.eclipse.persistence.queries.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:1076)
         org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:740)
         org.eclipse.persistence.queries.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:1036)
         org.eclipse.persistence.queries.ReadObjectQuery.execute(ReadObjectQuery.java:407)
         org.eclipse.persistence.queries.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:1122)
         org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2908)
         org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1291)
         org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1273)
         org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1233)
         org.eclipse.persistence.internal.jpa.EntityManagerImpl.executeQuery(EntityManagerImpl.java:778)
         org.eclipse.persistence.internal.jpa.EntityManagerImpl.findInternal(EntityManagerImpl.java:722)
         org.eclipse.persistence.internal.jpa.EntityManagerImpl.find(EntityManagerImpl.java:616)
         org.eclipse.persistence.internal.jpa.EntityManagerImpl.find(EntityManagerImpl.java:495)
         sun.reflect.GeneratedMethodAccessor182.invoke(Unknown Source)
         sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         java.lang.reflect.Method.invoke(Method.java:597)
         weblogic.deployment.BasePersistenceContextProxyImpl.invoke(BasePersistenceContextProxyImpl.java:106)
         weblogic.deployment.TransactionalEntityManagerProxyImpl.invoke(TransactionalEntityManagerProxyImpl.java:77)
         weblogic.deployment.BasePersistenceContextProxyImpl.invoke(BasePersistenceContextProxyImpl.java:87)
         weblogic.deployment.TransactionalEntityManagerProxyImpl.invoke(TransactionalEntityManagerProxyImpl.java:18)
         $Proxy71.find(Unknown Source)
         com.ericsson.adm.ejb.PromoProcessorMDB.processRequest(PromoProcessorMDB.java:181)
    I checked the OS tcp_syn_retries setting; it is 5, so an exception should be raised after about 180 seconds, shouldn't it?
    Even weirder, sometimes entityManager.find() gives us a null object even though the database record of the sought persisted object exists.
    It seems we have a network problem here, but WLS / the JVM is not throwing exceptions, so we cannot demonstrate the problem to the network administrators.
    Our WebLogic is version 10.3.4 running on Red Hat 5 with kernel 2.63.18-194.e15.
    The JDK reports itself as "Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)".
    We have an exact replica of the faulty server (hardware and configuration) running the same application in the same domain, connected to the same network switch and to the same Oracle instance, but on that server we have not suffered this problem.
    Edited by: user13413948 on 15-feb-2012 15:37
    Edited by: user13413948 on 15-feb-2012 15:38

    Io exception: The Network Adapter could not establish the connection..
    I'd check in tools->embedded oc4j server preferences (current workspace app / data sources) and do a refresh now.
    Next, make sure your app module config files reference the right connection by right clicking the app module and selecting configuration. lastly, make sure the project points to the right database connection by right clicking project -> properties, business components.
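    For the stuck socketConnect() described above, a commonly suggested mitigation (a sketch, not a definitive fix) is to bound the TCP connect and socket read times with Oracle thin-driver connection properties on the WebLogic data source, so that a dead network path surfaces as an exception instead of a silently stuck thread. The values below are illustrative and in milliseconds:
    oracle.net.CONNECT_TIMEOUT=10000
    oracle.jdbc.ReadTimeout=60000
    These go in the connection properties of the data source configuration; they make the driver fail fast, which at least gives the network administrators a concrete exception and timestamp to correlate with their own logs.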

  • Recovery in progress may need access to file

    hi,
    During the configuration of a hot standby DB using the Data Guard configuration wizard I get this error:
    Renaming datafiles for standby database...
    Error running creation process on node aquarius: Error renaming files: ORA-01156: recovery in progress may need access to files
    The first attempt to create the standby DB failed, and now I get this...
    j.

    I copied the missing archive log over to the standby DB but still got the same error.
    1) SQL> recover standby database;
    ORA-00279: change 2342934 generated at 8/27/2009 21:10:35 needed for thread 1
    ORA-00289: suggestion : /opt/oracle/arch/SID/1_833_682861383.arc
    ORA-00280: change 2342934 for thread 1 is in sequence #833
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    AUTO
    ORA-00317: file type 0 in header is not log file
    ORA-00334: archived log: '/opt/oracle/arch/SID/1_833_682861383.arc'
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01195: online backup of file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/opt/oracle/oradata/SID/system01.dbf'
    I also tried..
    2) SQL> recover standby database using backup controlfile until cancel;
    ORA-00279: change 2342934 generated at 8/27/2009 21:10:35 needed for thread 1
    ORA-00289: suggestion : /opt/oracle/arch/SID/1_833_682861383.arc
    ORA-00280: change 2342934 for thread 1 is in sequence #833
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    AUTO
    ORA-00317: file type 0 in header is not log file
    ORA-00334: archived log: '/opt/oracle/arch/SID/1_833_682861383.arc'
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01195: online backup of file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/opt/oracle/oradata/SID/system01.dbf'
    Edited by: user10427867 on Aug 28, 2009 5:56 AM

  • TS4272 erase free space stuck on creating temporary file

    I upgraded to 10.7.4 and followed the instructions in http://support.apple.com/kb/TS4272
    At step 9, Disk Utility has been stuck on 'creating temporary file' for the last 24 hours. I have cancelled and retried more than once.

    This may be a problem with Disk Utility on your boot drive. Try rebooting to the Recovery HD and using Disk Utility on that partition (it should appear in the OS X Tools menu). You might also try running a disk verification and repair to ensure the filesystem and partition structure are intact.
    Alternatively you can also bypass the disk utility application by using the Terminal to erase free space, which can be done with the following command:
    sudo diskutil secureErase freespace 0 disk0s2
    Note that while the device "disk0s2" is commonly the boot drive for systems with a single hard drive, this may be different for your system. Check the device by opening Disk Utility and getting information on your boot volume; you should see the proper device name listed as "device identifier".

  • ORA-27044: unable to write the header block of file / IBM AIX RISC System/6

    We are getting an error like this...
    Oracle 11.2.0.2
    AIX 5.3
    Error:
    CREATE CONTROLFILE REUSE DATABASE "CLMST" RESETLOGS NOARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-00200: control file could not be created
    ORA-00202: control file: '/clmst02/UAT/oradata/control/control01.ctl'
    ORA-27044: unable to write the header block of file
    IBM AIX RISC System/6000 Error: 89: Invalid file system control data detected
    Additional information: 3
    Additional information: -1
    Additional information: 1

    ORA-09817 IBM AIX RISC System/6000 Error: 89
    Oracle 11.2.0.2
    AIX 5.3
    Related Errors:-
    Mon Nov 05 18:33:51 2012
    ARC3 started with pid=30, OS id=954622
    ARC1: Becoming the 'no FAL' ARCH
    ARC1: Becoming the 'no SRL' ARCH
    ARC2: Becoming the heartbeat ARCH
    Errors in file /clmst01/UAT/oracode/app/oracle/diag/rdbms/clmst/clmst/trace/clmst_ora_2514960.trc (incident=17053):
    ORA-00600: internal error code, arguments: [kccugg_end], [], [], [], [], [], [], [], [], [], [], []
    Errors in file /clmst01/UAT/oracode/app/oracle/diag/rdbms/clmst/clmst/trace/clmst_m000_2105586.trc:
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: '/clmst02/UAT/oradata/redo/redo03A.log'
    ORA-27037: unable to obtain file status
    IBM AIX RISC System/6000 Error: 2: No such file or directory
    CREATE CONTROLFILE REUSE DATABASE "CLMST" RESETLOGS NOARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-00200: control file could not be created
    ORA-00202: control file: '/clmst02/UAT/oradata/control/control01.ctl'
    ORA-27044: unable to write the header block of file
    IBM AIX RISC System/6000 Error: 89: Invalid file system control data detected
    Additional information: 3
    Additional information: -1
    Additional information: 1
    Cause:-
    Your /clmst02/ mount point might be corrupted.
    Check:-
    fsck /clmst02
    The current volume is: /dev/fslv00
    File system is currently mounted.
    Primary superblock is valid.
    fsck: Performing read-only processing does not produce dependable results.
    *** Phase 1 - Initial inode scan
    *** Phase 2 - Process remaining directories
    *** Phase 3 - Process remaining files
    *** Phase 4 - Check inode allocation map
    File system inode map is corrupt (NOT FIXED)
    fsck: 0507-278 Cannot continue.
    File system is currently mounted.
    fsck: Performing read-only processing does not produce dependable results.
    Fix:-
    Either move the affected files to another mount point, or shut down the database, run fsck on the /clmst02 mount point to repair the corruption, and then start the database.
    Fix:-
    Recreate the affected redo log files:
    alter database add logfile group 3 (
    '/clmst02/UAT/oradata/redo/redo03A.log',
    '/clmst03/UAT/oradata2/redo/redo03B.log',
    '/clmst04/UAT/oradata3/redo/redo03C.log') size 50m reuse;
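    One hedged caveat on the redo log fix above: if log group 3 still exists in the control file, the ADD LOGFILE GROUP 3 statement will fail because that group number is already in use. In that case the group can usually be cleared, or dropped and re-added, first; the statements below are only a sketch using the paths from this thread:
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
    -- or, if the group is inactive and can simply be dropped and recreated:
    SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
    SQL> ALTER DATABASE ADD LOGFILE GROUP 3 (
           '/clmst02/UAT/oradata/redo/redo03A.log',
           '/clmst03/UAT/oradata2/redo/redo03B.log',
           '/clmst04/UAT/oradata3/redo/redo03C.log') SIZE 50M REUSE;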

  • Replica stuck while closing db file which has not been completely sync

    Hi,
    My system has one master and one client running on two different nodes.
    The replication server is up and running. Then I start the replica, but somehow the replica does not
    receive the DB_EVENT_REP_STARTUPDONE event within 120 seconds, so the database is closed;
    however, while closing the database it gets stuck inside a loop that logs the following message every minute,
    forever: "the DB_ENV handle waiting %d minutes for replication lockout to complete".
    (1) In what failure scenario does a thread get stuck inside this loop?
    (2) How can we get out of this loop?
    Thanks.

    Hi Sandra,
    Sorry to hear you're having this problem.
    When a replica starts up, it communicates with the master and tries to
    decide if it can do a simple synchronization (if it already has enough
    log history in common with the master), or must instead do a complete
    re-initialization of the database environment ("internal init").
    In the latter case we (understandably) can't satisfy DB->get()
    operations until the internal init is complete. Rather than rejecting
    those operations, we block ("locking out" the API). The idea behind
    that design choice is that it relieves the application of having to
    deal with this peculiar error situation. Unfortunately the API
    lockout is crude, i.e., it affects almost all API methods, including
    DB_ENV->close().
    There is currently no supported, non-hacky way to break out of this
    situation prematurely; one can only wait for all the necessary
    messages to arrive from the master, and eventually complete internal
    init.
    We should probably investigate why the internal init seems to be
    getting stuck. (Presumably there is not so much data that it's simply
    taking more than 120 seconds to load, or you wouldn't be asking.) If
    you turn on verbose diagnostic output at the master, and repeat this
    experiment, by any chance do you see any messages saying "queue limit
    exceeded"?
    Alan Bram
    Oracle

  • PXE boot stuck at "downloading config file cmds\z_maint.cmd

    Hi.
    Since I applied NW6.5sp8 to our ZDM7SP1_HP2 server (January Driver Updates were in
    place before), the PXE boot gets stuck at
    downloading config file cmds\z_maint.cmd
    I had this very same issue before; back then I had missed dropping in the correct version of
    sys:\tftp\boot\settings.txt, and after doing so, PXE boot worked fine again.
    I double-checked that the right copies of the files initrd, linux and root are present
    in sys:\tftp and ./boot.
    I *CAN* successfully download the z_maint.cmd file through
    tftp -i 10.27.1.8 get cmds/z_maint.cmd
    There is a very similar thread:
    news://forums.novell.com/dlee.3shxji...ums.novell.com
    The server in that thread runs on W2k3, not NetWare as we use. So I suspect that
    it might be a simple "wrong files" issue, even though I have really checked initrd, etc.
    several times.
    What's my mistake?
    Regards, Rudi.

    Hi.
    I just want to add this information:
    *ALL* the different client PCs we run get stuck at the very same point of the PXE
    boot process:
    downloading config file cmds\z_maint.cmd
    The cursor blinks on the next line and does not accept any keystrokes.
    Regards, Rudi.

  • My tv show and movies are getting stuck in the processing file stage

    I purchased some TV show episodes and movies on iTunes. They are stuck in the processing file stage and won't let me pause and restart them. When I close iTunes it won't open again; I have to restart my computer. Please help, they have been stuck for over a week now.

    please contact me on [email protected]
