Problem in Database Recovery
I am working in a test environment. My database runs in ARCHIVELOG mode, and I have a current backup and an old backup (one month old).
To practice backup and recovery, I deleted the control files, online redo logs, and data files, and restored from the one-month-old backup.
Note: I still have all the archived redo log files, and I have also created one tablespace (abamco_test) that is not present in the old backup.
Is it possible to recover that tablespace with only the archived redo log files (no datafile backup)?
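For what it is worth, the answer is generally yes: Oracle can recreate a datafile that was never backed up, provided the control file records the file and every archived log generated since the tablespace was created is available. With the one-month-old control file restored here that is not the case (it predates the tablespace), so this needs a current control file, or one recreated with the file listed. A minimal sketch, using this thread's paths:

```sql
STARTUP MOUNT;

-- Recreate the lost datafile as an empty file; redo repopulates it
ALTER DATABASE CREATE DATAFILE
  'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF';

-- Apply every archived log generated since the tablespace was created
RECOVER DATAFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF';

ALTER DATABASE OPEN;
```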
=========================================================================
SQL> select name from v$database;
NAME
ROCK
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
<><> Deleted all the datafiles, redo logs, and control files.
<><> Copied all the redo logs and control files from the one-month-old backup.
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
ORA-00205: error in identifying control file, check alert log for more info
SQL> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
<><> Copied the one-month-old control file into the OraData folder.
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
<><> Copied all the datafiles and online redo log files from the one-month-old backup.
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
Database mounted.
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 6: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
Database mounted.
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
ORA-01207: file is more recent than control file - old control file
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
ORA-01207: file is more recent than control file - old control file
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
SQL> alter database backup controlfile to trace;
alter database backup controlfile to trace
ERROR at line 1:
ORA-01507: database not mounted
SQL> alter database mount;
Database altered.
SQL> alter database backup controlfile to trace;
Database altered.
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
<><> Copied the CREATE CONTROLFILE statement from the generated trace file and saved it as controlfile_recover.sql:
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ROCK" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\REDO01.LOG' SIZE 50M,
GROUP 2 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\REDO02.LOG' SIZE 50M,
GROUP 3 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\UNDOTBS01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSAUX01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\USERS01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
CHARACTER SET WE8MSWIN1252
;
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01565: error in identifying file 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> alter tablespace ABAMCO_TEST offline;
alter tablespace ABAMCO_TEST offline
ERROR at line 1:
ORA-01109: database not open
SQL>
SQL>
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
<><> I removed the line 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF' from controlfile_recover.sql.
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 12800 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> SHUTDOWN IMMEDIATE;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 16384 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 32768 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 65536 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL>
Any suggestions? What should I do now?
Here is what I found on Metalink:
Subject: ORA-1163 creating a controlfile
Doc ID: Note:377933.1 Type: PROBLEM
Last Revision Date: 24-JUL-2006 Status: REVIEWED
Problem Description:
====================
You are attempting to recreate your control file after a clean shutdown (SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE).
Upon running the CREATE CONTROLFILE script, you receive:
CREATE CONTROLFILE REUSE DATABASE "PRODAB" NORESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 12800 (blocks), but should match header 240160
ORA-01110: data file X: '<full path of datafile>'
Problem Explanation:
====================
Sample controlfile.sql
CREATE CONTROLFILE REUSE DATABASE "PRODAB" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 454
LOGFILE
GROUP 1 '/oradata/PROD/redo01.log' SIZE 10M,
GROUP 2 '/oradata/PROD/redo02.log' SIZE 10M,
GROUP 3 '/oradata/PROD/redo03.log' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'/oradata/PROD/system01.dbf',
'/oradata/PROD/undotbs01.dbf',
'/oradata/PROD/sysaux01.dbf',
'/oradata/PROD/users01.dbf', <---------------- Notice the extra comma after the last datafile.
CHARACTER SET WE8ISO8859P1
Search Words:
=============
create controlfile ORA-1503 ORA-1163 ORA-1110
Solution Description:
=====================
This extra comma causes CREATE CONTROLFILE to raise the unexpected error shown above.
Solution Explanation:
=====================
This is a syntax error; removing the trailing comma allows CREATE CONTROLFILE to get past this error.
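With the trailing comma removed, the end of the sample script looks like this:

```sql
DATAFILE
'/oradata/PROD/system01.dbf',
'/oradata/PROD/undotbs01.dbf',
'/oradata/PROD/sysaux01.dbf',
'/oradata/PROD/users01.dbf'   -- no comma after the last datafile
CHARACTER SET WE8ISO8859P1;
```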
Wow! Oracle made a syntax error! Why am I not surprised? :)
Thanks, Khurram, for your help!
Similar Messages
-
Problem in database recovery thru RMAN
I am completely new to RMAN.
I took a full backup through RMAN and, as suggested by a document, deleted the control file, datafiles, and SPFILE in order to learn recovery through RMAN.
Now, when I try to connect to the RMAN catalog, I get the following error:
RMAN> connect catalog rman/rman@test9i
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-01033: ORACLE initialization or shutdown in progress
Can anybody help me in connecting and recovering the database?
I think there may have been a user connected while performing a shutdown. Disconnect the user and stop the listener, then perform the recovery.
Correct me if I am wrong. -
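Note that the ORA-01033 in that thread comes from the catalog database itself, not the target. If only the target's control file, SPFILE, and datafiles were deleted, a no-catalog restore along these lines may apply (a sketch, assuming CONTROLFILE AUTOBACKUP was enabled before the backup):

```sql
-- In RMAN, connected to the target only: rman target /
STARTUP NOMOUNT;
RESTORE SPFILE FROM AUTOBACKUP;       -- needs CONTROLFILE AUTOBACKUP ON
RESTORE CONTROLFILE FROM AUTOBACKUP;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
```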
Problem in performing multiple Point-In-Time Database Recovery using RMAN
Hello Experts,
I am getting an error while performing database point-in-time recovery multiple times using RMAN. Details are as follows:
Environment:
Oracle 11g, ASM,
Database disk groups: +DG_DATA (data files), +DG_ARCH (archive logs), +DG_REDO (redo logs, control file).
Snapshot disk groups:
Snapshot 1 (taken at 9 am): +SNAP1_DATA, +SNAP1_ARCH, +SNAP1_REDO
Snapshot 2 (taken at 10 am): +SNAP2_DATA, +SNAP2_ARCH, +SNAP2_REDO
Steps performed for point in time recovery:
1. Restore control file from snapshot 2.
RMAN> RESTORE CONTROLFILE from '+SNAP2_REDO/orcl/CONTROLFILE/Current.256.777398261';
2. For 2nd recovery, reset incarnation of database to snapshot 2 incarnation (Say 2).
3. Catalog data files from snapshot 1.
4. Catalog archive logs from snapshot 2.
5. Perform point in time recovery till given time.
STARTUP MOUNT;
RUN {
SQL "ALTER SESSION SET NLS_DATE_FORMAT = ''dd-mon-yyyy hh24:mi:ss''";
SET UNTIL TIME "06-mar-2013 09:30:00";
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}
Results:
Recovery 1: At 10:30 am, I performed the first point-in-time recovery, until 9:30 am; it was successful. The database incarnation was raised from 2 to 3.
Recovery 2: At 11:10 am, I performed another point-in-time recovery, until 9:45 am; while doing it I reset the incarnation of the DB to 2, and it failed with the following error:
Starting recover at 28-FEB-13
using channel ORA_DISK_1
starting media recovery
media recovery failed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/06/2013 11:10:57
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed
start until time 'MAR 06 2013 09:45:00'
ORA-00283: recovery session canceled due to errors
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '+DG_REDO/orcl/onlinelog/group_1.257.807150859'
ORA-17503: ksfdopn:2 Failed to open file +DG_REDO/orcl/onlinelog/group_1.257.807150859
ORA-15012: ASM file '+DG_REDO/orcl/onlinelog/group_1.257.807150859' does not exist
Doubts:
1. Why did recovery fail the 2nd time but not the 1st, and why is RMAN looking for online redo log group_1.257.807150859 in the 2nd recovery?
3. I tried restoring the control file from autobackup; in that case both the 1st and 2nd recoveries succeeded.
However, for this to work I always need to keep the autobackup feature enabled.
How reliable is control file autobackup? Is there any alternative to using autobackup; can I restore the control file from the snapshot backup only?
4. If I restore the control file from autobackup, then from what point in time/SCN does RMAN restore the control file?
Please help me out in this issue.
Thanks.
992748 wrote:
Hello experts,
I'm a bit of a newbie to RMAN recovery. Please help me with these doubts:
1. If I have backups of the datafiles, archive logs, and control files, but the current online redo logs are lost, can I perform incomplete database recovery?
Yes, if you have backups of everything else.
2. Up to what maximum time/SCN can incomplete database recovery be performed?
Assuming the only thing lost is the redo logs, you can recover to the last SCN in the last archived log.
3. What is the role of online redo logs in incomplete database recovery?
They provide the final redo changes, the ones that have not yet been written to archived logs.
Are they required for incomplete recovery?
It depends on how much incomplete recovery you need to do.
Think of all of your changes as a constant stream of redo information. As a redo log fills, it is copied to archive, then (eventually) reused. Over time, your redo stream is in archivelog_1, continuing into archivelog_2, then into 3, and eventually, when you get to the last archived log, into the online redo. A recovery will start at the oldest necessary point in the redo stream and continue forward. Whether or not you need the online redo for a PIT recovery depends on how far forward you need to recover.
But you should take every precaution to prevent the loss of online redo logs, starting with having multiple members in each redo group and keeping those members on physically separate disks. -
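The "recover up to the last archived log" case described in that answer is typically done as a cancel-based incomplete recovery; a minimal SQL*Plus sketch, assuming the restored backup and all archived logs are in place:

```sql
STARTUP MOUNT;

-- Apply archived logs one at a time; type CANCEL once the last
-- available archived log has been applied
RECOVER DATABASE UNTIL CANCEL;

-- Incomplete recovery must end with a RESETLOGS open, which also
-- recreates the lost online redo logs
ALTER DATABASE OPEN RESETLOGS;
```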
Problem in Backup/Recovery 10g
Hi
I am trying to take a backup in Oracle 10g:
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> BACKUP AS BACKUPSET DATABASE SPFILE;
Starting backup at 14-MAR-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/14/2007 10:20:49
ORA-19602: cannot backup or copy active file in NOARCHIVELOG mode
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 14-MAR-07
channel ORA_DISK_1: finished piece 1 at 14-MAR-07
piece handle=C:\ORACLE\PRODUCT\10.1.0\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2007_03_14\O1_MF_NCSNF_TAG20070314T102048_2ZGY13BK_.BKP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 14-MAR-07
channel ORA_DISK_1: finished piece 1 at 14-MAR-07
piece handle=C:\ORACLE\PRODUCT\10.1.0\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2007_03_14\O1_MF_NNSNF_TAG20070314T102048_2ZGY15PK_.BKP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/14/2007 10:20:49
I can identify the problem: your database is running in NOARCHIVELOG mode, so you cannot make an online backup.
ORA-19602: cannot backup or copy active file in NOARCHIVELOG mode -
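Two common ways past that ORA-19602, sketched here under the assumption that a short outage is acceptable: switch the database to ARCHIVELOG mode so online backups work, or take the backup while the database is cleanly mounted:

```sql
-- Option 1: enable ARCHIVELOG mode; after this, online RMAN backups work
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Option 2: stay in NOARCHIVELOG and take a consistent backup instead
-- (in RMAN: SHUTDOWN IMMEDIATE; STARTUP MOUNT; BACKUP DATABASE;)
```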
Failure during database recovery on Homogeneous System Copy
Dear all,
I am trying to do a system copy, and it fails after the execution step "database recovery".
MaxDB: 7.6.5.15
SAP NetWeaver 7 EHP 1
Apparently this has something to do with LOAD_SYSTAB.
I can run load_systab [-u <sysdba_user>,<sysdba_user_password>] manually, but the SAPinst log file shows the following:
WARNING[E] 2009-09-28 17:17:57.328
CJSlibModule::writeError_impl()
The dbmcli call for action LOAD_SYSTAB failed. SOLUTION: Check the logfile XCMDOUT.LOG.
TRACE 2009-09-28 17:17:57.546 [iaxxejsbas.hpp:408]
handleException<ESAPinstJSError>()
Converting exception into JS Exception EJSException.
TRACE 2009-09-28 17:17:57.562
Function setMessageIdOfExceptionMessage: dbmodada.actorext.dbmcliCallFailed
WARNING[E] 2009-09-28 17:17:57.562
CJSlibModule::writeError_impl()
The dbmcli call for action LOAD_SYSTAB failed. SOLUTION: Check the logfile XCMDOUT.LOG.
TRACE 2009-09-28 17:17:57.562 [iaxxejsbas.hpp:483]
EJS_Base::dispatchFunctionCall()
JS Callback has thrown unknown exception. Rethrowing.
ERROR 2009-09-28 17:17:57.781 [sixxcstepexecute.cpp:950]
FCO-00011 The step sdb_instance_load_systables with step key |NW_ABAP_OneHost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|1|0|NW_CreateDBandLoad|ind|ind|ind|ind|10|0|NW_CreateDB|ind|ind|ind|ind|0|0|NW_ADA_DB|ind|ind|ind|ind|6|0|SdbPreInstanceDialogs|ind|ind|ind|ind|4|0|SdbInstanceDialogs|ind|ind|ind|ind|1|0|SDB_INSTANCE_CREATE|ind|ind|ind|ind|0|0|sdb_instance_load_systables was executed with status ERROR .
TRACE 2009-09-28 17:17:58.93 [iaxxgenimp.cpp:752]
CGuiEngineImp::showMessageBox
<html> <head> </head> <body> <p> An error occurred while processing option SAP NetWeaver 7.0 including Enhancement Package 1 Support Release 1 > Software Life-Cycle Options > System Copy > MaxDB > Target System Installation > Central System > Based on AS ABAP > Central System. You can now: </p> <ul> <li> Choose <i>Retry</i> to repeat the current step. </li> <li> Choose <i>View Log</i> to get more information about the error. </li> <li> Stop the option and continue with it later. </li> </ul> <p> Log files are written to C:\Program Files/sapinst_instdir/NW701/LM/COPY/ADA/SYSTEM/CENTRAL/AS-ABAP/. </p> </body></html>
TRACE 2009-09-28 17:17:58.109 [iaxxgenimp.cpp:1255]
CGuiEngineImp::acceptAnswerForBlockingRequest
Waiting for an answer from GUI
XCMDOUT.LOG shows only the SAP users data from the source system, and not for the target system which is having the error.
Could somebody please advise me what to do?
Thank you,
Mariana
Dear Christian,
yes, I solved this LOAD_SYSTAB problem.
This is what I did:
1. check XCMDOUT.LOG
2. However in my case, I did not see any clue there, so I read this link about LOAD_SYSTAB http://maxdb.sap.com/doc/7_7/45/11cbd6459d7201e10000000a155369/content.htm
I tried it manually, and it worked: dbmcli -d <DB_ID> -u DBMUser,password1 load_systab -u superdba,password2
From there, I knew that I had entered the wrong SYSADM user (superdba) password; in my case, the correct one was the SAPinst master password.
According to https://websmp130.sap-ag.de/sap(bD1kZSZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=25591
for a new installation of the MaxDB database, the default credential for SYSADM is "superdba,admin".
So, accordingly, the solution is:
Change the SYSADM password for the <DB_ID> in DBMGUI (D7D: Configuration, Database User area) to exactly match the SAPinst master password.
Hope this helps.
Regards,
Mariana -
Does anyone have a set of database recovery scripts for various scenarios for 8i and 9i databases running on Windows 2000 and 2003?
Cheers,
Derek.
>>
We do a cold backup each night and have archive on. The scenarios are any that may occur i.e., media failure, dropped tables, lost control files etc.
>>
Hey Derek, the problem with cold backups is that in case media recovery is required, you can't simply restore only the datafile(s) that have problems; you need to restore the complete database.
It is difficult to do incomplete or point-in-time recovery from cold backups alone.
I strongly recommend you start thinking about online backups. You need to assess your business requirements: how much data can you afford to lose?
Jaffar -
Interactive report performance problem over database link - Oracle Gateway
Hello all;
This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
The issue I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by about 45 seconds.
The query looks like this (due to sensitivity, I cannot disclose the real table names):
SELECT apex_item.checkbox(1,b.col3)
, a.col1
, a.col2
FROM table_one a
, table_two b
WHERE a.col3 = 12345
AND a.col4 = 100
AND b.col5 = a.col5
table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
Now, if I run the above query without the apex_item.checkbox function, the response is less than a second; with apex_item.checkbox, the query runs for more than 30 seconds. I have worked around the issue by creating a collection, but that is not good practice.
I would like to get ideas from people how to resolve or speed-up the query?
Any idea how to use sub-factoring for the above scenario? Or others method (creating view or materialized view are not an option).
Thank you.
Shaun S.Hi Shaun
Okay, I have a million questions (could you tell me if both tables are from the same remote source? It looks like they're possibly not), but let's just try some things first.
By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results then do something with them afterwards. It's a handy trick when dealing with remote sites as sometimes you want the remote database to do the work. The reason that I ask you to use the MATERIALIZE hint for testing is just to force this, in 99.99% of cases this can be removed later. Using the WITH statement is also handled differently to inline view like SELECT * FROM (SELECT... but the same result can be mimicked with a NO_MERGE hint.
Looking at your case, I would be interested to see the explain plans and results for something like the following two statements (sorry, you're going to have to check them, it's late!).
WITH a AS
(SELECT /*+ MATERIALIZE */ *
FROM table_one),
b AS
(SELECT /*+ MATERIALIZE */ *
FROM table_two),
sourceqry AS
(SELECT b.col3 x
, a.col1 y
, a.col2 z
FROM a
, b
WHERE a.col3 = 12345
AND a.col4 = 100
AND b.col5 = a.col5)
SELECT apex_item.checkbox(1,x), y, z
FROM sourceqry
WITH a AS
(SELECT /*+ MATERIALIZE */ *
FROM table_one),
b AS
(SELECT /*+ MATERIALIZE */ *
FROM table_two)
SELECT apex_item.checkbox(1,b.col3), a.col1, a.col2
FROM a
, b
WHERE a.col3 = 12345
AND a.col4 = 100
AND b.col5 = a.col5
If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results, but different to the original query.
We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries against remote and especially non-Oracle sites). This normally hinders tuning, but I don't think it is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus, etc.?
Sorry for all the questions but it helps to answer the question, if I can.
Cheers
Ben
http://www.munkyben.wordpress.com
Don't forget to mark replies helpful or correct ;) -
Check "Problem: Oracle Database 10g Release 2 can only be installed in new
Hi All
I am installing the 10g software on an AIX box in /oracle/oraHome2, where Oracle9i is already installed in /oracle/oraHome1 and the Oracle inventory is in /oracle/inventory.
On the product-specific prerequisite screen, the check "Oracle Home incompatibilities" fails with the following error:
Check complete. The overall result of this check is: Passed
=======================================================================
Checking for Oracle Home incompatibilities ....
Actual Result: Oracle9i Database 9.2.0.1.0
Check complete. The overall result of this check is: Failed <<<<
Problem: Oracle Database 10g Release 2 can only be installed in a new Oracle Home
Recommendation: Choose a new Oracle Home for installing this product
Though I am installing it in a different Oracle Home, why am I getting this error?
Is it because the 10g installation is picking up the same Oracle inventory, /oracle/inventory?
Do I need to make a different Oracle inventory for different Oracle Homes? Please confirm whether that is the reason I am getting this error.
Thanks in advance,
Gagan
I figure you are trying to install 10gR2 on top of an existing 9iR2 Oracle Home. This is corrected at the path definition window. Most probably you just clicked <Next> and by default the 9iR2 Oracle Home was selected. You must define a new Oracle Home for the 10gR2 install.
~ Madrid -
Questions About Database Recovery (-30975)
Hello,
In Berkeley DB 4.5.20, we are seeing the following error sporadically, but more frequently than we'd like (which is to say, not at all): "BerkeleyDbErrno=-30975 - DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery"
This exception is being thrown mostly, if not exclusively, during the environment open call. Still investigating.
I will post my environment below, but first some questions.
1. How often should a database become corrupt?
2. What are the causes of this corruption? Can they be caused by "chance?" (I.e. app is properly coded.) Can they be caused by improper coding? If so, is there a list of common things to check?
3. Does Oracle expect application developers to create their own recovery handlers, especially for apps that require 100% uptime? E.g. using DB_ENV->set_event_notify or filtering on DB_RUNRECOVERY.
Our environment:
Windows Server 2003 SP2
Berkeley DB 4.5.20
set_verbose(DB_VERB_WAITSFOR, 1);
set_cachesize(0, 65536 * 1024, 1);
set_lg_max(10000000);
set_lk_detect(DB_LOCK_YOUNGEST);
set_timeout(60000000, DB_SET_LOCK_TIMEOUT);
set_timeout(60000000, DB_SET_TXN_TIMEOUT);
set_tx_max(100000);
set_flags(DB_TXN_NOSYNC, 1);
set_flags(DB_LOG_AUTOREMOVE, 1);
set_lk_max_lockers(10000);
set_lk_max_locks(10000);
set_lk_max_objects(10000);
open(sPath, DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_THREAD | DB_INIT_TXN | DB_RECOVER, 0);
set_pagesize (4096);
u_int32_t dbOpenFlags = DB_CREATE | DB_AUTO_COMMIT;
pDbPrimary->open(NULL, strFile, NULL, DB_HASH, dbOpenFlags, 0);
We also have a number of secondary databases.
One additional piece of information that might be relevant is that the databases where this happens (we have 8 in total managed by our process) seem to be the two specific databases that at times aren't opened until well after the process is up and running, due to the nature of their data. This is to say that 6 of the other databases are normally opened during startup of our service. We are still investigating to see if this is consistently true.
Here is the output from the error logs (we didn't have this properly set up until now) when this error opening the environment happens:
12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley MapViewOfFile: Not enough storage is available to process this command.
12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: Not enough space
12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003: Access is denied.
12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley MapViewOfFile: Not enough storage is available to process this command.
12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: Not enough space
12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: DB_RUNRECOVERY: Fatal error, run database recovery
12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley unable to join the environment
12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003.del.0547204268: Access is denied.
12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003: Access is denied.
12/17/2007 17:19:18 (e64/518) 1024: Database EInitialize failed. (C:\xxxxxxxx\Database\xxxJOB_OAT: BerkeleyDbErrno=-30975 - DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery)
The last line is generated by a DbException and was all we were seeing up until now.
I also set_verbose(DB_VERB_RECOVERY, 1) and set_msgcall to the same log file. We get verbose messages on the 1st 7 database files that open successfully, but none from the last one, I assume because they output to set_errcall instead.
There is 67GB of free space on this disk by the way, so not sure what "Not enough space" means.
Thanks again for your help. -
Problem with database schema objects in the entity object wizard
Hi All,
When creating a new entity object, I am facing a problem with database schema objects in the entity object wizard: the database schema object check boxes (for tables, synonyms, ...) are disabled. I am actually using a synonym, but I am not able to select the synonym check box.
Can any of you tell me how to enable the database schema object check boxes (for tables, synonyms, ...)?
Thanks in Advance.
Raja.M
Make sure you are using the right version of JDeveloper.
Make sure you are using the APPS schema, and check whether you are able to perform DML operations in the schema via SQL Developer.
--Prasanna -
Dear all,
I am facing a problem in my database:
whenever an insert statement is fired, the system hangs, and the statement only completes when some inactive sessions are killed on the server. I have to do that every time an insert statement is fired.
How do I solve this problem?
I think you have a serious lock problem in your database. Can you try looking at some dictionary views related to locks, before and during this particular insert?
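To make the suggestion above concrete, here is a minimal diagnostic sketch (assuming Oracle 10g or later, where v$session exposes blocking_session; the KILL SESSION line uses placeholder values, not real identifiers):

```sql
-- Sessions currently blocked by another session, and who is blocking them.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- What the blocker holds (lmode) and what the waiters request (request).
SELECT sid, type, id1, id2, lmode, request, block
FROM   v$lock
WHERE  block = 1 OR request > 0;

-- If the blocker turns out to be a dead or idle session, it can be killed
-- (replace sid and serial# with real values from the first query):
-- ALTER SYSTEM KILL SESSION 'sid,serial#';
```

Running the first two queries while the insert is hanging should show which session, and which lock type, is holding things up.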
-
Problem connecting DataBase Link from windows oracle to oracle on Linux
I'm facing a problem with a database link from Oracle on Windows to Oracle hosted on a Linux server.
I can successfully create the database link using the following statement on the Oracle database hosted on the Windows server:
CREATE DATABASE LINK SampleDB
CONNECT TO myuser IDENTIFIED BY password
USING 'sample';
The tnsnames.ora entry on Windows for the database on the Linux server is as follows:
DSOFT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.100)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = sample)
    )
  )
But while executing the query "select count(*) from doctor@SampleDB;" in sql developer on windows, I'm getting the following error
SQL Error: ORA-12154: TNS:could not resolve the connect identifier specified
12154. 00000 - "TNS:could not resolve the connect identifier specified"
*Cause: A connection to a database or other service was requested using
a connect identifier, and the connect identifier specified could not
be resolved into a connect descriptor using one of the naming methods
configured. For example, if the type of connect identifier used was a
net service name then the net service name could not be found in a
naming method repository, or the repository could not be
located or reached.
Using the above tns entries, I can successfully connect to the database on the Linux server through SQL Developer installed on the Windows machine. So why am I getting this error while executing the query over the database link? Can anyone help me?
A database link acts as a client to the target (remote) database in exactly the same fashion, and using exactly the same TNS infrastructure, as any other client trying to connect to that remote database. Your ORA-12154 when querying a db link means exactly the same thing as if you had gotten it while trying to connect with SQL*Plus from the same server. Check the link SB provided. Keep in mind that the tnsnames.ora file of concern is the one on the source database server. -
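One way to take the server-side tnsnames.ora out of the picture entirely is to embed the full connect descriptor in the link itself. A sketch, reusing the host, port, and service name from the post above (those values are the poster's, not verified):

```sql
DROP DATABASE LINK SampleDB;

CREATE DATABASE LINK SampleDB
  CONNECT TO myuser IDENTIFIED BY password
  USING '(DESCRIPTION=
            (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.100)(PORT=1521))
            (CONNECT_DATA=(SERVICE_NAME=sample)))';

SELECT COUNT(*) FROM doctor@SampleDB;
```

If this form works while USING 'sample' does not, the 'sample' alias is simply not resolvable from the source database server's tnsnames.ora.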
"Fatal error, run database recovery " when there are no txns to recover.
Hi, all.
I have a DB file containing multiple databases. Without using DBEnvironments, I can open it to get the dbnames. I can open the databases RDONLY,
and see that their contents are correct. I can open them RW, and everything works.
But when I try to create a new one, I get this:
>>> D = bsddb3.db.DB()
>>> D.open('test.db', dbname='test', dbtype=B.DB_BTREE, flags=B.DB_CREATE)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
bsddb3.db.DBRunRecoveryError: (-30974, 'DB_RUNRECOVERY: Fatal error, run database recovery -- PANIC: fatal region error detected; run recovery')
Note that this is in the non-transactional case. There is no Env, and there are no logfiles or __db files. So the error code mystifies me.
Strace shows that the file is opened RW, and read through.
>>> B.DB_VERSION_STRING
'Berkeley DB 4.8.24: (August 14, 2009)'
So, where to proceed? Many thanks for any and all help.
Hmm. Another thing to note:
[tradedesk@vader 2010-05-06.test]$ /usr/local/BerkeleyDB.4.8/bin/db_verify foo.db
db_verify: Subdatabase entry references page 266 of invalid type 13
db_verify: Page 0: non-invalid page 40 on free list
db_verify: trading.db: DB_VERIFY_BAD: Database verification failed
Not sure how that came about or how to prevent it, but it might have to do with this issue. -
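When db_verify reports structural damage like the above, one common salvage route is to dump whatever records are still readable and load them into a fresh file. A sketch only, assuming the Berkeley DB command-line utilities matching the library version (4.8 here) are on the PATH:

```
# -r puts db_dump in salvage mode: it extracts whatever key/data pairs it
# can still find, ignoring broken page structure (-R is more aggressive).
db_dump -r foo.db > foo.dump

# Rebuild a clean database file from the salvaged dump (-f names the input).
db_load -f foo.dump foo_rebuilt.db
```

Salvage mode can emit duplicate or partial records, so the rebuilt file should be checked against application expectations before replacing the original.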
Problem with database connectivity
Hi guys,
I'm having a problem with database connectivity. I'm using the MySQL database and the org.gjt.mm.mysql driver.
I've kept the org folder under the directory where the Database.java program resides.
My program is as follows:
import java.sql.*;

public class Database {
    Connection con;

    public void connect() {
        System.out.println("In connect.");
    }

    public Connection connectDB(String username, String password, String database) {
        try {
            Class.forName("org.gjt.mm.mysql.jdbc.Driver");
            String url = "jdbc:mysql:" + database;
            con = DriverManager.getConnection(url, username, password);
        } catch (SQLException e) {
            System.out.println("SQL Exception caught:" + e);
        } catch (Exception e) {
            System.out.println("Exception e caught:" + e);
        }
        return con;
    }

    public static void main(String args[]) {
        try {
            Connection c;
            Database db = new Database();
            c = db.connectDB("root", "", "//192.168.0.2/squid");
            System.out.println("Connection created.");
            c.close();
        } catch (Exception e) {
            System.out.println("Exception caught:" + e);
        }
    }
}
It gets compiled but gives the following error when run.
Exception e caught:java.lang.ClassNotFoundException: org.gjt.mm.mysql.jdbc.Driver
Connection created.
Exception caught:java.lang.NullPointerException
I don't know why the class is not found. Please help me rectify my mistake. Is there any other path to be set?
Thanks.
You need to run it with the JDBC driver in the classpath:
java -classpath .;[mySqlJdbcDriver.jar] Database -
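Beyond the classpath, the driver class name in the question may itself be off: the historic MM.MySQL driver class was org.gjt.mm.mysql.Driver, with no ".jdbc" package segment. A small self-contained sketch for checking which driver class names are actually visible on the current classpath before attempting a connection (the class names listed are candidates to probe, not guaranteed to be present):

```java
public class DriverCheck {

    // Returns true if the named class can be loaded from the current classpath.
    static boolean loaded(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] candidates = {
            "org.gjt.mm.mysql.Driver",   // MM.MySQL (the driver family the poster uses)
            "com.mysql.jdbc.Driver"      // its successor, MySQL Connector/J
        };
        for (String name : candidates) {
            System.out.println(name + " -> " + (loaded(name) ? "found" : "NOT on classpath"));
        }
    }
}
```

If neither name is found, the driver jar is missing from the classpath; if one is found, that is the name to pass to Class.forName in the connection code.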
Database recovery (online redolog ?)
hi all,
It's been a while since I last touched an Oracle DB. I have been reading around, and the emphasis for recovery is always on the backup and the archivelogs, but I think that is wrong.
Can I check:
q1) for a full database recovery, do I need the online redo logs as well?
q2) if the answer to q1) is yes, how do I duplicate the online redo logs to the standby site? (I don't think rsync will work, as it cannot ensure consistency in the redo logs.) Will Oracle Data Guard sync the online redo logs as well?
q3) for archivelogs, besides manual rsyncing, there is LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'. Do I need Enterprise Edition for the above?
Regards,
Alan
q1) For a complete recovery, yes, you need the online redo logs as well. Without the online redo logs it is still considered an incomplete recovery, since you lose the data residing in the online redo logs.
q2) You do not need to sync the online redo logs manually. Once the backup is restored to the DR Data Guard site and the MRP process is initiated, Oracle will sync the online redo logs/archivelogs automatically, based on the protection mode specified.
q3) Oracle Data Guard applies to Enterprise Edition only. Without Enterprise Edition, we can configure log shipping (the manual way).
Regards,
Ilan
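For q3, redo transport to a standby is configured with initialization parameters rather than rsync. A hedged sketch (it assumes a standby with DB_UNIQUE_NAME standby1 and a matching tnsnames.ora entry on the primary; the attribute values are examples, not a verified configuration):

```sql
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby1'
  SCOPE=BOTH;

ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;
```

As the reply notes, this redo transport mechanism is part of Data Guard and therefore requires Enterprise Edition.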