For Streams, database in archive log mode or noarchivelog mode?
Hello ,
I have a basic question:
To set up Oracle Streams, what should the database mode be (archive log or noarchivelog mode)?
Thanks in advance,
Raj
It needs to be in archive log mode; that's where the capture process mines the necessary redo information.
Kapil
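A quick way to confirm the current mode before configuring Streams (a sketch; assumes SYSDBA access in SQL*Plus, and 10g or later where no LOG_ARCHIVE_START parameter is needed):

```sql
-- Check the current log mode
SELECT log_mode FROM v$database;

-- If it reports NOARCHIVELOG, switch to ARCHIVELOG mode:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```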
Similar Messages
-
How to recover from corrupt redo log file in non-archived 10g db
Hello Friends,
I don't know much about recovering databases. I have a 10.2.0.2 database with a corrupt redo log file and I am getting the following error on startup (the db is in noarchivelog mode and has no backup). Thanks very much for any help.
Database mounted.
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
15:46:47
ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'
====
SQL> select Group#,members,status from v$log;

    GROUP#    MEMBERS STATUS
         1          1 CURRENT
         3          1 UNUSED
         2          1 INACTIVE
==
I have tried the following commands so far, but no luck.
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
Database altered.
SQL> alter database open resetlogs;
alter database open resetlogs
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
SQL> alter database open;
alter database open
ERROR at line 1:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
15:46:47
ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'

user652965 wrote:
Thanks very much for your help guys. I appreciate it. unfortunately none of these commands worked for me. I kept getting error on clearing logs that redo log is needed to perform recovery so it can't be cleared. So I ended up restoring from earlier snapshot of my db volume. Database is now open.
Thanks again for your input.

And now, as a follow-up: at a minimum you should make sure that all redo log groups have at least 3 members. Then, if you lose a single redo log file, all you have to do is shut down the db and copy one of the good members (of the same group as the lost member) over the lost member.
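Multiplexing the groups as suggested above is done with ALTER DATABASE; a sketch (the member paths are hypothetical):

```sql
-- Add extra members to each redo log group so no group has a single point of failure
ALTER DATABASE ADD LOGFILE MEMBER '/dbfiles/data_files/log1b.dbf' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/log1c.dbf' TO GROUP 1;
-- repeat for groups 2 and 3
```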
And as an additional follow-up, if you value your data you will run in archivelog mode and take regular backups of the database and archivelogs. If you fail to do this you are saying that your data is not worth saving. -
Change archivelog mode to noarchivelog mode
hi
How can I change an archived database to a non-archive database?

Issue the following commands to take a database out of ARCHIVELOG mode:
1) Take a backup of the present spfile: CREATE PFILE='<pfile path>' FROM SPFILE;
2) Shut down the database with SHUTDOWN IMMEDIATE.
3) Take a cold backup of the database to an appropriate location.
4) Change the parameter in the pfile.
log_archive_start = FALSE
log_archive_dest_1 = 'LOCATION=F:\Backup\Archive'
log_archive_dest_state_1 = DISABLE
log_archive_format = %d_%t_%s.arc
5) Make an spfile from the modified pfile in the default location.
6) Start the database with spfile.
SQL> CONNECT sys AS SYSDBA
SQL> STARTUP MOUNT ;
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ARCHIVE LOG STOP;
SQL> ALTER DATABASE OPEN;
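For reference, the converse operation (putting the database back into ARCHIVELOG mode, which the note below assumes) is a sketch along the same lines:

```sql
CONNECT sys AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```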
NOTE 1: Remember to take a baseline database backup right after enabling archivelog mode; without one you would not be able to recover.
Also, implement an archivelog backup to prevent the archive log directory from filling up. -
Error while accessing porting layer for ORACLE database via getSessionId()
Hi,
My ejb3.0 Entity is created from Emp table in scott/tiger schema of an Oracle 10g database. I am guessing I made some mistake creating the datasource or uploading the driver, because when I run my application, I get a long exception stack trace. The bottom-most entry in the stack trace is:
Caused by: com.sap.sql.log.OpenSQLException: Error while accessing porting layer for ORACLE database via getSessionId().
at com.sap.sql.log.Syslog.createAndLogOpenSQLException(Syslog.java:148)
at com.sap.sql.jdbc.direct.DirectConnectionFactory.createPooledConnection(DirectConnectionFactory.java:527)
at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:158)
at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:118)
at com.sap.sql.connect.factory.PooledConnectionFactory.createPooledConnection(PooledConnectionFactory.java:119)
at com.sap.sql.connect.factory.DriverPooledConnectionFactory.getPooledConnection(DriverPooledConnectionFactory.java:38)
at com.sap.sql.connect.datasource.DBDataSourceImpl.createPooledConnection(DBDataSourceImpl.java:685)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPool(DBDataSourcePoolImpl.java:1081)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPooledConnection(DBDataSourcePoolImpl.java:919)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.getConnection(DBDataSourcePoolImpl.java:67)
at com.sap.engine.core.database.impl.DatabaseDataSourceImpl.getConnection(DatabaseDataSourceImpl.java:36)
at com.sap.engine.services.dbpool.spi.ManagedConnectionFactoryImpl.createManagedConnection(ManagedConnectionFactoryImpl.java:123)
... 90 more

Actually, now (after the GRANT described in my reply before) the Exception has changed to:
Caused by: com.sap.sql.log.OpenSQLException: Error while accessing porting layer for ORACLE database via <b>getDatabaseHost</b>().
at com.sap.sql.log.Syslog.createAndLogOpenSQLException(Syslog.java:148)
at com.sap.sql.jdbc.direct.DirectConnectionFactory.createPooledConnection(DirectConnectionFactory.java:527)
at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:158)
at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:118)
at com.sap.sql.connect.factory.PooledConnectionFactory.createPooledConnection(PooledConnectionFactory.java:119)
at com.sap.sql.connect.factory.DriverPooledConnectionFactory.getPooledConnection(DriverPooledConnectionFactory.java:38)
at com.sap.sql.connect.datasource.DBDataSourceImpl.createPooledConnection(DBDataSourceImpl.java:685)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPool(DBDataSourcePoolImpl.java:1081)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPooledConnection(DBDataSourcePoolImpl.java:919)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.getConnection(DBDataSourcePoolImpl.java:67)
at com.sap.engine.core.database.impl.DatabaseDataSourceImpl.getConnection(DatabaseDataSourceImpl.java:36)
at com.sap.engine.services.dbpool.spi.ManagedConnectionFactoryImpl.createManagedConnection(ManagedConnectionFactoryImpl.java:123)
... 90 more -
Recovery in archive mode, but no archived logs
Hi,
I hope someone can help me with the following question.
I have a 9.2 database in archive mode. Suppose at t=t0 I make an online backup. At the end of this run, I issue an "alter system switch logfile" statement to capture the latest transactions (more or less also at t=t0). Later, at t=t1, we do some transactions and more archived redo logs are created. At t=t2 I need to restore the complete database, but I have LOST all the archive files from t=t0 onwards. Now the question is: is it still possible to recover (even if it means going back to t=t0)? I have tried it, but the system always suggests applying the (missing) archived redo's, and it seems I cannot escape this. But is it possible to get back to t=t0?
Thanks a lot for any clue or pointer!

If you take a hot backup and you lose the redo logs, you have a fundamentally inconsistent backup. Each tablespace will be internally consistent, but they probably won't have the same SCN as the control files, so you'll get this message.
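One way to observe that inconsistency is to compare the checkpoint SCNs recorded in the restored datafile headers against what the controlfile expects (a sketch):

```sql
-- SCNs as recorded in the restored datafile headers
SELECT file#, checkpoint_change# FROM v$datafile_header;
-- SCNs the controlfile expects for the same files
SELECT file#, checkpoint_change# FROM v$datafile;
```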
Do you have an earlier hot backup and the archived logs that would restore that backup to the point in time (t=0) where you took the latest hot backup? Personally, I generally like to keep at least 2 or 3 old backups, with archived log files, just in case something goes wrong with the most recent backup.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
DB restore, noarchivelog mode, lost redo log files: restore from controlfile trace
I have an 11g db. I had taken a noarchivelog (cold) backup but failed to back up the redo log files...
so when I restored the db after formatting the machine, the Oracle instance won't start.
I created a controlfile trace, but when I run it I get errors.
Since I don't have the older log files, how do I get around this issue?
Thanks
Following is a sample of the controlfile trace. Note I cannot create the redo log files,
since the db won't mount; at most it will be in NOMOUNT mode.
And below is my created controlfile:
CREATE CONTROLFILE REUSE DATABASE "XE" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1
'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_80L7C259_.LOG'
SIZE 50M BLOCKSIZE 512,
GROUP 2
'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_80L7C375_.LOG'
SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF',
'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\UNDOTBS1.DBF',
'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSAUX.DBF',
'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF'
CHARACTER SET AL32UTF8
I don't have these 2 files; what do I do to get around this situation?
'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_80L7C259_.LOG'
'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_80L7C375_.LOG'
Edited by: zycoz100 on Feb 27, 2013 10:57 PM

If you have a cold backup (database shutdown properly) without the redo logs, change this:
CREATE CONTROLFILE REUSE DATABASE "XE" NORESETLOGS NOARCHIVELOG
to:
CREATE CONTROLFILE REUSE DATABASE "XE" RESETLOGS NOARCHIVELOG
You have to change NORESETLOGS to RESETLOGS for Oracle to recreate the online redo logs.
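If the edited CREATE CONTROLFILE script then runs cleanly, the remaining step would be (a sketch, assuming the cold backup is consistent and needs no media recovery):

```sql
-- Oracle recreates the missing online redo logs at the paths
-- listed in the LOGFILE clause of the CREATE CONTROLFILE statement
ALTER DATABASE OPEN RESETLOGS;
```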
Hemant K Chitale -
SQL0964C The transaction log for the database is full
Hi,
I am planning to do a QAS refresh from the PRD system using the client export/import method. I have done the export in PRD, moved it to QAS, and started the import.
DB Size:160gb
DB:DB2 9.7
os: windows 2008.
I am facing the "SQL0964C The transaction log for the database is full" issue during the client import. Regarding this I raised an incident with SAP, and they replied to increase some parameters (LOGPRIMARY, LOGSECOND, LOGFILSIZ) temporarily and revert them back after the import. Based on that, I increased them as per the calculation below.
the filesystem size of /db2/<SID>/log_dir should be greater than LOGFILSIZ*4*(Sum of LOGPRIMARY+LOGSECONDARY) KB
From:
Log file size (4KB) (LOGFILSIZ) = 60000
Number of primary log files (LOGPRIMARY) = 50
Number of secondary log files (LOGSECOND) = 100
Total drive space required: 33GB
To:
Log file size (4KB) (LOGFILSIZ) = 70000
Number of primary log files (LOGPRIMARY) = 60
Number of secondary log files (LOGSECOND) = 120
Total drive space required: 47GB
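The sizing formula quoted above can be sanity-checked with a few lines of code (a sketch; the GB figures quoted in the post appear to be rounded):

```python
def db2_log_space_kb(logfilsiz_pages, logprimary, logsecond):
    """Space needed in the DB2 log directory, in KB.

    LOGFILSIZ is expressed in 4 KB pages, so each log file occupies
    LOGFILSIZ * 4 KB on disk (formula from the SAP guidance above).
    """
    return logfilsiz_pages * 4 * (logprimary + logsecond)

old_kb = db2_log_space_kb(60000, 50, 100)  # 36,000,000 KB, roughly 34 GB
new_kb = db2_log_space_kb(70000, 60, 120)  # 50,400,000 KB, roughly 48 GB
print(old_kb, new_kb)
```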
But I am still facing the same issue. Please help me resolve this ASAP.
Last error TP log details:
3 ETW674Xstart import of "R3TRTABUFAGLFLEX08" ...
4 ETW000 1 entry from FAGLFLEX08 (210) deleted.
4 ETW000 1 entry for FAGLFLEX08 inserted (210*).
4 ETW675 end import of "R3TRTABUFAGLFLEX08".
3 ETW674Xstart import of "R3TRTABUFAGLFLEXA" ...
4 ETW000 [ dev trc,00000] Fri Jun 27 02:20:21 2014 -774509399 65811.628079
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] CON = 0 (BEGIN) 85 65811.628164
4 ETW000 [ dev trc,00000] &+ DbSlModifyDB6( SQLExecute ): [IBM][CLI Driver][DB2/NT64] SQL0964C The transaction log for the database is full.
4 ETW000 83 65811.628247
4 ETW000 [ dev trc,00000] &+ SQLSTATE=57011 row=1
4 ETW000 51 65811.628298
4 ETW000 [ dev trc,00000] &+
4 ETW000 67 65811.628365
4 ETW000 [ dev trc,00000] &+ DELETE FROM "FAGLFLEXA" WHERE "RCLNT" = ?
4 ETW000 62 65811.628427
4 ETW000 [ dev trc,00000] &+ cursor type=NO_HOLD, isolation=UR, cc_release=YES, optlevel=5, degree=1, op_type=8, reopt=0
4 ETW000 58 65811.628485
4 ETW000 [ dev trc,00000] &+
4 ETW000 53 65811.628538
4 ETW000 [ dev trc,00000] &+ Input SQLDA:
4 ETW000 52 65811.628590
4 ETW000 [ dev trc,00000] &+ 1 CT=WCHAR T=VARCHAR L=6 P=9 S=0
4 ETW000 49 65811.628639
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.628689
4 ETW000 [ dev trc,00000] &+ Input data:
4 ETW000 49 65811.628738
4 ETW000 [ dev trc,00000] &+ row 1: 1 WCHAR I=6 "210" 34 65811.628772
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.628823
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.628873
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] (END) 27 65811.628900
4 ETW000 [ dbtran ,00000] ***LOG BY4=>sql error -964 performing DEL on table FAGLFLEXA
4 ETW000 3428 65811.632328
4 ETW000 [ dbtran ,00000] ***LOG BY0=>SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1
4 ETW000 46 65811.632374
4 ETW000 [ dev trc,00000] dbtran ERROR LOG (hdl_dbsl_error): DbSl 'DEL' 59 65811.632433
4 ETW000 RSLT: {dbsl=99, tran=1}
4 ETW000 FHDR: {tab='FAGLFLEXA', fcode=194, mode=2, bpb=0, dbcnt=0, crsr=0,
4 ETW000 hold=0, keep=0, xfer=0, pkg=0, upto=0, init:b=0,
4 ETW000 init:p=0000000000000000, init:#=0, wa:p=0X00000000020290C0, wa:#=10000}
4 ETW000 [ dev trc,00000] dbtran ERROR LOG (hdl_dbsl_error): DbSl 'DEL' 126 65811.632559
4 ETW000 STMT: {stmt:#=0, bndfld:#=1, prop=0, distinct=0,
4 ETW000 fld:#=0, alias:p=0000000000000000, fupd:#=0, tab:#=1, where:#=2,
4 ETW000 groupby:#=0, having:#=0, order:#=0, primary=0, hint:#=0}
4 ETW000 CRSR: {tab='', id=0, hold=0, prop=0, max.in@0=1, fae:blk=0,
4 ETW000 con:id=0, con:vndr=7, val=2,
4 ETW000 key:#=3, xfer=0, xin:#=0, row:#=0, upto=0, wa:p=0X00000001421A3000}
2EETW125 SQL error "-964" during "-964" access: "SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1"
4 ETW690 COMMIT "14208" "-1"
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] CON = 0 (BEGIN) 16208 65811.648767
4 ETW000 [ dev trc,00000] &+ DbSlModifyDB6( SQLExecute ): [IBM][CLI Driver][DB2/NT64] SQL0964C The transaction log for the database is full.
4 ETW000 75 65811.648842
4 ETW000 [ dev trc,00000] &+ SQLSTATE=57011 row=1
4 ETW000 52 65811.648894
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.648945
4 ETW000 [ dev trc,00000] &+ INSERT INTO DDLOG (SYSTEMID, TIMESTAMP, NBLENGTH, NOTEBOOK) VALUES ( ? , CHAR( CURRENT TIMESTAMP - CURRENT TIME
4 ETW000 50 65811.648995
4 ETW000 [ dev trc,00000] &+ ZONE ), ?, ? )
4 ETW000 49 65811.649044
4 ETW000 [ dev trc,00000] &+ cursor type=NO_HOLD, isolation=UR, cc_release=YES, optlevel=5, degree=1, op_type=15, reopt=0
4 ETW000 55 65811.649099
4 ETW000 [ dev trc,00000] &+
4 ETW000 49 65811.649148
4 ETW000 [ dev trc,00000] &+ Input SQLDA:
4 ETW000 50 65811.649198
4 ETW000 [ dev trc,00000] &+ 1 CT=WCHAR T=VARCHAR L=44 P=66 S=0
4 ETW000 47 65811.649245
4 ETW000 [ dev trc,00000] &+ 2 CT=SHORT T=SMALLINT L=2 P=2 S=0
4 ETW000 48 65811.649293
4 ETW000 [ dev trc,00000] &+ 3 CT=BINARY T=VARBINARY L=32000 P=32000 S=0
4 ETW000 47 65811.649340
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.649390
4 ETW000 [ dev trc,00000] &+ Input data:
4 ETW000 49 65811.649439
4 ETW000 [ dev trc,00000] &+ row 1: 1 WCHAR I=14 "R3trans" 32 65811.649471
4 ETW000 [ dev trc,00000] &+ 2 SHORT I=2 12744 32 65811.649503
4 ETW000 [ dev trc,00000] &+ 3 BINARY I=12744 00600306003200300030003900300033003300310031003300320036003400390000...
4 ETW000 64 65811.649567
4 ETW000 [ dev trc,00000] &+
4 ETW000 52 65811.649619
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.649670
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] (END) 28 65811.649698
4 ETW000 [ dbsyntsp,00000] ***LOG BY4=>sql error -964 performing SEL on table DDLOG 36 65811.649734
4 ETW000 [ dbsyntsp,00000] ***LOG BY0=>SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1
4 ETW000 46 65811.649780
4 ETW000 [ dbsync ,00000] ***LOG BZY=>unexpected return code 2 calling ins_ddlog 37 65811.649817
4 ETW000 [ dev trc,00000] db_syflush (TRUE) failed 26 65811.649843
4 ETW000 [ dev trc,00000] db_con_commit received error 1024 in before-commit action, returning 8
4 ETW000 57 65811.649900
4 ETW000 [ dbeh.c ,00000] *** ERROR => missing return code handler 1974 65811.651874
4 ETW000 caller does not handle code 1024 from dblink#5[321]
4 ETW000 ==> calling sap_dext to abort transaction
2EETW000 sap_dext called with msgnr "900":
2EETW125 SQL error "-964" during "-964" access: "SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1"
1 ETP154 MAIN IMPORT
1 ETP110 end date and time : "20140627022021"
1 ETP111 exit code : "12"
1 ETP199 ######################################
Regards,
Rajesh

Hi Babu,
I believe you need a restart of your system if LOGPRIMARY is changed, for the new value to take effect. If so, then increase LOGPRIMARY to 120 and LOGSECOND to 80, provided the log file size and filesystem space are enough.
Note 1293475 - DB6: Transaction Log Full
Note 1308895 - DB6: File System for Transaction Log is Full
Note 495297 - DB6: Monitoring transaction log
Regards,
Divyanshu -
Standby database is not applying redo logs due to missing archive log
We use 9.2.0.7 Oracle Database. My goal is to create a physical standby database.
I have followed all the steps necessary to fulfill this in Oracle Data Guard Concepts and Administration manual. Archived redo logs are transmitted from primary to standby database regularly. But the logs are not applied due to archive log gap.
SQL> select process, status from v$managed_standby;
PROCESS STATUS
ARCH CONNECTED
ARCH CONNECTED
MRP0 WAIT_FOR_GAP
RFS RECEIVING
RFS ATTACHED
SQL> select * from v$archive_gap;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
         1           503            677
I have tried to find the missing archives on the primary database, but was unable to. They have been deleted (somehow) regularly by the existing backup policy on the primary database. I have looked up the backups, but these archive logs are too old to be in the backup. Backup retention policy is 1 redundant backup of each file. I didn't save older backups as I didn't really need them from up to this point.
I have cross checked (using rman crosscheck) the archive log copies on the primary database and deleted the "obsolete" copies of archive logs. But, v$archived_log view on the primary database only marked those entries as "deleted". Unfortunately, the standby database is still waiting for those logs to "close the gap" and doesn't apply the redo logs at all. I am reluctant to recreate the control file on the primary database as I'm afraid this occurred through the regular database backup operations, due to current backup retention policy and it probably might happen again.
The standby creation procedure was done by using the data files from 3 days ago. The archive logs which are "producing the gap" are older than a month, and are probably unneeded for standby recovery.
What shall I do?
Kind regards and thanks in advance,
Milivoj

On a physical standby database:
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;
   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
         1             7             10
The output from the previous example indicates your physical standby database is currently missing log files from sequence 7 to sequence 10 for thread 1.
After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary
database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1
  2> AND SEQUENCE# BETWEEN 7 AND 10;
NAME
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_8.arc';
After you register these log files on the physical standby database, you can restart Redo Apply.
Note:
The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
Restoring the archived logs from the backup set
If the archived logs are no longer available in the archive destination, we need to restore the required archived logs from the backup set. This task is accomplished in the following way.
To restore range-specified archived logs:
RUN {
SET ARCHIVELOG DESTINATION TO '/oracle/arch/arch_restore';
RESTORE ARCHIVELOG FROM LOGSEQ=<xxxxx> UNTIL LOGSEQ=<xxxxxxx>;
}
To restore all the archived logs:
RUN {
SET ARCHIVELOG DESTINATION TO '/oracle/arch/arch_restore';
RESTORE ARCHIVELOG ALL;
} -
Backup Not Starting for 'Whole database offline + redo log backup' @ DB13
Hi Experts,
I am not able to perform 'Whole database offline + redo log backup' via DB13.
I have recently configured my 'init<SID>.sap' to take 'Whole database online + redo log backup' and it's working perfectly fine.
I tried taking a test backup for 'Whole database offline + redo log backup' but it didn't even start.
Thus I created another profile named init<SID>back.sap and changed the parameter
from 'backup_type = online' to 'backup_type = offline', and also tried 'backup_type = offline_force',
with all other parameters the same as in the profile init<SID>.sap.
Kindly suggest, as I need to set the backup strategy as Mon-Fri -> 'Whole database offline + redo log backup' and Sat -> 'Whole database offline + redo log backup'
One more query: while taking the redo log backup via DB13, why is it that sometimes it only saves the files and sometimes it
saves and deletes the files from the '/oracle/<SID>/oraarch' location? Please throw some light on this matter also.
Thanks,
Jitesh

Hi Mr Bhavik,
Thanks for your reply.. Here are the details you have asked for.
1. My SAP BASIS patch level is 10. (We shall be updating it by the end of this year.)
2. BR*Tools version is:
BRTOOLS 7.00 (11)
kernel release 700
patch level 11
3. I don't have any file named alert<dbsid>.log (located at /oracle/<SID>/saptrace/background/) but I do have alert_<SID>.log.
I executed the command more -p G alert_JMD.log
after my 'Whole database offline + redo log backup' failed again at DB13, but I was not able to see any specific complaints while executing the above action.
I got the Error Detailed Log in DB13 as :
Detail log: beeneedv.aft
BR0051I BRBACKUP 7.00 (20)
BR0055I Start of database backup: beeneedv.aft 2010-11-08 13.16.43
BR0484I BRBACKUP log file: /oracle/JMD/sapbackup/beeneedv.aft
BR0280I BRBACKUP time stamp: 2010-11-08 13.16.43
BR0261E BRBACKUP cancelled by signal 13
BR0056I End of database backup: beeneedv.aft 2010-11-08 13.16.44
BR0280I BRBACKUP time stamp: 2010-11-08 13.16.45
BR0054I BRBACKUP terminated with errors
4. No I have not yet Tried 'execute such Offline+REdo log backups using brback command', will Try and post it Definately
5. Query : select grantee, granted_role from dba_role_privs;
result :
SQL> select grantee, granted_role from dba_role_privs;
GRANTEE GRANTED_ROLE
SYS SAPDBA
SYS EXP_FULL_DATABASE
SYS CONNECT
IMP_FULL_DATABASE SELECT_CATALOG_ROLE
DBSNMP OEM_MONITOR
SAPSR3 CONNECT
OPS$SAPSERVICEJMD SAPDBA
SYS SELECT_CATALOG_ROLE
DBA DELETE_CATALOG_ROLE
DBA EXECUTE_CATALOG_ROLE
SYSTEM DBA
OPS$ORAJMD SAPDBA
SAPDBA GATHER_SYSTEM_STATISTICS
SYS SCHEDULER_ADMIN
SYS AQ_USER_ROLE
SYS GATHER_SYSTEM_STATISTICS
SYS DELETE_CATALOG_ROLE
DBA GATHER_SYSTEM_STATISTICS
DBA IMP_FULL_DATABASE
EXECUTE_CATALOG_ROLE HS_ADMIN_ROLE
IMP_FULL_DATABASE EXECUTE_CATALOG_ROLE
OPS$JMDADM CONNECT
SYS LOGSTDBY_ADMINISTRATOR
SYS EXECUTE_CATALOG_ROLE
SYS RESOURCE
DBA SCHEDULER_ADMIN
DBA SELECT_CATALOG_ROLE
EXP_FULL_DATABASE EXECUTE_CATALOG_ROLE
SAPDBA SELECT_CATALOG_ROLE
SYS SAPCONN
SYS OEM_ADVISOR
SYS IMP_FULL_DATABASE
SELECT_CATALOG_ROLE HS_ADMIN_ROLE
OUTLN RESOURCE
LOGSTDBY_ADMINISTRATOR RESOURCE
SAPSR3 RESOURCE
OPS$SAPSERVICEJMD RESOURCE
SYS RECOVERY_CATALOG_OWNER
DBA EXP_FULL_DATABASE
EXP_FULL_DATABASE SELECT_CATALOG_ROLE
TSMSYS RESOURCE
OPS$ORAJMD RESOURCE
SAPCONN SELECT_CATALOG_ROLE
SYS OEM_MONITOR
SYS AQ_ADMINISTRATOR_ROLE
SYS DBA
SYSTEM AQ_ADMINISTRATOR_ROLE
OPS$ORAJMD CONNECT
OPS$JMDADM SAPDBA
OPS$JMDADM RESOURCE
SAPSR3 SAPCONN
SYS HS_ADMIN_ROLE
SYSTEM SAPDBA
OPS$SAPSERVICEJMD CONNECT -
Recommended Third Party Archive and Purging Software for Oracle Database (Oracle CC&B)
Hi,
We are currently exploring third-party archive and purging software for Oracle Database, specifically as used by an Oracle Customer Care and Billing implementation. Our data is growing to the point that we think archiving and purging is now a viable option. Thanks.
Regards,
Dennis

According to the fine Oracle® Utilities Application Framework Administration Guide Chapter 17, "Archiving and Purging" is readily available to CC&B.
-
Estimate of Flashback logs to expect for my database
Hi all,
Please, I need to know (roughly) the size of flashback logs to expect for my database.
My database is about 500G in size, grows at about 1% per month, and I hope to keep flashback logs for 48 hrs.
If you currently use the flashback technology, let me know your database size, how many hours of flashback logs you keep, and the size of your logs. If I get 2 or 3 responses, then I'll be able to make an intelligent guess at what my database flashback log size might be.
Regards
baffy

The value of the DB_FLASHBACK_RETENTION_TARGET parameter (1440 minutes in our example) determines how much flashback data the database should retain in the form of flashback database logs in the flash recovery area. In order to estimate the space you need to add to your flash recovery area for accommodating the flashback database logs, first run the database for a while with the flashback database feature turned on. Then run the following query:
SQL> select estimated_flashback_size, retention_target, flashback_size
  2> from v$flashback_database_log;

ESTIMATED_FLASHBACK_SIZE RETENTION_TARGET FLASHBACK_SIZE
               126418944             1440      152600576 -
Problem on archivelog for standby database
Hi,
I have a RAC architecture on Linux, Standard Edition.
I built a standby database on another server, also Standard Edition.
I have built my standby database and opened it read-only, but when I make some modifications in my RAC I can't apply my archive logs, because in
RAC I have the thread notion, and the apply gives me an error:
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/db/RSAPRODS/FRA/RSAPROD/archivelog/2011_03_30/thread_1_seq_166.354.747075275
ORA-00325: archived log for thread 2, wrong thread # 1 in header
ORA-00334: archived log:
I don't know what to do to generate an archive log for my standby.
Thanks

$ oerr ora 325
00325, 00000, "archived log for thread %s, wrong thread # %s in header"
// *Cause: The archived log is corrupted or for another thread. Can not
// use the log for applying redo.
// *Action: Find correct archived log.

Make sure you have transferred the archived redo logs in binary mode if using ftp, and that they are not corrupted: you can try to use cksum or md5sum on the source archived redo log and on the target archived redo log. -
Rename ORACLE_SID for existing database
Hi team,
Can anyone please let me know proper way to rename existing running oracle database SID?
What are various ways to approach the tasks?
Suugest me with RMAN and also traditional way of doing it!
Hoping for your best suggestions..
Thanks
regards
dkoracle

Yes Rajesh, I have changed both instance plus db name.... that was the requirement.
You gave me the option of changing the SID too.. thanks
While editing the control file I met with two options:
1. with resetlogs
2. with noresetlogs
Since it was a complete shutdown, I hope there is no need to apply resetlogs here. But changing the SID name in the headers of the datafiles and control files will force us to open with resetlogs.
Please correct me on this!!
see the content of my control file :
CREATE CONTROLFILE REUSE SET DATABASE "new<SID>" RESETLOGS ARCHIVELOG
MAXLOGFILES 32
MAXLOGMEMBERS 3
MAXDATAFILES 254
MAXINSTANCES 8
MAXLOGHISTORY 226
LOGFILE
GROUP 1 '/u06/oradata/new<SID>/redo01a_new<SID>.dbf' SIZE 300M,
GROUP 2 '/u06/oradata/new<SID>/redo02a_new<SID>.dbf' SIZE 300M,
GROUP 3 '/u06/oradata/new<SID>/redo03a_new<SID>.dbf' SIZE 300M,
GROUP 4 '/u06/oradata/new<SID>/redo04a_new<SID>.dbf' SIZE 300M
-- STANDBY LOGFILE
DATAFILE
'/u06/oradata/new<SID>/system_new<SID>_01.dbf',
'/u06/oradata/new<SID>/undo_new<SID>_01.dbf',
'/u06/oradata/new<SID>/tools_new<SID>_01.dbf',
'/u06/oradata/new<SID>/users_new<SID>_01.dbf',
'/u06/oradata/new<SID>/indx_new<SID>_01.dbf',
'/u06/oradata/new<SID>/data101_new<SID>_01.dbf',
'/u06/oradata/new<SID>/indx101_new<SID>_01.dbf',
'/u06/oradata/new<SID>/data201_new<SID>_01.dbf',
'/u06/oradata/new<SID>/indx201_new<SID>_01.dbf',
'/u06/oradata/new<SID>/data301_new<SID>_01.dbf',
'/u06/oradata/new<SID>/indx301_new<SID>_01.dbf',
'/u06/oradata/new<SID>/data401_new<SID>_01.dbf',
'/u06/oradata/new<SID>/indx401_new<SID>_01.dbf',
'/u06/oradata/new<SID>/data101_new<SID>_02.dbf',
'/u06/oradata/new<SID>/users_new<SID>_02.dbf',
'/u06/oradata/new<SID>/users_new<SID>_03.dbf',
'/u06/oradata/new<SID>/users_new<SID>_04.dbf',
'/u06/oradata/new<SID>/data401_new<SID>_02.dbf',
'/u06/oradata/new<SID>/data201_new<SID>_02.dbf',
'/u06/oradata/new<SID>/undo_new<SID>_02.dbf',
'/u06/oradata/new<SID>/data301_new<SID>_02.dbf',
'/u06/oradata/new<SID>/indx301_new<SID>_02.dbf',
'/u06/oradata/new<SID>/indx101_new<SID>_02.dbf',
'/u06/oradata/new<SID>/data101_new<SID>_03.dbf',
'/u06/oradata/new<SID>/indx201_new<SID>_02.dbf',
'/u06/oradata/new<SID>/indx101_new<SID>_03.dbf',
'/u06/oradata/new<SID>/sysaux_new<SID>_01.dbf'
CHARACTER SET UTF8;
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE '/u06/oradata/new<SID>/temp_01.dbf'
SIZE 2000M REUSE AUTOEXTEND OFF;
Once the database was opened, I changed the mode to noarchivelog.
DR strategy for large database
Hi All,
We have a 30TB database for which we need to design a backup strategy.( Oracle 11gR1 SE, 2 Node RAC with ASM)
Client needs a DR site for the database and from the DR site we will be running tape backup.
The main constraints we are facing here are the size of the DB, which will grow to 50 TB in future, and that we are running Oracle Standard Edition.
Taking a full RMAN backup to a SAN box will take around 1 week for us for a DB size of 30TB.
Options for us:
1. Create a manual standby and apply archive logs (we can't use Data Guard as we are on SE)
2. Storage-level replication (using HP Continuous Access)
3. Use third-party tools such as SharePlex, GoldenGate, DBvisit, etc.
Which one will be the best option here with respect to cost and time, or do we have any better option than these?
We can't upgrade to Oracle EE as of now, since we need to meet the project deadline for the client. We are migrating legacy data to production now, and this would be interrupted if we went for an upgrade.
Thanks in advance.
Arun
Edited by: user12107367 on Feb 26, 2011 7:47 AM
Modified the heading from Backup to DR

Arun,
Yes, this limitation about BCT (block change tracking) is problematic in SE, but after all, if everything were included in SE, who would pay for the EE licence :) ?
The only good thing if BCT is not in use is that RMAN checks the whole database for corruption even if the backup is an incremental one. There is no miraculous "full Oracle" solution if your backups are so slow, but as you mentioned, a manual standby with delayed periodic application of the archives is possible. It's up to you to evaluate whether it works in your case though: how many archive log files will you generate daily, and how long will it take to apply them in your environment?
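The closing question above (daily archive volume versus apply time) can be sketched as a back-of-the-envelope feasibility check; all numbers here are hypothetical:

```python
def standby_keeps_up(daily_redo_gb, apply_rate_gb_per_hr, window_hr=24.0):
    """Return (ok, hours_needed): can the standby apply one day's
    archive logs within the given window?"""
    hours_needed = daily_redo_gb / apply_rate_gb_per_hr
    return hours_needed <= window_hr, hours_needed

# e.g. 200 GB of redo per day, standby applies 25 GB/hr
ok, hrs = standby_keeps_up(daily_redo_gb=200, apply_rate_gb_per_hr=25)
print(ok, hrs)
```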
(Note that GoldenGate is no longer a third-party tool: it is now an Oracle product, and it is clearly positioned as the recommended replacement for Streams.)
Best regards
Phil -
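The incremental-backup point above (level-1 backups work in SE, but without BCT RMAN must still read every block to find the changed ones) can be sketched in RMAN; the tags are illustrative:

```sql
-- Weekly baseline, then daily deltas
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'weekly_base';
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'daily_delta';
```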
Data Guard configuration for RAC database disappeared from Grid control
Primary Database Environment - Three node cluster
RAC Database 10.2.0.1.0
Linux Red Hat 4.0 2.6.9-22 64bit
ASM 10.2.0.1.0
Management Agent 10.2.0.2.0
Standby Database Environment - single node
Oracle Enterprise Edition 10.2.0.1.0, single standby
Linux Red Hat 4.0 2.6.9-22 64bit
ASM 10.2.0.1.0
Management Agent 10.2.0.2.0
Grid Control 10.2.0.1.0 - Node separate from standby and cluster environments
Oracle 10.1.0.1.0
Grid Control 10.2.0.1.0
Red Hat 4.0 2.6.9-22 32bit
After adding a logical standby database for a RAC database through Grid Control, I noticed some time later that the Data Guard configuration had disappeared from Grid Control. I'm not sure why, but it is gone. I did notice that something went wrong with the standby creation, but I did not get much feedback from Grid Control. The last thing I did was view the configuration; see the output below.
Initializing
Connected to instance qdcls0427:ELCDV3
Starting alert log monitor...
Updating Data Guard link on database homepage...
Data Protection Settings:
Protection mode : Maximum Performance
Log Transport Mode settings:
ELCDV.qdx.com: ARCH
ELXDV: ARCH
Checking standby redo log files.....OK
Checking Data Guard status
ELCDV.qdx.com : ORA-16809: multiple warnings detected for the database
ELXDV : Creation status unknown
Checking Inconsistent Properties
Checking agent status
ELCDV.qdx.com
qdcls0387.qdx.com ... OK
qdcls0388.qdx.com ... OK
qdcls0427.qdx.com ... OK
ELXDV ... WARNING: No credentials available for target ELXDV
Attempting agent ping ... OK
Switching log file 672.Done
WARNING: Skipping check for applied log on ELXDV : disabled
Processing completed.
Here are the steps followed to add the standby database in Grid Control
Maintenance tab
Setup and Manage Data Guard
Logged in as sys
Add standby database
Create a new logical standby database
Perform a live backup of the primary database
Specify backup directory for staging area
Specify standby database name and Oracle home location
Specify file location staging area on standby node
At the end I am presented with a review of the selected options, and then the standby database is created
Has anybody come across a similar issue?
Thanks,
Any resolution on this?
I just created a Logical Standby database and I'm getting the same warning (WARNING: No credentials available for target ...) when I do a 'Verify Configuration' from the Data Guard page.
Everything else seems to be working fine. Logs are being applied, etc.
I can't figure out what credentials it's looking for.
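Independently of the Grid Control credentials warning, apply progress on a logical standby can be confirmed directly in SQL*Plus (a sketch, assuming a SYSDBA session on the standby; column names are per the 10.2 dynamic performance views):

```sql
-- How far SQL Apply has progressed versus the latest SCN received
SELECT applied_scn, latest_scn FROM v$logstdby_progress;
-- Current state of SQL Apply (e.g. APPLYING)
SELECT state FROM v$logstdby_state;
```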