Oracle 10g ASM: converting from NOARCHIVELOG to ARCHIVELOG mode
DATABASE Details
Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Mode
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 26269
Current log sequence 26274
SQL>
TYPE :
DW Batch Process
Redo log groups
6 groups with 2 files each, approximately 2 GB per file. Approximately 400 GB of redo is generated per day.
OS : WINDOWS 2008 SERVER
Current Size
SQL> select sum(bytes)/(1024*1024*1024) from v$datafile;
SUM(BYTES)/(1024*1024*1024)
1003.86945
ASM Details
SELECT GROUP_NUMBER,NAME FROM V$ASM_DISKGROUP;
GROUP_NUMBER NAME
1 KK_DATA
SQL> SELECT dg.name AS diskgroup, SUBSTR(c.instance_name,1,12) AS instance,
2 SUBSTR(c.db_name,1,12) AS dbname, SUBSTR(c.SOFTWARE_VERSION,1,12) AS software,
3 SUBSTR(c.COMPATIBLE_VERSION,1,12) AS compatible
4 FROM V$ASM_DISKGROUP dg, V$ASM_CLIENT c
5 WHERE dg.group_number = c.group_number;
DISKGROUP INSTANCE DBNAME SOFTWARE COMPATIBLE
KK_DATA +asm KK 10.2.0.4.0 10.2.0.0.0
Currently the DB is in NOARCHIVELOG mode and needs to be put into ARCHIVELOG mode.
What additional precautions need to be taken, in the case of ASM, for managing archive mode?
I would recommend creating a new ASM diskgroup and assigning the flash recovery area to this group. This is part of ASM best practice: if your data diskgroup is lost, you can still use the RMAN backups/archivelogs for recovery.
-Amit
http://askdba.org/weblog/
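A hedged sketch of that suggestion plus the actual mode switch (the +FRA diskgroup name, disk strings, and FRA size are illustrative assumptions, not values from this system; adjust to your environment):

```sql
-- From the ASM instance: create a dedicated diskgroup for the flash recovery
-- area. The Windows disk names below are placeholders for your labeled disks.
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  DISK '\\.\ORCLDISKFRA0', '\\.\ORCLDISKFRA1';

-- From the database instance: point the recovery area at the new diskgroup.
ALTER SYSTEM SET db_recovery_file_dest_size = 500G SCOPE=SPFILE;
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=SPFILE;

-- Switch the log mode; this requires a clean shutdown and a mount-state restart.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST
```

With roughly 400 GB of redo per day, size db_recovery_file_dest_size generously and make sure archive logs are backed up and deleted frequently, or the FRA will fill and the database will hang.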
Similar Messages
-
Create a new user for oracle 10G ASM instance with sysdba system privilege
Hi,
In our GoldenGate project, we require the SYS user credential to connect to the Oracle 10g ASM instance to read the database transaction logs. But our client will not provide the SYS user credential to connect to the ASM instance.
I get the error "ORA-01109: database not open" when I try to create a new user in the Oracle 10g ASM instance using the steps below:
1. Login using "sqlplus / as sysdba"
2. Create user <username> identified by <password>;
But in an Oracle 11g ASM instance, I'm able to create a new user by connecting to the ASM instance with the SYSASM role without issues.
Is there any workaround to create a new user with the SYSDBA system privilege in an Oracle 10g ASM instance?
Thanks in advance.
Hi,
Recreate the password file for the ASM instance as follows:
Unix:
orapwd file=<ORACLE_HOME>/dbs/PWD<SID> password=<sys_password>
Windows:
orapwd file=<ORACLE_HOME>/database/PWD<SID>.ora password=<sys_password>
Now that the SYS password is reset, we are ready to use SYS for ASM management. I decided to create another user, ASMDBA, as follows. (Note: the SYSASM role exists only from 11g onward; on a 10g ASM instance you would grant SYSDBA instead.)
SQL> create user ASMDBA identified by test01;
User created.
SQL> grant SYSASM, SYSOPER to ASMDBA;
Grant succeeded.
SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
SYS TRUE TRUE TRUE
ASMDBA FALSE TRUE TRUE
Please see this link : http://orachat.com/how-to-change-asm-sys-password-creating-sysasm-user-11g/
Thank you -
How to delete client from oracle 10g ASM
Hi,
I would like to know how to delete a client from Oracle 10g ASM. While configuring Data Guard on the same Windows OS, I created the standby in ASM by mistake.
However, I later corrected the datafile destination. My primary database is in ASM and the standby is on disk.
Could anyone please tell me how to delete the client from ASM? In asmcmd it shows as db_unknown, and from v$asm_client:
GROUP_NUMBER INSTANCE_NAME DB_NAME STATUS    SOFTWARE_VERSION COMPATIBLE_VERSION
           1 stby          stby    CONNECTED 10.2.0.1.0       10.2.0.1.0
Thanks and regards,
shaan
Hi Deepak,
Please see the result from the view; I want to delete stby:
SQL> select * from v$asm_client;
GROUP_NUMBER INSTANCE_NAME DB_NAME STATUS    SOFTWARE_VERSION COMPATIBLE_VERSION
           1 prim          prim    CONNECTED 10.2.0.1.0       10.2.0.1.0
           2 prim          prim    CONNECTED 10.2.0.1.0       10.2.0.1.0
           1 stby          stby    CONNECTED 10.2.0.1.0       10.2.0.1.0
regards,
shaan -
How do I map Hitachi SAN LUNs to Solaris 10 and Oracle 10g ASM?
Hi all,
I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) RAC Clusterware for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 filesystem.
My question is this:
How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
I know that Sun Solaris 10 uses /dev/rdsk/CwTxDySz naming convention at the OS level for disks. However, how would I map this to Oracle 10g ASM settings?
I cannot find this critical piece of information ANYWHERE!!!!
Thanks for your help!
Yes, that is correct. However, the Solaris 10 MPxIO multipathing software that we are using with the Hitachi SAN presents an extra layer of complexity and issues with the ASM configuration, which means that ASM may get confused when it attempts to find the new LUNs from the Hitachi SAN at the Solaris OS level. Oracle Metalink note 396015.1 describes this issue.
So my question is this: how to configure the ASM instance initialization parameter asm_diskstring to recognize the new Hitachi LUNs presented to the Solaris 10 host?
Lets say that I have the following new LUNs:
/dev/rdsk/c7t1d1s6
/dev/rdsk/c7t1d2s6
/dev/rdsk/c7t1d3s6
/dev/rdsk/c7t1d4s6
Would I set the ASM initialization parameter asm_diskstring to /dev/rdsk/c7t1d*s6
as the correct setting, so that the ASM instance recognizes my new Hitachi LUNs? Solaris needs to map these LUNs using pseudo devices in the Solaris OS for ASM to recognize the new disks.
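If that asm_diskstring approach is taken, a hedged sketch (using the example device names from this post; verify OS-level permissions on the slices first) might be:

```sql
-- From the ASM instance: restrict discovery to the MPxIO device slices above.
-- The pattern is an assumption based on the four example LUNs in this post.
ALTER SYSTEM SET asm_diskstring = '/dev/rdsk/c7t1d*s6' SCOPE=BOTH;

-- Verify which devices ASM now discovers and whether they are candidates.
SELECT path, header_status FROM v$asm_disk;
```

The oracle software owner also needs read/write access on the character devices (chown/chmod at the OS level), or the disks will not appear in v$asm_disk at all.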
How would I set this up in Solaris 10 with Sun multipathing (MPxIO) and Oracle 10g RAC ASM?
I want to get this right to avoid the dreaded ORA-15072 errors when creating a diskgroup with external redundancy for the Oracle 10g RAC ASM installation process. -
When crash recovery occurs, why is the active online redo log used rather than the archived log?
If the current redo log has been archived but is still 'ACTIVE': as we all know, the archived log is just an archived copy of that online redo log, so they contain the same data. But why does crash recovery use the active online redo log and not the archived log? (I think that if crash recovery could use archived logs, the online redo log could be overwritten whether it is 'ACTIVE' or not.)
Quote:
Re: v$log : How redo log file can have a status ACTIVE and be already archived?
Hemant K Chitale
If your instance crashes, Oracle attempts Instance Recovery -- reading from the Online Redo Logs. It doesn't need ArchiveLogs for Instance Recovery.
TanelPoder
Whether the log is already archived or not doesn't matter here; when the instance crashes, Oracle needs some blocks from that redolog. The archivelog is just an archived copy of the redolog, so you could use either the online or the archive log for the recovery; it's the same data in there (Oracle reads the log/archivelog file header when it tries to use it for recovery and validates whether it contains the changes (RBA range) it needs).
Aman.... wrote:
John,
Are you sure that instance recovery (not media recovery) would use the archived redo logs? Since the only thing lost is the instance, no archived redo log would be generated from the current redo log, and the previous archived redo logs would already have been checkpointed to the data files, so IMHO archived redo logs won't participate in the instance recovery process. Yep, I shall watch the video, but tomorrow.
Regards
Aman....
That's what I said. Or meant to say. If Oracle used archivelogs for instance recovery, it would not be possible to recover in NOARCHIVELOG mode. So recovery relies exclusively on the online logs.
Sorry I wasted your time, I'll try to be less ambiguous in future -
[help me] Oracle 10G + ASM "ORA-00376: file 5 cannot be read at this time"
We have used Oracle 10G R2 RAC + ASM on Redhat 4 (EMC cx700 Storage)
I found the errors below in the alert log, and inserts, updates, and deletes stopped working in the database.
Sun May 27 01:12:34 2007
SUCCESS: diskgroup ARCH was mounted
SUCCESS: diskgroup ARCH was dismounted
SUCCESS: diskgroup ARCH was mounted
SUCCESS: diskgroup ARCH was dismounted
SUCCESS: diskgroup ARCH was mounted
SUCCESS: diskgroup ARCH was dismounted
Sun May 27 01:19:11 2007
Errors in file /oracle/product/admin/DB/udump/db3_ora_15854.trc:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00372: file 5 cannot be modified at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
So:
I checked and recovered data file
SQL> select name,status,file# from v$datafile where status ='RECOVER';
NAME                                       STATUS    FILE#
+DATA/db/datafile/undotbs3.257.617849281   RECOVER   5
RMAN> run {
allocate channel t1 type 'SBT_TAPE';
allocate channel t2 type DISK;
recover datafile 5;
}
recover completed.
SQL> alter database datafile 5 online;
But:
What is going on?
I checked the EMC storage and found no disk errors.
I checked the ASM alert log and found nothing.
I don't know what the problem is, because I had the same problem two days ago.
Today the error is on the undo datafile of node 3; two days ago the error was on the undo of node 4.
Please, can anybody tell me what bug or problem this could be?
trace file:
/oracle/product/admin/DB/udump/db3_ora_15854.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
ORACLE_HOME = /oracle/product/10.2.0/db
System name: Linux
Node name: db03.domain
Release: 2.6.9-42.ELsmp
Version: #1 SMP Wed Jul 12 23:32:02 EDT 2006
Machine: x86_64
Instance name: DB3
Redo thread mounted by this instance: 3
Oracle process number: 65
Unix process pid: 15854, image: [email protected]
*** SERVICE NAME:(DB) 2007-05-27 01:19:11.574
*** SESSION ID:(591.62658) 2007-05-27 01:19:11.574
*** 2007-05-27 01:19:11.574
ksedmp: internal or fatal error
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00372: file 5 cannot be modified at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
Current SQL statement for this session:
INSERT INTO DATA_ALL VALUES (:B1 ,:B2 ,:B3 ,:B4 ,:B5 ,:B6 ,:B7 ,:B8 ,:B9 ,:B10 ,:B11 ,:B12 ,:B13 ,:B14 ,:B15 ,:B16 ,:B17 ,:B18 ,:B19 ,:B20 ,:B21 ,:B22 ,:B23 ,:B24 ,:B25 ,:B26 ,:B27 ,:B28 ,:B29 ,:B30 ,:B31 ,:B32 ,:B33 ,:B34 ,:B35 ,:B36 ,:B37 ,:B38 ,:B39 ,:B40 ,:B41 ,:B42 ,:B43 ,:B44 ,:B45 ,:B46 ,:B47 ,:B48 ,:B49 ,:B50 )
----- PL/SQL Call Stack -----
object line object
handle number name
0x21dc2b4b8 780 package body MGR.AC
0x21e4815b0 3 anonymous block
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedst()+31 call ksedst1() 000000000 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
ksedmp()+610 call ksedst() 000000000 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
ksupop()+3581 call ksedmp() 000000003 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
opiodr()+3899 call ksupop() 000000002 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
rpidrus()+198 call opiodr() 000000066 ? 000000006 ?
.......etc.............................
HunterX (Surachart Opun)
To me it looks like the undo tablespace on node 3 is filling up and not marking old undo as expired. Use this query to find out whether it is labeling old undo as expired:
SELECT tablespace_name, status, COUNT(*) AS HOW_MANY
FROM dba_undo_extents
GROUP BY tablespace_name, status;
Another thing I noticed from your alert log is that this is only happening on undotbs3, which I assume is on node 3.
1) Try recreating the undo tablespace on node 3.
2) Take node 3 out of service (stop nodeapps, ASM, the instance, and finally CRS on node 3) and see whether you can do DML on your database.
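A hedged sketch of step 1 (the replacement tablespace name, size, and the DB3 SID value are assumptions based on the trace file in this thread; confirm no undo segments are still active before dropping the old tablespace):

```sql
-- Create a replacement undo tablespace for node 3 in the same diskgroup.
CREATE UNDO TABLESPACE UNDOTBS3A
  DATAFILE '+DATA' SIZE 4G AUTOEXTEND ON;

-- Point instance 3 at the new undo tablespace.
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS3A' SID='DB3';

-- Only once no transactions reference the old undo segments:
DROP TABLESPACE UNDOTBS3 INCLUDING CONTENTS AND DATAFILES;
```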
-Moid -
Oracle 10G Checkpoint not complete Cannot allocate new log
We have an Oracle 10g database that has 4 groups of 200M redo logs. We are constantly seeing "Checkpoint not complete" followed by "Cannot allocate new log" messages in the alert log. When I look at the v$instance_recovery view I see this:
SQL> select WRITES_MTTR, WRITES_LOGFILE_SIZE, WRITES_LOG_CHECKPOINT_SETTINGS, WRITES_OTHER_SETTINGS,
2 WRITES_AUTOTUNE, WRITES_FULL_THREAD_CKPT from v$instance_recovery;
WRITES_MTTR  WRITES_LOGFILE_SIZE  WRITES_LOG_CHECKPOINT_SETTINGS  WRITES_OTHER_SETTINGS  WRITES_AUTOTUNE  WRITES_FULL_THREAD_CKPT
          0                    0                        66329842                      0           309461                    41004
It seems most of our checkpointing is being caused by log_checkpoint_interval being set to 10000, which means we are doing 20 checkpoints during one redo log since our OS block size is 1024. What I'm thinking of doing is first adding two more redo log groups, and then either increasing log_checkpoint_interval to 40000 (5 checkpoints per redo log) or unsetting log_checkpoint_interval and setting fast_start_mttr_target to 600. Which would be the recommended parameter change? I think Oracle recommends using fast_start_mttr_target. Are there any other suggestions?
Hi
>>unsetting log_checkpoint_interval and setting fast_start_mttr_target to 600.
It's a better idea; have a look at it:
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams068.htm#REFRN10058
Just set the fast_start_mttr_target parameter; it will resolve your problem.
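A hedged sketch of that change (the 600-second target comes from this thread; validate it against your own recovery-time requirement):

```sql
-- Stop checkpointing on a fixed redo-block interval...
ALTER SYSTEM SET log_checkpoint_interval = 0 SCOPE=BOTH;

-- ...and let Oracle pace incremental checkpoints toward a recovery-time
-- target (in seconds) instead.
ALTER SYSTEM SET fast_start_mttr_target = 600 SCOPE=BOTH;

-- Watch how the actual estimate tracks the target over time.
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
```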
Thanks
Kuljeet -
Oracle 10g ASM and RAC configuration
Hi all,
I want to ask everybody something about Oracle 10g RAC and ASM configuration. We plan to migrate from 9i to Oracle 10g, and we will begin configuring Oracle, but we have to decide which configuration is best.
Our materials are below:
Hardware: RP 3440 (HP)
OS : HPUX 11i Ver 1
Storage: EVA 4000 (eva disk group)
The problem is:
Our supplier recommends HP Serviceguard + HP Serviceguard Extension for RAC + RAC and raw devices as the configuration.
But we want to use Oracle Clusterware + RAC + ASM.
My question is: does anybody know which is the best configuration? We want to use ASM.
Can HP Serviceguard work with ASM?
Any documentation or links explaining Oracle RAC and ASM configuration would be appreciated.
Thanks for your help.
Regards.
raitsarevo
Hi,
I want to use RAC for clustering. My shared disk is a NetApp Filer FS250 that can only be mounted through NFS. Raw devices have to be excluded; also, OCFS, I think, works similarly to raw devices and needs to see a physical disk to create a partition. Can ASM instead work on an NFS mount?
Ste
Visit http://www.stefanocislaghi.it/ -
Oracle Restore from Online backup without any archive logs
Hi,
Maybe a dumb question: if I have a good online backup (say it took 2 hours, with database activity going on during the backup), and I have lost all the archive logs generated after the online backup started, is it possible at all to do a restore using that backup, complete or incomplete, and bring the database back to normal operation? Some details on this, please.
Thanks.
Let us see the reasoning behind this.
Database:WORLDDB
WORLDDB configuration:-
TBSP_WDB
->TBSP_WDB_01.dbf
->TBSP_WDB_02.dbf
SYSTEM
->SYSTEM.dbf
USERS
and so on for the tablespaces.
CASE 1:-
User managed backup.
You issue
ALTER TABLESPACE TBSP_WDB BEGIN BACKUP;
The DB keeps working; all transactions are recorded in the redo stream, but the SCN information is not updated in the file header.
Also note that the other tablespaces are being continuously worked on, hence their SCN numbers are a moving target.
http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96572/osbackups.htm#10012
So if you begin the TBSP_WDB tablespace backup at time t1 and finish your backups at time t2, you will need all of the archive logs between t1 and t2.
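The CASE 1 flow can be sketched as follows (the file copy step is an OS-specific placeholder, not an Oracle command):

```sql
-- t1: freeze the datafile-header checkpoint SCN for this tablespace.
ALTER TABLESPACE TBSP_WDB BEGIN BACKUP;

-- Copy TBSP_WDB_01.dbf and TBSP_WDB_02.dbf with OS tools; the DB stays open
-- and keeps writing to the files, which is why the redo is essential.

-- t2: thaw the header. Redo generated between t1 and t2 is required to
-- make the copied files consistent.
ALTER TABLESPACE TBSP_WDB END BACKUP;

-- Archive the redo that spans the copy so the backup is recoverable.
ALTER SYSTEM ARCHIVE LOG CURRENT;
```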
CASE 2:-
If we do it with RMAN, RMAN takes care of the file header update information and goes on its merry way; those details are hidden from the end user. If a block is fractured (updated while the backup is going on), RMAN knows about it and will re-read the affected blocks.
My thinking is that you would need archive logs unless it is a cold backup with log switches in between.
If you do hot backups you need archivelogs, in my view. The number of archivelogs required can be decreased considerably through RMAN.
http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96566/rcmconc1.htm#458821
Summary:-
Either way archivelogs are needed. -
Step by Step Oracle 10g ASM Installation on Linux
Hello,
Can anybody provide me a link to a step-by-step 10g ASM installation on Linux (i.e., from disk partitioning to final configuration)? I searched on Google but didn't find ASM alone, though there were many step-by-step guides for RAC environments. I want to learn ASM on its own first. Please help me, as I'm learning ASM for the first time.
Looking for Kind reply
Regards,
Abbasi
You should check the Oracle Learning Library for any such doubts:
http://apex.oracle.com/pls/apex/f?p=9830:28:0::NO:RIR:IR_PRODUCT,IR_PRODUCT_SUITE,IR_PRODUCT_COMPONENT,IR_RELEASE,IR_TYPE,IRC_ROWFILTER,IR_FUNCTIONAL_CATEGORY:,,,,,automatic%20storage%20management,
HTH
Aman.... -
Archive log stop/archive log start
Hi,
Kindly explain what the following do:
>alter system archive log stop;
>alter system archive log start;
Regards,
Mathew
http://download.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_23a.htm#2053642
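In short, per the 9.2 documentation linked above, the two statements toggle automatic archiving by the archiver processes without changing the database's log mode; a sketch:

```sql
-- Disable automatic archiving by the ARCn processes. The database remains in
-- ARCHIVELOG mode, so a filled online log still cannot be overwritten until
-- it is archived (manually, or after archiving is restarted).
ALTER SYSTEM ARCHIVE LOG STOP;

-- Re-enable automatic archiving.
ALTER SYSTEM ARCHIVE LOG START;
```

Note that from 10g onward automatic archiving is always enabled in ARCHIVELOG mode, so these statements are mainly of historical interest there.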
-
Standby database Archive log destination confusion
Hi All,
I need your help here..
This is the first time this situation has arisen. We had sync issues in the Oracle 10g standby database prior to this archive log destination confusion, so we rebuilt the standby to overcome the sync issue. But ever since then, the archive logs in the standby database have been going to two different locations.
The spfile entries are provided below:
*.log_archive_dest_1='LOCATION=/m99/oradata/MARDB/archive/'
*.standby_archive_dest='/m99/oradata/MARDB/standby'
Prior to rebuilding the standby database, the archive logs were going to the /m99/oradata/MARDB/archive/ location, which is the correct location. But now the archive logs are going to both /m99/oradata/MARDB/archive/ and /m99/oradata/MARDB/standby, with the majority of them going to /m99/oradata/MARDB/standby. This is pretty unusual.
The archives in the production are moving to /m99/oradata/MARDB/archive/ location itself.
Could you kindly help me overcome this issue.
Regards,
Dan
Hi Anurag,
Thank you for update.
Prior to rebuilding the standby database, standby_archive_dest was set as it is now. No modifications were made to the archive destination locations.
The primary and standby databases are on different servers, and Data Guard is used to transfer the files.
I wanted to highlight one more point here: the archive locations are similar to the ones I mentioned for the other standby databases, but there the archive logs go only to the /archive location and not to the /standby location.
Oracle archive log compression
Dear All,
Please help me: how do I compress Oracle archive logs in 10g?
Thanks,
Manas
manas wrote:
Dear All,
Please help me: how do I compress Oracle archive logs in 10g?
Thanks,
Manas
Use RMAN with the COMPRESSED option, or with COMPRESSED and DELETE INPUT:
RMAN> backup as compressed backupset archivelog all;
Starting backup at 23-FEB-12
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=72 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=16 RECID=11 STAMP=775778430
input archived log thread=1 sequence=17 RECID=12 STAMP=775819218
input archived log thread=1 sequence=18 RECID=13 STAMP=775835555
input archived log thread=1 sequence=19 RECID=14 STAMP=775835555
channel ORA_DISK_1: starting piece 1 at 23-FEB-12
channel ORA_DISK_1: finished piece 1 at 23-FEB-12
piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2012_02_23\O1_MF_ANNNN_TAG20120223T115447_7NCPXJBF_.BKP tag=TAG20120223T115447 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=1 RECID=15 STAMP=775847193
input archived log thread=1 sequence=2 RECID=16 STAMP=775847245
input archived log thread=1 sequence=3 RECID=17 STAMP=775847246
input archived log thread=1 sequence=4 RECID=18 STAMP=775864847
input archived log thread=1 sequence=5 RECID=19 STAMP=775906825
input archived log thread=1 sequence=6 RECID=20 STAMP=775906906
input archived log thread=1 sequence=7 RECID=21 STAMP=776001285
channel ORA_DISK_1: starting piece 1 at 23-FEB-12
channel ORA_DISK_1: finished piece 1 at 23-FEB-12
piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2012_02_23\O1_MF_ANNNN_TAG20120223T115447_7NCPXQS2_.BKP tag=TAG20120223T115447 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 23-FEB-12
Starting Control File and SPFILE Autobackup at 23-FEB-12
piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\AUTOBACKUP\2012_02_23\O1_MF_S_776001303_7NCPY044_.BKP comment=NONE
Finished Control File and SPFILE Autobackup at 23-FEB-12
RMAN>
=====================
RMAN> backup as compressed backupset archivelog all delete input;
Starting backup at 23-FEB-12
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=9 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=16 RECID=11 STAMP=775778430
input archived log thread=1 sequence=17 RECID=12 STAMP=775819218
input archived log thread=1 sequence=18 RECID=13 STAMP=775835555
input archived log thread=1 sequence=19 RECID=14 STAMP=775835555
channel ORA_DISK_1: starting piece 1 at 23-FEB-12
channel ORA_DISK_1: finished piece 1 at 23-FEB-12
piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2012_02_23\O1_MF_ANNNN_TAG20120223T115645_7NCQ15OV_.BKP tag=TAG20120223T115645 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_20\O1_MF_1_16_7N4X928H_.ARC RECID=11 STAMP=775778430
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_21\O1_MF_1_17_7N653S9L_.ARC RECID=12 STAMP=775819218
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_21\O1_MF_1_18_7N6MQBWS_.ARC RECID=13 STAMP=775835555
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_21\O1_MF_1_19_7N6NXV4M_.ARC RECID=14 STAMP=775835555
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=1 RECID=15 STAMP=775847193
input archived log thread=1 sequence=2 RECID=16 STAMP=775847245
input archived log thread=1 sequence=3 RECID=17 STAMP=775847246
input archived log thread=1 sequence=4 RECID=18 STAMP=775864847
input archived log thread=1 sequence=5 RECID=19 STAMP=775906825
input archived log thread=1 sequence=6 RECID=20 STAMP=775906906
input archived log thread=1 sequence=7 RECID=21 STAMP=776001285
input archived log thread=1 sequence=8 RECID=22 STAMP=776001403
channel ORA_DISK_1: starting piece 1 at 23-FEB-12
channel ORA_DISK_1: finished piece 1 at 23-FEB-12
piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2012_02_23\O1_MF_ANNNN_TAG20120223T115645_7NCQ1GLP_.BKP tag=TAG20120223T115645 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_21\O1_MF_1_1_7N70G0F6_.ARC RECID=15 STAMP=775847193
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_21\O1_MF_1_2_7N70HOHF_.ARC RECID=16 STAMP=775847245
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_21\O1_MF_1_3_7N70HP3O_.ARC RECID=17 STAMP=775847246
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_21\O1_MF_1_4_7N7KONB8_.ARC RECID=18 STAMP=775864847
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_22\O1_MF_1_5_7N8TOK3Q_.ARC RECID=19 STAMP=775906825
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_22\O1_MF_1_6_7N8TR2S1_.ARC RECID=20 STAMP=775906906
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_23\O1_MF_1_7_7NCPXCKL_.ARC RECID=21 STAMP=776001285
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2012_02_23\O1_MF_1_8_7NCQ13VV_.ARC RECID=22 STAMP=776001403
Finished backup at 23-FEB-12
Starting Control File and SPFILE Autobackup at 23-FEB-12
piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL\AUTOBACKUP\2012_02_23\O1_MF_S_776001423_7NCQ1RFZ_.BKP comment=NONE
Finished Control File and SPFILE Autobackup at 23-FEB-12
RMAN>
Solaris 10 and Hitachi LUN mapping with Oracle 10g RAC and ASM?
Hi all,
I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) RAC Clusterware for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 filesystem.
My question is this:
How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
I know that Sun Solaris 10 uses /dev/rdsk/CwTxDySz naming convention at the OS level for disks. However, how would I map this to Oracle 10g ASM settings?
I cannot find this critical piece of information ANYWHERE!!!!
Thanks for your help!
You don't seem to state categorically that you are using Solaris Cluster, so I'll assume it, since this is mainly a forum about Solaris Cluster (and IMHO, Solaris Cluster with Clusterware is better than Clusterware on its own).
Clusterware has to see the same device names from all cluster nodes. This is why Solaris Cluster (SC) is a positive benefit over Clusterware alone: SC provides an automatically managed, consistent namespace. Clusterware on its own forces you to manage either the symbolic links (or worse, mknods) to create a consistent namespace!
So, given the SC consistent namespace, you simply add the raw devices into the ASM configuration, i.e. /dev/did/rdsk/dXsY. If you are using Solaris Volume Manager, you would use /dev/md/<setname>/rdsk/dXXX, and if you were using CVM/VxVM you would use /dev/vx/rdsk/<dg_name>/<dev_name>.
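A hedged example of pointing ASM at the SC namespace (the d*s6 slice pattern is an assumption for illustration; match it to your actual DID devices and slices):

```sql
-- Discover shared disks through the Solaris Cluster DID namespace.
ALTER SYSTEM SET asm_diskstring = '/dev/did/rdsk/d*s6' SCOPE=SPFILE;

-- After restarting the ASM instance, confirm what was discovered.
SELECT path, header_status FROM v$asm_disk;
```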
Of course, if you genuinely are using Clusterware on its own, then you have somewhat of a management issue! ... time to think about installing SC?
Tim
--- -
How to recover a datafile in Oracle 10g...? No backups and no archive logs
All,
I need to recover datafile 2, which belongs to the undo tablespace and is in RECOVER state, and I need to recover it now.
But the bad thing is we don't have a backup at all, and we don't have archive logs (archive logging is disabled in the database).
In this situation, how can I recover the datafile?
SQL> select a.file#,a.name,a.status from v$datafile a,v$tablespace b where a.ts#=b.ts#;
FILE# NAME STATUS
1 /export/home/oracle/flexcube/product/10.2.0/db_1/oradata/bwfcc73/system01.dbf SYSTEM
*2 /export/home/oracle/logs/bw/undotbs01.dbf RECOVER*
3 /export/home/oracle/flexcube/product/10.2.0/db_1/oradata/bwfcc73/sysaux01.dbf ONLINE
4 /export/home/oracle/datafiles/bw/bwfcc73.dbf ONLINE
5 /export/home/oracle/datafiles/bw/bwfcc73_01.dbf ONLINE
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 4940
Current log sequence 4942
Hi,
First of all, you should open a ticket with Oracle Support and explore the options.
You can use this note to fix it:
RECOVERING FROM A LOST DATAFILE IN A UNDO TABLESPACE [ID 1013221.6]
If you are unable to drop the undo tablespace because an undo segment needs recovery,
you can upload the following trace file when opening the ticket:
SQL>Alter session set tracefile_identifier='corrupt';
SQL>Alter system dump undo header "<name of undo segment in RECOVER status>";
Go to udump
ls -lrt *corrupt*
Upload this trace file
Also upload the alert log file.
Regards,
Levi Pereira