Data Guard directory structure query
My OS is Windows 2003.
My Oracle version is 10gR1.
I have a question regarding the directory structure. What I did is keep exactly the same directory structure on both the primary and the standby. Please advise me on this.
Here is what I did:
Primary SID: PROD
Directory: D:\oracle\product\10.2.0\PROD\abc.dbf (all datafiles, redo logs, and control files are in this directory)
Archive location on primary: D:\PROD\ARCHIVE\.arc (all archived logs)
Now, on the standby I created a different SID, PROD2.
But I kept exactly the same directory structure on the standby too:
Datafiles in D:\oracle\product\10.2.0\PROD\abc.dbf (all datafiles)
Archive location on standby: D:\PROD\ARCHIVE\.arc (all archived logs).
As I kept the same directory structure, I did not set the db_file_name_convert and log_file_name_convert parameters.
Please advise on this before I move forward to check archive shipping, because I am worried that I may have done it wrong and should instead have used:
Standby datafiles = D:\oracle\product\10.2.0\prod2\.dbf
Standby archives = D:\prod2\archive
Please tell me; I really need your reply urgently.
Can this approach of using the same directory structure affect the primary in a bad way, or is it fine?
Please advise on my note above.
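For reference only: had the standby used its own PROD2 paths (the hypothetical layout above), the two convert parameters would be set on the standby along these lines; with identical paths on both servers they are simply not needed.

```sql
-- Hypothetical PROD2 layout only; not needed when paths are identical.
-- First string: path pattern as recorded by the primary; second: standby path.
-- Both parameters are static, hence SCOPE=SPFILE and a restart.
ALTER SYSTEM SET db_file_name_convert =
  'D:\oracle\product\10.2.0\PROD\','D:\oracle\product\10.2.0\PROD2\' SCOPE=SPFILE;
ALTER SYSTEM SET log_file_name_convert =
  'D:\oracle\product\10.2.0\PROD\','D:\oracle\product\10.2.0\PROD2\' SCOPE=SPFILE;
```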
I have used the same directory structure as on the primary; even the SID name in the directory paths is the same.
PRIMARY SID = PROD
STANDBY SID = PROD2
But for the datafiles, redo logs, control files, and archive location I kept the primary's SID in the paths:
D:\oracle\product\10.2.0\PROD\.dbf
D:\oracle\product\10.2.0\PROD\.ctl
D:\oracle\product\10.2.0\PROD\.redo
I just swapped in the standby control file, and I did not create standby redo logs; I read in the documentation that they are needed for switchover, so I thought I would create them later.
I am getting these errors in the standby's trace file:
ORA-19527: physical standby redo log must be renamed
ORA-00312: online log 1 thread 1: 'D:\oracle\product\10.2.0\oradata\prod\reo01.log'
Error 19527 creating/clearing online redo logfile 1
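On the ORA-19527: this can show up even with identical directory structures, because the standby tries to clear/recreate the online redo logs recorded in the primary's controlfile. A commonly used workaround (the same idea appears later in this page as log_file_name_convert='junk','junk') is to set a dummy convert pair on the standby; a sketch:

```sql
-- Dummy convert pair on the standby; the values just need to be non-null.
-- LOG_FILE_NAME_CONVERT is static, so restart the standby (mount) afterwards.
ALTER SYSTEM SET log_file_name_convert = 'dummy','dummy' SCOPE=SPFILE;
```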
Similar Messages
-
Oracle 11g R2 Active Data Guard using ASM problem
I have configured Oracle 11g R2 RAC on 2 nodes using ASM Grid (OS: Unix).
RAC is up and running.
Now I am configuring Active Data Guard.
Under the grid user, the +ASM instance and listener are running.
Under the oracle user, a static listener is running.
All disks are mounted.
I have kept the Oracle RAC and Data Guard directory structure the same.
Now my problem is below:
$ ./rman target sys/HPinvent123nbl@dcpdb AUXILIARY sys/HPinvent123nbl@drpdb
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Jan 16 16:28:32 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: DCPDB (DBID=316773134)
connected to auxiliary database: DRPDB (not mounted)
RMAN> duplicate target database for standby from active database;
Starting Duplicate Db at 16-JAN-13
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=5644 device type=DISK
contents of Memory Script:
backup as copy reuse
targetfile '/u02/app/oracle/product/11.2.0/dbhome_1/dbs/orapwdcpdb1' auxiliary format
'/u02/app/oracle/product/11.2.0/dbhome_1/dbs/orapwdrpdb' ;
executing Memory Script
Starting backup at 16-JAN-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1897 instance=dcpdb1 device type=DISK
Finished backup at 16-JAN-13
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 01/16/2013 16:28:48
RMAN-06136: ORACLE error from auxiliary database: ORA-00200: control file could not be created
ORA-00202: control file: '+data'
ORA-17502: ksfdcre:4 Failed to create file +data
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15055: unable to connect to ASM instance
ORA-01031: insufficient privileges
RMAN>
Please help.
Thanks
Solaiman
root@drpdb1 []# id oracle
uid=108(oracle) gid=700(oinstall) groups=701(dba)
root@drpdb1 []# id grid
uid=109(grid) gid=700(oinstall) groups=701(dba),702(asmdba)
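Note that in the id output above the oracle user is not a member of the asmdba group (only dba), which is one known cause of the ORA-15055/ORA-01031 pair when the database tries to connect to the +ASM instance. A possible fix, run as root on the standby node (group name taken from the grid user's id output above; verify against your install):

```shell
# Add oracle to the asmdba group so it can authenticate to the ASM instance
usermod -a -G asmdba oracle
# Re-check the groups, then restart the database instance to pick them up
id oracle
```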
Edited by: 876149 on Jan 16, 2013 3:19 AM -
Data guard: OMF and directory structure
hi everybody,
I guess this problem is not too complicated, but maybe I'm missing something.
assumption:
- 10.2.0.4
- data guard with physical standby (primary: node_a, standby: node_b)
- primary db_unique_name=primary, standby db_unique_name=standby
- using OMF, primary db_create_file_dest=<myDir>/oradata
- standby_file_management is set to AUTO
- want to use same directory structure for data files on both nodes (<myDir>/oradata/PRIMARY/)
Data Guard is working as expected so far; data files on both nodes are created in <myDir>/oradata/PRIMARY/datafile/.
Next assumption:
- failover is initiated
- physical standby is recreated on the former primary node (node_a)
From now on, data files are created in <myDir>/oradata/STANDBY/datafile/, and the old data files remain in <myDir>/oradata/PRIMARY/datafile/.
Is there a way to avoid a second directory (and still use the benefits of OMF)? At least at the current standby node it's possible to avoid this by setting db_file_name_convert, but what about the new primary?
Thanks for your input,
peter
Edited by: priffert on Sep 14, 2009 3:07 AM
I have a similar setup, with the exception that I'm using ASM for datafiles. The issue I'm having with OMF is that if I create a datafile within a disk group that is not in the location specified by db_create_file_dest, then on the standby it is created in the db_create_file_dest. Apparently this will not give me the ability to maintain the exact configuration on both primary and standby without requiring modification after role changes.
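For reference, the standby-side mitigation mentioned in the post would look roughly like this (a sketch; the /oradata prefix stands in for the elided <myDir>, and the direction of the mapping depends on which directory you want to keep):

```sql
-- On the standby only (a sketch): map the peer's OMF directory into the
-- local one so file creations shipped in redo land in a single tree.
-- DB_FILE_NAME_CONVERT is not dynamic, hence SCOPE=SPFILE and a restart.
ALTER SYSTEM SET db_file_name_convert =
  '/oradata/STANDBY/datafile/','/oradata/PRIMARY/datafile/' SCOPE=SPFILE;
```

As the post notes, this only helps on the side applying redo; files the new primary itself creates still follow its own db_unique_name directory under db_create_file_dest.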
-
Directory structure for the new Data Services Project
1) I do as prescribed in the manual "Building and deploying
Flex 2 Applications", page 325
"To create a web application make a copy of the /flex
directory and its contents. Rename the copy and store it in the
same location under /servers/default directory."
("flex" is an empty Flex Data Services application that
serves as a template for creating your custom application)
2) I create a corresponding project from Flex Builder 2 :
Project type: Flex Data Services
Root folder: C:\fds2\jrun4\servers\default\MyDS
Root URL:
http://localhost:8700/default/MyDS
Project name: MyDS
Project contents: C:\fds2\jrun4\servers\default\MyDS
3) I build the project
Immediately after "build project" the directory structure at
C:\fds2\jrun4\servers\default\MyDS becomes the following:
.settings
bin
    META-INF
    WEB-INF
        classes
        flex
        jsp
        lib
        sessions
html-template
META-INF
WEB-INF
    classes
    flex
    jsp
    lib
    sessions
Notice that the bin directory now contains another pair of META-INF and WEB-INF, in addition to those already existing in the template project "flex".
Can anybody comment on this directory structure?
Which META-INF and WEB-INF are supposed to be used for
configuration?
What is the purpose of having two pairs of META-INF and WEB-INF in the same web app?
Hello -
first, those folders are necessary in deployment - You need
only the contents of the bin folder for deployment, not the
sources. Since you're compiling the application locally in FB2 it
places all of the supporting and necessary files into one location
namely the "bin" folder. You'd deploy the "bin" folder's contents
to the FDS server, perhaps another FDS server that is not your
"development" server -- like a production server. The data and
configuration information that your app needs for FDS services are
stored in the WEB-INF and META-INF folders so these need to travel
with the final product. On the production server you'd just copy the
"bin" folder and its contents to the /servers/default folder -
where you could then rename your bin folder to "MyDS"
HTH, Bill -
Location of log directory with respect to data guard
I am working with Oracle 10g on the Linux platform.
I was doing a switchover from primary to standby, which failed due to incorrect settings of the log_file_name_convert parameter. To debug this I was going through the Oracle documentation for Data Guard: http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/troubleshooting.htm
There I came across this paragraph:
When the switchover to change the role from primary to physical standby was
initiated, a trace file was written in the log directory. This trace file contains the
SQL statements required to re-create the original primary control file. Locate the
trace file and extract the SQL statements into a temporary file. Execute the
temporary file from SQL*Plus. This will revert the new standby database back to
the primary role.
I was not able to understand what the trace file and log directory mean. Where is the log directory located? Can anyone please explain?
It is talking about a trace file that will be placed under your background dump destination, as you would get using:
alter database backup controlfile to trace;
Check your background dump destination, where your alert log is. -
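To make the reply above concrete, the sequence would look something like this (the generated trace file name varies by version and process id; look for the most recent *.trc file):

```sql
-- The "log directory" the documentation means is the background dump dest:
SHOW PARAMETER background_dump_dest
-- This writes a CREATE CONTROLFILE script into a trace file there:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```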
Changing directory structure & song file locations: iTunes query
I have an external hard-drive which I have loaded with thousands of songs over the years but have never really used it. Now, I want to organise my iTunes songs but my limited knowledge has brought me to a stop... Please help!
On the external HDD, songs are located all over the place - the directory structure is bad - and I want to simplify it under a new structure. Some songs are also on my iMac's HDD which I also want to centralise on the external HDD. What is the best way to do this?
Also, I don't have any playlists organised so I don't mind if I lose the library information and start again UNLESS the library information also includes SONG information (artist, title, etc.). Could it be a good idea, then, to simply start again? Specifically:
1. Can I just use Finder to move files around and change the directory structure on the external HDD?
2. Can I start from scratch and copy all the music files onto a new external HDD with the new, simple directory structure I want and then re-point iTunes using the "iTunes media folder location" tab? Will I lose all the song file information?
3. What do I do with the iTunes library? Is it best to "reset" the library somehow or should I do all of this moving using iTunes and therefore avoiding dealing with the library file?
Thanks!!!
Driggs wrote:
1. Can I just use Finder to move files around and change the directory structure on the external HDD?
you could use finder, but it would be far better if you'd let iTunes handle the moving and organizing.
2. Can I start from scratch and copy all the music files onto a new external HDD with the new, simple directory structure I want and then re-point iTunes using the "iTunes media folder location" tab? Will I lose all the song file information?
if you want to start from scratch, i would leave the files where they currently are. again, let iTunes handle it (see below).
3. What do I do with the iTunes library? Is it best to "reset" the library somehow or should I do all of this moving using iTunes and therefore avoiding dealing with the library file?
i would start over. launch iTunes with the option key pressed. click on +create library+. in iTunes, go preferences > advanced and point iTunes to the desired location (i.e. your external drive). in the same tab, disable +copy files to iTunes media folder when adding to library+ temporarily.
now start adding content (file > add to library). when done, go back to preferences > advanced and re-enable +copy files to iTunes media folder when adding to library+. next, go file > library > organize library and select both options like so
hit ok. iTunes will now move files as necessary and organize them. when finished, your iTunes music folder on the external drive will look like this
none of your music files will be renamed. artwork you added manually will be retained, however, artwork that came with purchases from the iTunes store may be lost. that can easily be added later on though.
post back if you need further advice.
good luck ! -
How to retrieve a dynamic directory structure from a database and show it
I want to display a directory structure on a JSP. My problem is that I want to read it from a database. How can I read a tree structure? I found a function to read a directory structure, but if I use this function I am not able to find who the parent is. This is the function I found; can anyone help me?
import java.io.File;
import java.util.Vector;

public void iterate(File someDirectory) {
    Vector<File> allFiles = new Vector<File>();
    // Seed the list with the top-level contents of the directory
    for (File f : someDirectory.listFiles()) {
        allFiles.add(f);
    }
    // Breadth-first walk: any directory found along the way is expanded
    // in place, so the list grows while we index through it
    for (int i = 0; i < allFiles.size(); i++) {
        File file = allFiles.get(i);
        if (file.isDirectory()) {
            for (File child : file.listFiles()) {
                allFiles.add(child);
            }
        }
        // Process 'file' or do whatever
    }
}
Thanks for the reply. I am not actually dealing with files; I am creating a web application with an "organize" feature.
Users create folders and put their documents in them; like Outlook, a user can create a folder inside a folder.
I am storing them in a database table. My problem is how to retrieve them so I can show them as a tree structure.
I am using this code to show it on the JSP:
<table>
  <tr>
    <td>
      <ul class="flipMenu">
        <li><span class="menu">Folder</span>
          <ul>
            <c:forEach var="folderList1" items="${folderList}">
              <li><a class="submenu25" id='<c:out value="${folderList1.id}"/>' href="folderAgreement.htm?folderid=<c:out value="${folderList1.id}"/>" onmouseup="a('<c:out value="${folderList1.id}"/>');"><span><c:out value="${folderList1.name}"/> T</span></a>
                <ul>
                  <li><a href="#" onclick="window.open('createSubFolder.htm?folderid=<c:out value="${folderList1.id}"/>&agreetitle=<c:out value="${agreements.title}"/>', 'invite_wnd', 'height=700, width=800, status=no, menubar=no, toolbar=no, location=400'); return false;">Add Folder</a>
                  </li>
                  <li><a href="folderDetail.htm?folderid=<c:out value="${folderList1.id}"/>&flag=folder"><span class="submenu2">Detail</span></a>
                  </li>
                  <li><a id="<c:out value="${folderList1.id}"/>" href="folderDelete.htm?folderid=<c:out value="${folderList1.id}"/>&flag=folder" onclick="conformDelete('<c:out value="${folderList1.id}"/>')"><span class="submenu2">Delete Folder</span></a>
                  </li>
                </ul>
              </li>
            </c:forEach>
          </ul>
        </li>
      </ul>
    </td>
  </tr>
</table>
Creating a Standby Database with the Same Directory Structure
Hello gurus,
I am self-learning the Oracle Data Guard feature, so I am testing it on Oracle 10g R2.
In the Oracle documentation there is a section, F.4 "Creating a Standby Database with the Same Directory Structure", that explains how to create a standby database with RMAN, but there is something that I don't understand:
On the standby server, I created a database with the SID equal to the production database's, with the objective of having the same directory structure, but when I try to STARTUP NOMOUNT the standby database with the pfile this expected error appears:
ORA-16187: LOG_ARCHIVE_CONFIG contains duplicate, conflicting or invalid attributes
So my question is: is it possible to have the same directory structure on both the production and standby servers?
Thanks in advance.
Uwe and mseberg: thanks for your quick answers.
I have a doubt: how can you have the same directory structure if you have different SIDs? For example, if you follow the OFA suggestions you would have to have:
On the production server: /u01/app/oracle/oradata/PRIMARY/system.dbf
On the standby server: /u01/app/oracle/oradata/STANDBY/system.dbf
Or did you create the directory structure manually on the standby server? For example, replacing the string STANDBY with PRIMARY before creating the database using DBCA.
Do you understand my doubt? Excuse my English.
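Two details that may help here (hedged, since the full pfiles aren't shown): ORA-16187 points at LOG_ARCHIVE_CONFIG rather than at the directories. The two databases share DB_NAME, are distinguished by DB_UNIQUE_NAME, and both unique names must appear in DG_CONFIG on both sides; directory names never have to contain the SID. Also, duplicating with RMAN to identical paths on a different host needs NOFILENAMECHECK. Illustrative names:

```sql
-- Same setting on both sides; 'prod' and 'prodsby' are illustrative
-- db_unique_name values, not taken from this thread.
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(prod,prodsby)' SCOPE=BOTH;
-- RMAN side, when the standby host reuses the primary's file paths:
--   DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
```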
Thanks. -
Clarification on Data Guard (Physical Standby db)
Hi guys,
I have been trying to set up Data Guard with a physical standby database for the past few weeks, and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
However, I need clarification on the setup and whether or not it is working as expected.
My environment is Windows 32bit (Windows 2003)
Oracle 10.2.0.2 (Client/Server)
2 Physical machines
Here is what I have done.
Machine 1
1. Create a primary database using standard DBCA; hence the Oracle service (oradgp) and password file are also created, along with the listener service.
2. Modify the pfile to include the following:
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgp'
*.fal_server='oradgs'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
*.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgp
The locations on the hard disk are all available and archived redo logs are created (E:\ArchLogs).
3. I then added the necessary (4) standby redo logs on the primary.
4. To replicate the db on machine 2 (the standby db), I did an RMAN backup:
RMAN> run {
allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
backup database plus archivelog delete input;
}
5. I then copied the stby_*.bak files created on machine1 over to machine2, into the same directory (M:\DGBackup), since I maintained exactly the same directory structure between the 2 machines.
6. Then created a standby controlfile. (At this time the db was in open/write mode).
7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
Machine2
8. I created an Oracle service with the same name as the primary (oradgp).
9. Created a listener also.
9. Set the Oracle Home & SID to the same name as the primary (oradgp). <<<-- I am not sure about the SID one.
10. I then copied over the pfile from the primary to the standby and created an spfile from it.
It looks like this:-
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgs'
*.fal_server='oradgp'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
*.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgs
log_file_name_convert='junk','junk'
11. Use RMAN to restore the db:
RMAN> startup mount;
RMAN> restore database;
Then RMAN created the datafiles.
12. I then added the same number (4) of standby redo logs to machine2.
13. Also added a tempfile. Though the temp tablespace was created by the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I created the tempfile manually.
14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
It seems to have started the redo apply, as I've checked the alert log and noticed that the sequence#s were all "YES" for applied.
****However I noticed that in the alert log the standby was complaining about the online REDO logs not being present****
So I copied over the REDO logs from the primary machine and placed them in the same directory structure on the standby.
########Q1. I understand that the standby database does not need online REDO logs, but why is it reporting this in the alert log then?########
I wanted to enable real-time apply, so I cancelled the recovery with:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
and issued:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
Also performed a log switch on the primary and it got transported to the standby and was applied (YES).
Also ensured that there are no gaps via some queries where no rows were returned.
15. I now wanted to perform a switchover, hence issued:-
Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
All the archivers stopped as expected.
16. Now on machine2:
Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
17. On machine1:
Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
Primary_Now_Standby_SQL>STARTUP MOUNT;
Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
17. On machine2:
Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
Checked by switching the logfile on the new primary and ensured that the standby received this logfile and applied it (YES).
However, here are my questions for clarifications:-
Q1. There is a question about ONLINE REDO LOGS within "#" characters.
Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
MRP0 APPLYING_LOG 1 47 452 1024000
but :
SQL> select max(sequence#) from v$archived_log;
46
Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
42 NO
43 YES
44 YES
45 YES
46 YES
What could be the possible reasons why sequence# 42 didn't get applied but the others did?
After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs, but is there another method without using standby logs?
Q5. The log switch isn't happening automatically on the primary database, where I could see the whole process happening on its own, such as generation of a new logfile, it being transported to the standby and then being applied on the standby.
Could this be due to inactivity on the primary database, as I am not doing anything on it?
Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
Thank you very much in advance.
Regards,
Bharath
Edited by: Bharath3 on Jan 22, 2010 2:13 AM
Parameters:
Missing on the Primary:
DB_UNIQUE_NAME=oradgp
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
Missing on the Standby:
DB_UNIQUE_NAME=oradgs
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
Your questions (Q1 answered above):
You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Up to you. Not a requirement.
You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' will tell you that every sequence before that number has to have been applied.
You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
Yes, If you do not have standby redo log files on the standby then we write directly to an archive log. Which means potential large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRL)
You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
Could this be due to inactivity on the primary database as I am not doing anything on it?
Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you issue ALTER SYSTEM SWITCH LOGFILE (or use the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
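The two behaviours described above can be observed and tuned directly; for example (a sketch):

```sql
-- On the standby: the FAL column marks sequences fetched via gap
-- resolution, e.g. sequence 42 in the example above.
SELECT sequence#, applied, fal FROM v$archived_log ORDER BY sequence#;
-- On the primary: force a log switch at least every 1800 seconds even
-- when the database is idle.
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;
```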
You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary. -
Error: ORA-16525: the Data Guard broker is not yet available
Hi ,
After upgrading from 11.2.0.1 to 11.2.0.3 on AIX (GI/RDBMS) on the standby, but not yet on the primary, I had set dg_broker_start=false and disabled the configuration before I started the upgrade.
Once the GI for Oracle Restart was upgraded, I upgraded the RDBMS binaries and brought up the standby in mount mode. While trying to enable the configuration, it throws the below error. I had already started the broker process.
SQL> show parameter dg_
NAME TYPE VALUE
dg_broker_config_file1 string /u01/app/omvmxp1/product/11.2.
0/dbhome_2/dbs/dr1mvmxs2.dat
dg_broker_config_file2 string /u01/app/omvmxp1/product/11.2.
0/dbhome_2/dbs/dr2mvmxs2.dat
dg_broker_start boolean TRUE
DGMGRL> show configuration;
Configuration - Matrxrep_brkr
Protection Mode: MaxAvailability
Databases:
mvmxp2 - Primary database
mvmxs2 - Physical standby database
Error: ORA-16525: the Data Guard broker is not yet available
Fast-Start Failover: DISABLED
Configuration Status:
ERROR
from drcmvmxs2.log:
Starting Data Guard Broker bootstrap
Broker Configuration File Locations:
dg_broker_config_file1 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
dg_broker_config_file2 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
12/19/2012 16:05:33
Data Guard Broker shutting down
DMON Process Shutdown
12/19/2012 16:10:20
Starting Data Guard Broker bootstrap
Broker Configuration File Locations:
dg_broker_config_file1 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
dg_broker_config_file2 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
Regards
Edited by: Monto on Dec 19, 2012 1:23 PM
Hi,
I removed the configuration and the broker files from the RAC primary (mvmxp2) and the single-instance standby (mvmxs2) and re-created them. I tried it many times but keep getting error ORA-16532. I need to have this standby back up before I start upgrading the primary.
SQL> alter system set dg_broker_start=true scope=both;
System altered.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
palmer60:/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs>dgmgrl
DGMGRL for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@mvmxp2
Password:
Connected.
DGMGRL> CREATE CONFIGURATION 'Matrxrep'
AS
PRIMARY DATABASE IS 'mvmxp2'
CONNECT IDENTIFIER IS 'mvmxp2';
Configuration "Matrxrep" created with primary database "mvmxp2"
DGMGRL> ADD DATABASE 'mvmxs2'
AS
CONNECT IDENTIFIER IS 'mvmxs2'
;
Database "mvmxs2" added
DGMGRL> SHOW CONFIGURATION;
Configuration - Matrxrep
Protection Mode: MaxPerformance
Databases:
mvmxp2 - Primary database
mvmxs2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
DISABLED
DGMGRL> ENABLE CONFIGURATION;
Enabled.
DGMGRL> SHOW DATABASE MVMXS2;
Database - mvmxs2
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: (unknown)
Real Time Query: OFF
Instance(s):
mvmxs2
Database Status:
DGM-17016: failed to retrieve status for database "mvmxs2"
ORA-16532: Data Guard broker configuration does not exist
ORA-16625: cannot reach database "mvmxs2"
DGMGRL>
I tailed drcmvmxs2.log while stopping and starting the broker:
palmer60:/u01/app/omvmxp1/diag/rdbms/mvmxs2/mvmxs2/trace>tail -f drcmvmxs2.log
12/19/2012 20:32:20
drcx: cannot open configuration file "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
12/19/2012 20:32:55
drcx: cannot open configuration file "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
12/19/2012 20:59:10
Data Guard Broker shutting down
DMON Process Shutdown
12/19/2012 20:59:35
Starting Data Guard Broker bootstrap
Broker Configuration File Locations:
dg_broker_config_file1 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
dg_broker_config_file2 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
Not sure how to fix this one.
Regards -
11g Active Data Guard Software?
I got a copy of 11g (for Sun SPARC) from eDelivery, and got the 11.1.0.7 patch set from Metalink.
I upgraded 10.2.0.4 to 11.1.0.7, but I could not get ADG (Active Data Guard) to work.
Here is what happened when I queried the database status.
--------------------------- begin ---------------------------------
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database open;
Database altered.
SQL> select status from v$instance
2 ;
STATUS
OPEN
SQL> select OPEN_MODE from v$database;
OPEN_MODE
READ ONLY
SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.
SQL> select OPEN_MODE from v$database;
OPEN_MODE
MOUNTED
SQL> select status from v$instance;
STATUS
MOUNTED
---------------------------- end ------------------------------------
Turning on the Redo Apply put the database mode back to 'MOUNTED'. For ADG, it should remain 'OPEN'. I suspect this copy of 11g does not have the ADG feature.
Is there a way I could check whether the ADG is installed? Is there a directory under $ORACLE_HOME or a script under $ORACLE_HOME/rdbms/admin that indicates the ADG is indeed installed?
Where did you guys get your software?
Thanks very much in advance.

You should have the right software and the right options installed in Enterprise Edition (otherwise Data Guard would not work at all).
Did you check both alert log for possible error messages ?
I think you should open the database in read-only mode instead of read-write mode:
alter database open read only;
See the following OTN example: http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-dataguard.html
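For comparison, the usual Active Data Guard open sequence is (a sketch, assuming 11g and that managed recovery is currently running):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
-- with ADG licensed and working, OPEN_MODE in v$database stays READ ONLY while redo is applied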
Edited by: P. Forstmann on Jul 9, 2009 9:29 PM
Edited by: P. Forstmann on Jul 9, 2009 9:34 PM -
Data Guard Administration Question.... (10gR2)
After considerable trial and error, I have a running logical standby between 2 10gR2 databases.
1) During the install of the primary database I didn't comply fully with the OFA standard (I was slightly off on the placement of my database files). During the Data Guard configuration, the "convert to OFA" option was selected (per a Metalink article I read about a problem with keeping the filenames the same as the primary). Of course, now I have an issue creating a tablespace on the primary while keeping the non-OFA directory structure: when Data Guard attempts to do the same on the standby, I get an error that it cannot create the datafile. That makes sense, but what should I do in the future? Create the non-OFA directory structure on the standby (assuming it would then create the file)? Isn't there a filename conversion parameter that handles this as well?
2) I got myself into a pinch this afternoon, partly due to #1. I am importing a file from another instance into the primary so I can begin testing reports on the standby. Prior to the import I created a tablespace (which is what led to problem #1), then created the owner of the schema to be imported, then performed the import. Now the apply process is erroring and going offline every few seconds as it works its way through the "cannot create table" errors the import is hitting on the standby. How do I handle a large batch of transactions like this? Ultimately I would like to get back to square one: no user, no imported data on the primary, and the apply process online.
Thanks:
Chris

So what I finally did was take Data Guard offline, create the tablespace on the standby, then the user, and then turn apply back on. The import proceeded fairly smoothly. Problem resolved.
However, I still need some insight into exactly how the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters work. I have LOG_FILE_NAME_CONVERT set up (correctly, I think) but I get a warning message in Data Guard that says the configuration is inconsistent with the actual setup.
Here's the way things are setup:
I have 3 redo logs:
primary (non-ofa):
/opt/oracle10/product/oradata/ICCORE10G2/redo01.log
... redo02.log
... redo03.log
secondary (ofa):
/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/redo01.log
... redo02.log
... redo03.log
LOG_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
Is the above parameter set correctly?
DB_FILE_NAME_CONVERT is unset as of now, but the datafiles follow the same directory structure as above. I assume that parameter needs to be set just like LOG_FILE_NAME_CONVERT.
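If both parameters were set, a sketch with the paths above would look like this (the DB_FILE_NAME_CONVERT pair is my assumption, based on the datafiles following the same primary/standby layout as the redo logs):

DB_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
LOG_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')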
Thanks -
Dear Gurus,
I need to implement Data Guard for SAP. The client requires that the standby SID be the same as the primary's, because SAP uses it.
So is it possible to configure Data Guard with the same SID on both primary and standby?
Also, since I keep the SID the same, the directory structure would be the same in that case, like:
on primary -- E:\oracle\db\ppm
on standby -- E:\oracle\db\ppm
so there would be no need to use the parameters db_file_name_convert and log_file_name_convert.
Would that be a fine configuration of Data Guard?
OS -- Windows 2008
Oracle 11g

user11221081 wrote:
Dear Gurus,
I need to implement Data Guard for SAP. The client requires that the standby SID be the same as the primary's, because SAP uses it.
So is it possible to configure Data Guard with the same SID on both primary and standby?
Also, since I keep the SID the same, the directory structure would be the same in that case, like:
on primary -- E:\oracle\db\ppm
on standby -- E:\oracle\db\ppm
so there would be no need to use the parameters db_file_name_convert and log_file_name_convert.
Would that be a fine configuration of Data Guard?
OS -- Windows 2008
Oracle 11g
I already answered this in your earlier thread; see my post here: Re: Data guard in sap -
All,
I have a call currently open with Oracle regarding the setting of the parameters db_file_name_convert and log_file_name_convert in a data guard environment. We use ASM / OMF for storage and file naming and my question is basically do these parameters have to be set. The documentation says they do where the file structure is different between PRIMARY and STANDBY.
I have successfully tested failover and switchover without these parameters. I have also added a new tablespace on the PRIMARY and watched it create a new OMF datafile on standby when the logs are switched.
I just can't see a reason for setting them when using ASM / OMF.
I'm hoping someone can enlighten me here because I'm getting nowhere with support. The following is our Data Guard setup:
PRIMARY
DB_NAME=IBSLIVE
DB_UNIQUE_NAME=IBSLIVE
ASM Disk Groups:
+PRODDATA (Data Files, Control Files, Redo Logs)
+PRODFLASH (Archive Logs, Flashback Logs, RMAN backups)
+PRODLOGS (Multiplexed Control & Redo Logs)
STANDBY
DB_NAME=IBSLIVE
DB_UNIQUE_NAME=IBSDR
ASM Disk Groups:
+DRDATA (Data Files, Control Files, Redo Logs)
+DRFLASH (Archive Logs, Flashback Logs, RMAN backups)
+DRREDO (Multiplexed Control & Redo Logs)
Many Thanks,
Ian.

Ian,
I'm having similar thoughts.
I have created a new instance with files in ASM under +datadisk/obosact (this is the same name as the primary).
I then modified db_unique_name from obosact to obosactdr, as is required for the standby to work.
When I recover (duplicate target database for standby;) I find that the files are in +datadisk/obosactdr, not in the +datadisk/obosact area.
I found this reference http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimaryRACPhysicalStandby.pdf
4. Connect to the ASM instance on one standby host, and create a directory within the DATA disk group that has the same name as the DB_UNIQUE_NAME of the standby database. For example: SQL> ALTER DISKGROUP data ADD DIRECTORY '+DATA/BOSTON';
This step seems to indicate that the location of the files is determined by db_unique_name, not by the db_file_name_convert parameter.
Did you ever resolve the issue? -
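A sketch of the checks that suggest this behaviour (disk group and directory names are taken from the example above; assuming OMF placement via db_create_file_dest):

SQL> show parameter db_unique_name       -- obosactdr on the standby
SQL> show parameter db_create_file_dest  -- e.g. +DATADISK
-- OMF then creates files under +DATADISK/OBOSACTDR/..., i.e. under the
-- DB_UNIQUE_NAME, regardless of db_file_name_convert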
Hi,
With the help of many good posts on this forum I got 11.5.10.2 Apps running on Suse Linux 9.0. I installed Vision Demo database and the installation location is
/opt/oracle
under this directory following directories have been created by rapidinstall
visappl
viscomn
visdata
visdb
visora
Could someone please educate me about this directory structure.
Also, where should my environment variables like ORACLE_BASE and ORACLE_HOME point, and in which environment files should I put them so that they are applied automatically after a reboot?
Thanks in advance.
Ravi Singh
Message was edited by:
Ravi-2006

Hi Ravi,
Let me answer your query.
From what I understand, you want to know the file structure. As you have installed the Vision DB:
visappl - APPL_TOP
viscomn - COMMON_TOP
visdata - database data files
visdb - database ORACLE_HOME (9i)
visora - ora_top (the 8.0.6 and iAS homes for the tools tier)
There are 2 Oracle homes:
1) the 9i ORACLE_HOME, which is visdb
and
2) the 8.0.6 Oracle home, under visora
The 8.0.6 Oracle home is used for Apps utilities such as adpatch, adclone, etc.
When you source the different environment files you will see the difference.
FYI: the APPL_TOP environment file is $APPL_TOP/APPS<SID>_<hostname>.env
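A sketch of what to add to the OS users' profiles so the environment is set after every reboot (the paths and file names are assumptions based on the Vision layout above; adjust them to your install):

# in the applmgr user's ~/.profile (hypothetical path)
. /opt/oracle/visappl/APPSVIS_<hostname>.env
# in the oracle user's ~/.profile (hypothetical path)
. /opt/oracle/visdb/9.2.0/VIS_<hostname>.env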
Let me know if you still have doubts.