Large import on dataguard (logical standby) 11.2
I would like to perform a large import on the production environment without having it applied to the logical standby environment (it is too large for the standby to keep up with), and afterwards run the same import on the logical standby itself.
What is the most common way to perform this large import on the logical standby environment after it has been performed on production?
Something like:
alter session disable guard;
do the import
alter session enable guard;
alter database start logical standby apply skip failed transaction;
Thanks in advance.
If the table you are importing will have changes made to it AFTER the import, and you want those changes replicated to the logical standby, then you have to actually re-sync the table FROM the primary.
Step 1. Skip the table DML on logical
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'schema', object_name =>'table_name');
Step 2. Import the table into primary
Step 3. Stop Logical Standby apply
alter database stop logical standby apply;
Step 4. Get the RESTART_SCN on the logical standby to use as the flashback SCN with expdp
SELECT RESTART_SCN FROM V$LOGSTDBY_PROGRESS;
Step 5. On Primary export the table using the restart_scn from logical
expdp DIRECTORY=DP_DIR DUMPFILE=sync_table.dmp LOGFILE=sync_table.log tables=schema.table_name flashback_scn=6146375299983
Step 6. On Logical import the table
impdp DIRECTORY=DP_DIR dumpfile=sync_table.dmp TABLE_EXISTS_ACTION=REPLACE
Step 7. Unskip the table and start the apply on logical standby
EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', schema_name => 'schema', object_name =>'table_name');
alter database start logical standby apply immediate;
Now the tables should both be synced up and any new changes made to the table on primary will replicate correctly to logical.
Note that Oracle must maintain a read-consistent copy of the table as of the SCN you specified; depending on how long the export takes, you may run into rollback segment issues. I'd plan on doing this during the least busy time on your DB and increase undo retention to avoid ORA-01555.
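Putting the seven steps together, a minimal sketch of the resync (schema SCOTT, table BIGTAB, and directory DP_DIR are placeholder names, not from the original post):

```sql
-- On the logical standby: skip DML for the table and stop SQL Apply
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'SCOTT', object_name => 'BIGTAB');
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Note the restart SCN to use as the export's consistency point
SELECT RESTART_SCN FROM V$LOGSTDBY_PROGRESS;

-- On the primary: run the large import, then export the table as of that SCN
-- $ expdp system DIRECTORY=DP_DIR DUMPFILE=sync_table.dmp \
--       TABLES=SCOTT.BIGTAB FLASHBACK_SCN=<restart_scn>

-- Back on the logical standby: load the consistent copy, unskip, resume apply
-- $ impdp system DIRECTORY=DP_DIR DUMPFILE=sync_table.dmp TABLE_EXISTS_ACTION=REPLACE
EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', schema_name => 'SCOTT', object_name => 'BIGTAB');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```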
Similar Messages
-
Large import on dataguard (Physical Standby 11.2 )
Hi experts,
I have to import a large database (50 GB) with impdp, which should then be configured with a standby database.
My Question:
Should I make the import before or after the configuration of the standby database?
Thanks & regards
hqt200475
Edited by: hqt200475 on Sep 7, 2011 8:55 AM
Do the IMPORT first.
If you don't, you risk messing up the FRA on both the primary and the standby side. Redo will stop applying the second the FRA hits its limit on the standby, or worse yet, a day or two later when you think you are in the clear. Even if for some reason you don't use an FRA, shipping all that redo isn't a good idea on a new Data Guard system: you might have another issue and you would just compound it by doing an import of this size. On most systems you would be adding an additional 500-plus redo logs to the mix.
Also you can duplicate the Standby database using RMAN after the import very easily.
I would probably wait at least a few days after importing into the future primary before building the standby.
I have a solid duplicate document and will share if you decided to go that route.
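As a rough sketch of the RMAN route mentioned above (11g active duplication; the connect strings and NOFILENAMECHECK are assumptions for a two-host setup, not from the original post):

```sql
-- RMAN active duplication of the primary to a physical standby (11g)
-- $ rman TARGET sys@primary AUXILIARY sys@standby
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  NOFILENAMECHECK;
```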
Best Regards
mseberg -
Oracle DataGuard Logical Standby
Hi, I'd like to know the prerequisites for setting up logical standby replication.
I have two environments (production and standby) with different number of processors and storage.
I'd like to know if it's possible to create a Data Guard solution with these two environments, and what prerequisites I need to be aware of to install the solution.
Thanks in advance.
Hello;
Yes; the storage (as long as you have enough space) and CPU don't matter.
You can use hardware that's quite a bit different and it will work fine.
You can also create a logical standby from a physical standby.
These notes may help :
NOTE 413484.1 - Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration
Mixed Oracle Version support with Data Guard Redo Transport Services [ID 785347.1]
Logical standbys don't seem to be very popular, and the support base suffers because of this. I would consider a physical standby with Active Data Guard if costs allow.
Active Data Guard makes an excellent "Reader" database plus I believe your switchover and failover options are much better than a Logical Standby.
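The physical-to-logical conversion mentioned above goes roughly along these lines (a sketch of the 10g/11g procedure; DBNAME is a placeholder for your database name):

```sql
-- On the physical standby: stop redo apply first.
-- On the primary: build the LogMiner dictionary into the redo stream
EXECUTE DBMS_LOGSTDBY.BUILD;

-- On the physical standby: convert it into a logical standby
ALTER DATABASE RECOVER TO LOGICAL STANDBY DBNAME;

-- Open the new logical standby and start SQL Apply
ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```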
Oracle DataGuard Logical Standby Setup :
http://orajourn.blogspot.com/2007/02/setting-up-logical-standby-database.html
http://garyzhu.net/notes/DataGuard.html
Best Regards
mseberg
Edited by: mseberg on Apr 25, 2012 10:07 AM -
How to import schema into logical standby database
I created a new schema (A) in a different database and exported the schema to a dmp file using the export utility.
From the exported dmp file I want to import that schema (A) into the logical standby database using the import utility.
I am importing as the SYSTEM user.
I am getting the following error:
IMP-00003: ORACLE error 1031 encountered
ORA-01031: insufficient privileges
Can someone help with what is going wrong?
thank you in advance.
kalyan
Hi kalyan,
Could you please let us know which import command you are using? Also, please check that your SYSTEM user has import/export privileges.
Thanks
Hassan Khan -
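On a logical standby, ORA-01031 during an import is often caused by the database guard, which blocks writes from non-privileged sessions. A hedged sketch of what to check (assuming you can connect as SYS or a user with appropriate privileges):

```sql
-- See whether the guard is blocking changes (ALL blocks every non-SYS session)
SELECT GUARD_STATUS FROM V$DATABASE;

-- Either relax the guard for this session only...
ALTER SESSION DISABLE GUARD;
-- ...run the import, then restore it:
ALTER SESSION ENABLE GUARD;

-- ...or lower it database-wide so only replicated objects are protected
ALTER DATABASE GUARD STANDBY;
```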
Dataguard logical standby issue on lagging
We have a logical standby database, and its SQL Apply has been lagging 14 hours behind the primary database. Is there any way I can stop SQL Apply, recover the database through the archive files to get back in sync with the primary, and then start SQL Apply again on the logical standby database?
Please advise.
Neon,
I won't respond any further. The Dataguard Concepts and Administration Manual does contain a section on Troubleshooting.
It is quite clear you are using technology on which you have insufficient knowledge.
However, you belong to the class of wannabe-DBAs here who, as soon as they run into trouble, start posting doc questions labeled with
'I need the detail steps'
'Urgent'
or (new development) post
'Any help' every half hour, when a volunteer didn't respond to their question.
If you want a quick response, submit an SR at Metalink. At least those analysts are getting paid to answer your doc questions.
Sybrand Bakker
Senior Oracle DBA -
Dataguard - logical standby - need help - not updating data at commit
Hi
Need help ?
We have a logical standby setup where data is not updated at the standby when committed at the primary. It only updates on ALTER SYSTEM SWITCH LOGFILE.
On Primary:
log_archive_dest_2='service=xxStandby LGWR ASYNC valid_for=(online_logfiles,primary_role) db_unique_name=xxStandby'
On Standby:
we have already created standby redo logs (1 + the number of redo logs)
we used ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE NODELAY;
The data only ships when we do ALTER SYSTEM SWITCH LOGFILE, whereas it should ship when data is committed on the primary.
Is there anything we are missing, or that needs attention?
Please help
Thank you so much.
Yes, I have standby redo logs (1 + the number of redo logs on the primary) and they are the same size.
I also changed the log_archive_dest_2 VALID_FOR setting to ALL_LOGFILES, but performance is still very slow; we are now over 9 hours behind the primary.
log_archive_dest_2='service=xxStandby LGWR ASYNC valid_for=(ALL_LOGFILES,primary_role) db_unique_name=xxStandby'
The logical standby performance is really not good at all. We are trying to increase SGA/PGA, hoping this will speed things up.
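One sanity check worth running in a setup like this: real-time apply only works if the standby redo logs are actually receiving redo and SQL Apply was started in IMMEDIATE mode. A sketch of what to query on the standby:

```sql
-- On the logical standby: are the standby redo logs receiving redo?
SELECT GROUP#, THREAD#, SEQUENCE#, STATUS FROM V$STANDBY_LOG;

-- What is SQL Apply currently doing?
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS;

-- Restart apply with real-time apply explicitly enabled
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```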
Paul -
Question: We're just thinking here.
If I have a primary database (A), which has a logical standby database (B), is it possible to have a logical standby database (C) for database (B)?
We have users that are updating tables in the logical standby database (B) but these are different tables than what are being updated from primary database (A).
Thank you
Yes.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/cascade_appx.htm#i636960 -
Logical standby apply won't apply logs
RDBMS Version: Oracle 10.2.0.2
Operating System and Version: Red Hat Enterprise Linux ES release 4
Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.): Oracle Dataguard (Logical Standby)
Product Version:
Hi!!
I have problem logical standby apply won't apply logs.
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
TYPE         HIGH_SCN  STATUS
COORDINATOR  288810    ORA-16116: no work available
READER       288810    ORA-16240: Waiting for logfile (thread# 1, sequence# 68)
BUILDER      288805    ORA-16116: no work available
PREPARER     288804    ORA-16116: no work available
ANALYZER     288805    ORA-16116: no work available
APPLIER      288805    ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
10 rows selected.
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, DICT_BEGIN, DICT_END FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIM NEXT_TIME DIC DIC
66 11-JAN-07 11-JAN-07 YES YES
67 11-JAN-07 11-JAN-07 NO NO
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
NAME               VALUE
coordinator state  IDLE
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN NEWEST_SCN
288803 288809
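Given that the READER is waiting on sequence# 68 (ORA-16240) while APPLIED_SCN lags NEWEST_SCN, one thing to try, assuming the file really exists at that path on the standby, is registering the missing archived log with SQL Apply manually:

```sql
-- On the logical standby: register the log SQL Apply is waiting for
ALTER DATABASE REGISTER LOGICAL LOGFILE '/home/oracle/standy/arch2/1_68_608031954.arc';
```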
INITPRIMARY.ORA
DB_NAME=primary
DB_UNIQUE_NAME=primary
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
service_names=primary
instance_name=primary
UNDO_RETENTION=3600
LOG_ARCHIVE_CONFIG='DG_CONFIG=(primary,standy)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/home/oracle/primary/arch1/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_2=
'SERVICE=standy LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=standy'
LOG_ARCHIVE_DEST_3=
'LOCATION=/home/oracle/primary/arch2/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=standy
FAL_CLIENT=primary
DB_FILE_NAME_CONVERT='standy','primary'
LOG_FILE_NAME_CONVERT=
'/home/oracle/standy/oradata','home/oracle/primary/oradata'
STANDBY_FILE_MANAGEMENT=AUTO
INITSTANDY.ORA
db_name='standy'
DB_UNIQUE_NAME='standy'
REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
SERVICE_NAMES='standy'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(primary,standy)'
DB_FILE_NAME_CONVERT='/home/oracle/primary/oradata','/home/oracle/standy/oradata'
LOG_FILE_NAME_CONVERT=
'/home/oracle/primary/oradata','/home/oracle/standy/oradata'
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_DEST_1=
'LOCATION=/home/oracle/standy/arc/
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=standy'
LOG_ARCHIVE_DEST_2=
'SERVICE=primary LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_3=
'LOCATION=/home/oracle/standy/arch2/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
DB_UNIQUE_NAME=standy'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=primary
FAL_CLIENT=standy
Alert log of the "standy" database since SQL Apply startup:
Thu Jan 11 15:00:54 2007
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
Thu Jan 11 15:01:00 2007
alter database add supplemental log data (primary key, unique index) columns
Thu Jan 11 15:01:00 2007
SUPLOG: Updated supplemental logging attributes at scn = 289537
SUPLOG: minimal = ON, primary key = ON
SUPLOG: unique = ON, foreign key = OFF, all column = OFF
Completed: alter database add supplemental log data (primary key, unique index) columns
LOGSTDBY: Unable to register recovery logfiles, will resend
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Thu Jan 11 15:01:04 2007
ALTER DATABASE START LOGICAL STANDBY APPLY (standy)
with optional part
IMMEDIATE
Attempt to start background Logical Standby process
LSP0 started with pid=21, OS id=12165
Thu Jan 11 15:01:05 2007
LOGSTDBY Parameter: DISABLE_APPLY_DELAY =
LOGSTDBY Parameter: REAL_TIME =
Completed: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
Thu Jan 11 15:01:07 2007
LOGSTDBY status: ORA-16111: log mining and apply setting up
Thu Jan 11 15:01:07 2007
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
LOGMINER: session# = 1, reader process P000 started with pid=22 OS id=12167
LOGMINER: session# = 1, builder process P001 started with pid=23 OS id=12169
LOGMINER: session# = 1, preparer process P002 started with pid=24 OS id=12171
Thu Jan 11 15:01:17 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:01:17 2007
LOGMINER: Turning ON Log Auto Delete
Thu Jan 11 15:01:26 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:01:26 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
Thu Jan 11 15:01:26 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_ATTRCOL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_CCOL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_CDEF$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_COL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_COLTYPE$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_ICOL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_IND$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDCOMPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDSUBPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_LOB$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_LOBFRAG$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_OBJ$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TAB$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABCOMPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABSUBPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TS$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TYPE$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_USER$ have been marked unusable
Thu Jan 11 15:02:05 2007
Indexes of table SYSTEM.LOGMNR_ATTRCOL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_ATTRIBUTE$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_CCOL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_CDEF$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_COL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_COLTYPE$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_DICTIONARY$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_ICOL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_IND$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_INDCOMPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_INDPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_INDSUBPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_LOB$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_LOBFRAG$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_OBJ$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TAB$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TABCOMPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TABPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TABSUBPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TS$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TYPE$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_USER$ have been rebuilt and are now usable
LSP2 started with pid=25, OS id=12180
LOGSTDBY Analyzer process P003 started with pid=26 OS id=12182
LOGSTDBY Apply process P008 started with pid=20 OS id=12192
LOGSTDBY Apply process P007 started with pid=30 OS id=12190
LOGSTDBY Apply process P005 started with pid=28 OS id=12186
LOGSTDBY Apply process P006 started with pid=29 OS id=12188
LOGSTDBY Apply process P004 started with pid=27 OS id=12184
Thu Jan 11 15:02:48 2007
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 12194
RFS[1]: Identified database type as 'logical standby'
Thu Jan 11 15:02:48 2007
RFS LogMiner: Client enabled and ready for notification
Thu Jan 11 15:02:49 2007
RFS LogMiner: RFS id [12194] assigned as thread [1] PING handler
Thu Jan 11 15:02:49 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:02:49 2007
LOGMINER: Turning ON Log Auto Delete
Thu Jan 11 15:02:51 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:02:51 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
Thu Jan 11 15:02:51 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
Please help me once more!
Thanks.
Hello!
thank you for the reply.
The archived log 1_68_608031954.arc, for which the read error occurred, did not exist at the time of the error; see below:
$ ls -lh /home/oracle/standy/arch2/
total 108M
-rw-r----- 1 oracle oinstall 278K Jan 11 15:00 1_59_608031954.arc
-rw-r----- 1 oracle oinstall 76K Jan 11 15:00 1_60_608031954.arc
-rw-r----- 1 oracle oinstall 110K Jan 11 15:00 1_61_608031954.arc
-rw-r----- 1 oracle oinstall 1.0K Jan 11 15:00 1_62_608031954.arc
-rw-r----- 1 oracle oinstall 2.0K Jan 11 15:00 1_63_608031954.arc
-rw-r----- 1 oracle oinstall 96K Jan 11 15:00 1_64_608031954.arc
-rw-r----- 1 oracle oinstall 42K Jan 11 15:00 1_65_608031954.arc
-rw-r----- 1 oracle oinstall 96M Jan 13 06:10 1_68_608031954.arc
-rw-r----- 1 oracle oinstall 12M Jan 13 13:29 1_69_608031954.arc
$ ls -lh /home/oracle/primary/arch1/
total 112M
-rw-r----- 1 oracle oinstall 278K Jan 11 14:21 1_59_608031954.arc
-rw-r----- 1 oracle oinstall 76K Jan 11 14:33 1_60_608031954.arc
-rw-r----- 1 oracle oinstall 110K Jan 11 14:46 1_61_608031954.arc
-rw-r----- 1 oracle oinstall 1.0K Jan 11 14:46 1_62_608031954.arc
-rw-r----- 1 oracle oinstall 2.0K Jan 11 14:46 1_63_608031954.arc
-rw-r----- 1 oracle oinstall 96K Jan 11 14:55 1_64_608031954.arc
-rw-r----- 1 oracle oinstall 42K Jan 11 14:55 1_65_608031954.arc
-rw-r----- 1 oracle oinstall 4.2M Jan 11 14:56 1_66_608031954.arc
-rw-r----- 1 oracle oinstall 5.5K Jan 11 14:56 1_67_608031954.arc
-rw-r----- 1 oracle oinstall 96M Jan 13 06:09 1_68_608031954.arc
-rw-r----- 1 oracle oinstall 12M Jan 13 13:28 1_69_608031954.arc
Alert log
Thu Jan 11 15:01:00 2007
SUPLOG: Updated supplemental logging attributes at scn = 289537
SUPLOG: minimal = ON, primary key = ON
SUPLOG: unique = ON, foreign key = OFF, all column = OFF
Completed: alter database add supplemental log data (primary key, unique index) columns
LOGSTDBY: Unable to register recovery logfiles, will resend
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Would you know how to help me?
Could this be a bug in Oracle 10g?
Thanks. -
Transportable tablespaces with Logical Standby
Does anyone know whether or not transportable tablespaces can be used in a Logical Standby environment? I know they can be used with a Physical Standby, and the documentation covers how to do this, but there's nothing about Logical Standby. This is Oracle 10.2.0.4/SLES 10.
Thanks.
Using transportable tablespaces in a logical standby environment: actually, there is a paragraph in the 10.2 Data Guard Concepts and Administration manual. I will summarize:
After creating the transportable tablespace export file on the source system, copy the datafiles to both the primary and logical standby systems.
On the dataguard logical standby, within SQLplus, issue SQL> alter session disable guard;
Import the tablespaces into the logical standby database
SQL> alter session enable guard;
Import the tablespaces into the primary database.
In 10.1, we imported the tablespaces into an environment with a physical standby and then converted the physical standby to a logical standby.
I tested the procedure for 10.2 and it seemed to work. Has anyone else done this and if so, are you aware of any issues? Like I said, the documentation is one very brief paragraph.
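The sequence summarized above might be sketched as follows (the directory DP_DIR, tablespace name TTS1, and datafile path are placeholders, not from the original post):

```sql
-- On the source: export the tablespace metadata
-- $ expdp system DIRECTORY=DP_DIR DUMPFILE=tts1.dmp TRANSPORT_TABLESPACES=TTS1
-- Copy the datafiles to BOTH the primary and the logical standby hosts.

-- On the logical standby: bypass the guard, plug in, re-enable the guard
ALTER SESSION DISABLE GUARD;
-- $ impdp system DIRECTORY=DP_DIR DUMPFILE=tts1.dmp \
--       TRANSPORT_DATAFILES='/u01/oradata/tts1_01.dbf'
ALTER SESSION ENABLE GUARD;

-- Then plug the same tablespace into the primary with the same impdp command.
```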
Thanks. -
Logical Standby Dataguard experience. Robust? Reliable?
DBAs,
I'd like to hear from anyone with experience using logical standby Data Guard in 10gR2 or 11gR1. Do you feel this logical standby technology requires high operational overhead (e.g. lots of problems, bugs, not robust, error-prone, etc.)? The transaction load is rather high in my environment.
I only use physical standby so far.
Thanks in advance
Hi - first you need to know the purpose of requiring a logical standby.
If it's for active standby - you might consider active dataguard in 11g.
Also, you have to go through the docs for the list of limitations (regarding DB objects) on logical standby; there are quite a few of them.
A well-maintained/monitored system will have no more admin overhead (at least after initial setup) than the others.
cheers.
Lovell. -
When using DataGuard, the logical standby DB is already "OPEN"?
When I was setting up a physical standby DB several years ago (no Data Guard involved), I had the impression that the standby DB stays in MOUNT status.
But when using Data Guard to set up a logical standby DB, the logical standby can be used because it is already OPENed.
Is it actually in OPEN status? If so, can the primary and logical standby have the same SID without any trouble?
Please help advise. Thanks a lot!
Hello Christy,
Yes, it is OK that both databases are in OPEN status; that is also the main reason for using logical standby databases (you can run changes and queries on the standby too).
I hope you don't use a logical standby database in an SAP environment, because this is NOT supported. Check SAP Note 105047, point "14. Data Guard". A logical standby database has limitations.
If you just want to use the physical standby database for queries, you can open it read-only, or you can use flashback technology for read/write. With Oracle 11g these features are included in Data Guard itself (Active Data Guard / Snapshot Standby) to perform such steps automatically.
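As a footnote to the read-only option mentioned above, opening a physical standby for queries is roughly (a sketch; on releases before 11g Active Data Guard, resuming recovery requires remounting first):

```sql
-- On the physical standby: pause redo apply, then open read-only
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- When done querying, resume managed recovery
-- (shutdown / startup mount first on pre-11g releases)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```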
Regards
Stefan -
Import a schema (tables, Views, Stored Procedures) on logical standby
Hi,
We have a logical standby for reporting purposes. The logical standby was built through Data Guard.
We need to import a new user into the logical standby using the import utility. The user dump contains tables, views, procedures, packages, and roles.
The new user's import has to go into the USERS tablespace.
Is it possible to import a new user into a logical standby, and what are the steps?
Thanks in advance
Hi,
Can you give me more details about your environment configuration, O/S, and DB version?
But generally I don't think this is possible because, as you know, the standby is only for cloning the primary database; import the user on the production side and it will then be transferred to the logical standby database.
Regards; -
Switchover logical standby dataguard
hi
We are using Oracle 9i on Enterprise Linux 3. Our primary and logical standby databases are not located remotely but are on two different servers.
Now I need to know the steps to switch my primary database to the standby role and vice versa. I tried a few things but was unsuccessful.
According to my understanding, what we need to do is:
Make sure no data is arriving and APPLIED_SCN and NEWEST_SCN are equal.
Change the database role of the respective database.
Make changes in the initialization parameter file to change the primary destination to standby and vice versa.
Then start logical standby apply.
Is there something I am missing? I tried a few things from the net but was not able to understand the logic and was unsuccessful.
THANKS
OEM > Tools > Database Applications > Dataguard Manager.
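The conceptual steps listed above map roughly onto this SQL-level switchover sequence for a logical standby (a sketch of the 9i-era syntax; run each command on the host indicated):

```sql
-- On the primary: verify nothing is in flight, then switch roles
SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;

-- On the (old) logical standby: take over the primary role
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- On the new logical standby (the old primary): start SQL Apply
ALTER DATABASE START LOGICAL STANDBY APPLY;
```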
Select primary>Disable
Select secondary>Enable -
Dataguard Problem(logical standby database)
Hi,
I have successfully created a logical standby database, and everything is working fine; all of the SQL is applying and archived logs are shipping.
That is, until I create a new tablespace (e.g. PAY) in the primary database: suddenly SQL Apply stops, but the archives keep shipping.
I am using Windows XP SP2 and Oracle 10gRel2.
The contents of the alert log file are as follows:
Wed Jul 23 22:52:19 2008
Thread 1 cannot allocate new log, sequence 133
Checkpoint not complete
Current log# 3 seq# 132 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO03.LOG
Wed Jul 23 22:52:23 2008
Destination LOG_ARCHIVE_DEST_2 is SYNCHRONIZED
Wed Jul 23 22:52:23 2008
Destination LOG_ARCHIVE_DEST_2 no longer supports SYNCHRONIZATION
Wed Jul 23 22:52:23 2008
Thread 1 advanced to log sequence 133 (LGWR switch)
Current log# 1 seq# 133 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO01.LOG
Thread 1 cannot allocate new log, sequence 134
Checkpoint not complete
Current log# 1 seq# 133 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO01.LOG
Wed Jul 23 22:52:29 2008
Destination LOG_ARCHIVE_DEST_2 is SYNCHRONIZED
Wed Jul 23 22:52:29 2008
Destination LOG_ARCHIVE_DEST_2 no longer supports SYNCHRONIZATION
Wed Jul 23 22:52:29 2008
Thread 1 advanced to log sequence 134 (LGWR switch)
Current log# 2 seq# 134 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO02.LOG
Wed Jul 23 22:55:49 2008
Thread 1 cannot allocate new log, sequence 135
Checkpoint not complete
Current log# 2 seq# 134 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO02.LOG
Wed Jul 23 22:55:54 2008
Destination LOG_ARCHIVE_DEST_2 is SYNCHRONIZED
Wed Jul 23 22:55:54 2008
Destination LOG_ARCHIVE_DEST_2 no longer supports SYNCHRONIZATION
Wed Jul 23 22:55:54 2008
Thread 1 advanced to log sequence 135 (LGWR switch)
Current log# 3 seq# 135 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO03.LOG
When I use this command, SQL Apply starts again, but the tablespace is not created on the logical standby database.
Kindly give me a solution.
Thanks in advance.
In the standby database you also need to add the tablespace details for it to recognize the primary DB's new tablespace.
Try adding it and retry your operation.
Logical Standby working issues Oracle 9i, Windows
Hi,
Set up Oracle 9i logical standby on Windows (instructions as per Oracle documentation).
Did not have any issues setting up.
While setting up the logical standby, I recovered the primary database until Oct 10/09 8:16 pm,
registered the archive logs generated since then in the logical standby, and FAL took care of copying/registering the rest of the archive logs.
Created and inserted some records in Primary database and could see them in Standby.
So far so good.
On Oct 11 data was entered into the primary database. Archive logs were shipped to the standby, and I could see them registered in DBA_LOGSTDBY_LOG.
APPLIED_SCN and NEWEST_SCN were in sync as per DBA_LOGSTDBY_PROGRESS.
Today, we had some issues with data, and when we queried the user tables (no skip settings):
We couldn't see any data in the standby past the recovery point.
No errors were reported in DBA_LOGSTDBY_EVENTS. No errors in the alert log either.
What could be happening?
Thanks,
Madhuri
I figured it out...
Today, we had some issues with data, and when we queried the user tables (no skip settings) we couldn't see any data in the standby past the recovery point. I was using two tables as a random spot check and both did not get updated, so I was under the impression SQL Apply did not do anything.
But it did apply the redo on the rest of the tables.
The two tables in question were skipped because both of them had function-based indexes.
They are very large individual tables.
So: export them from the primary database and import them into the standby database, skipping their DML in Data Guard.
That solved the problem.
--Madhuri