Logical standby stops deleting archives automatically
We are running 11.1.0.7 on Windows 2008 64-bit.
The issue I am facing is that we have these settings implemented:
log_auto_delete = true
log_auto_delete_retention_target = 5
The logical standby works fine and deletes the archive logs coming from the primary, but after some days it stops working. I have to stop and start the database to make it work again. Has anybody run into this kind of situation? Please help.
Hello;
This might be worth a look:
Bug 13448652 - LOG_AUTO_DELETE setting reverts to FALSE on Logical Standby (Doc ID 13448652.8)
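If that bug is in play, the symptom is that the setting silently reverts. A hedged way to check the value SQL Apply is actually using, and to re-assert it (a sketch only; verify the parameter names against your release):

```sql
-- What does SQL Apply currently think the settings are?
SELECT name, value FROM dba_logstdby_parameters WHERE name LIKE 'LOG_AUTO%';

-- If LOG_AUTO_DELETE has reverted to FALSE, stop apply, set it back, restart
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'TRUE');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```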
Best Regards
mseberg
Similar Messages
-
Logical standby stopped lastnight
Subject: Logical standby stopped lastnight
Author: raghavendra rao yella, United States
Date: Nov 14, 2007
Os info: solaris 5.9
Oracle info: 10.2.0.3
Error info: ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
Message: Our logical standby stopped last night. We tried to stop and start the standby, but it didn't help.
Below are some of the queries to get the status:
APPLIED_SCN LATEST_SCN MINING_SCN RESTART_SCN
11962328446 11981014649 11961580453 11961536228
APPLIED_TIME LATEST_TIME MINING_TIME RESTART_TIME
07-11-13 09:09:41 07-11-14 10:26:26 07-11-13 08:57:53 07-11-13 08:56:36
sys@RP06>SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
TYPE HIGH_SCN STATUS
COORDINATOR 1.1962E+10 ORA-16116: no work available
READER 1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
BUILDER 1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
PREPARER 1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
ANALYZER 1.1962E+10 ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6c002
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
10 rows selected.
Select PID,
TYPE,
STATUS
From
V$LOGSTDBY
Order by
HIGH_SCN;
PID TYPE STATUS
17896 ANALYZER ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
17892 PREPARER ORA-16127: stalled waiting for additional transactions to be applied
17890 BUILDER ORA-16243: paging out 8144 bytes of memory to disk
17888 READER ORA-16127: stalled waiting for additional transactions to be applied
28523 COORDINATOR ORA-16116: no work available
17904 APPLIER ORA-16116: no work available
17906 APPLIER ORA-16116: no work available
17898 APPLIER ORA-16116: no work available
17900 APPLIER ORA-16116: no work available
17902 APPLIER ORA-16116: no work available
10 rows selected.
How can I get information about the transaction for which LogMiner is computing dependencies?
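As an aside on reading that ORA-16120 message: the SCN is printed as a wrap.base pair in hex, and can be converted to the decimal SCNs shown by APPLIED_SCN/MINING_SCN. A sketch (hex digits copied from the error above):

```sql
-- 0x0002.c8f6f182 = wrap * 2^32 + base
SELECT TO_NUMBER('0002', 'XXXX') * POWER(2, 32)
     + TO_NUMBER('C8F6F182', 'XXXXXXXX') AS scn
  FROM dual;
-- -> 11961561474, which sits between the RESTART_SCN and MINING_SCN shown above
```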
Let me know if you have any questions.
Thanks in advance.
Message was edited by: raghu559
Hi reega,
Thanks for your reply, our logical stdby has '+RT06_DATA/RT06'
and primary has '+OT06_DATA/OT06TSG001'
so we are using the db_file_name_convert init parameter, but it doesn't work.
Are there any particular steps needed to use this parameter? When I tried this parameter for RMAN cloning it didn't work either; as a workaround I used the RMAN SET NEWNAME command for cloning.
Let me know if you have any questions.
Thanks in advance. -
Logical standby stopped when trying to create partitions on primary (Urgent)
RDBMS Version: 10.2.0.3
Operating System and Version: Solaris 5.9
Error Number (if applicable): ORA-1119
Product (i.e. SQL*Loader, Import, etc.): Data Guard on RAC
Product Version: 10.2.0.3
Primary is a 2-node RAC on ASM; we implemented partitions on the primary.
The logical standby stopped applying logs.
Below is the alert.log for logical stdby:
Current log# 4 seq# 860 mem# 0: +RT06_DATA/rt06/onlinelog/group_4.477.635601281
Current log# 4 seq# 860 mem# 1: +RECO/rt06/onlinelog/group_4.280.635601287
Fri Oct 19 10:41:34 2007
create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL
Fri Oct 19 10:41:34 2007
ORA-1119 signalled during: create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL...
LOGSTDBY status: ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
LOGSTDBY Apply process P004 pid=49 OS id=16403 stopped
Fri Oct 19 10:41:34 2007
Errors in file /u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc:
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
Here is the trace file info:
/u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0
System name: SunOS
Node name: iscsv341.newbreed.com
Release: 5.9
Version: Generic_118558-28
Machine: sun4u
Instance name: RT06
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 16387, image: [email protected] (LSP0)
*** 2007-10-19 10:41:34.804
*** SERVICE NAME:(SYS$BACKGROUND) 2007-10-19 10:41:34.802
*** SESSION ID:(1614.205) 2007-10-19 10:41:34.802
knahcapplymain: encountered error=12801
*** 2007-10-19 10:41:34.804
ksedmp: internal or fatal error
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
KNACDMP: *******************************************************
KNACDMP: Dumping apply coordinator's context at 7fffd9e8
KNACDMP: Apply Engine # 0
KNACDMP: Apply Engine name
KNACDMP: Coordinator's Watermarks ------------------------------
KNACDMP: Apply High Watermark = 0x0000.0132b0bc
Sorry, our primary database file structure is different from the standby's. We used db_file_name_convert in the init.ora; it looks like this:
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+OT06_DATA/OT06TSG001/','+RT06_DATA/RT06/','+RECO/OT06TSG001','+RECO/RT06'
*.db_files=2000
*.db_name='OT06'
*.db_recovery_file_dest='+RECO'
Is there anything wrong with this parameter?
I tried this parameter before for cloning using an RMAN backup. It didn't work.
What exactly must be done for db_file_name_convert to work?
Even in this case I think this is the problem: it's not converting the location, and the logical standby halts.
Please help me out.
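For the specific failure here (tablespace DDL replayed with the primary's ASM diskgroup), one workaround sometimes used - sketched here as an assumption, not a confirmed fix for this thread - is to tell SQL Apply to skip tablespace DDL entirely and create the tablespaces manually on the standby with the correct diskgroup:

```sql
-- Skip CREATE/ALTER/DROP TABLESPACE statements on the logical standby
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'TABLESPACE');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

-- Then issue the equivalent DDL by hand with the standby's diskgroup, e.g.
-- create tablespace INVACC200740 datafile '+RT06_DATA' size 10M ...;
```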
let me know if you have any questions.
Thanks Regards
Raghavendra rao Yella. -
Data guard - logical standby in no archive log
We are creating a logical standby for reporting purposes only. I see no reason why it should be in archivelog mode. Would this hamper our current situation?
The solution is already approved; the only thing I could do is wonder whether a backup is necessary for the logical standby.
It's not going to be used for any purpose other than reporting.
From what I understand, noarchivelog mode should not affect it. Is this understanding true? -
Logical Standby Stops Applying Up to a Point 10GR2
Hi, I'm running a standby on 10.2.0.2
There are no sequence gaps. I registered all the datafiles so it sees the ones before and after, yet in OEM it shows this:
Log Status ResetLogs ID # First Change # (SCN) Last Change # (SCN) Size (KB)
35334 Committed Transactions Applied 688864038 4819403033 4819404135 10782
35335 Partially Applied 688864038 4819404135 4819404151 92
35336 Not Applied 688864038 4819404151 4819404179 87
Alert log:
ORA-01281: SCN range specified is invalid.
I have tried doing a recover until SCN to no avail.
Hello;
I believe i would review and follow this Oracle document:
(UNREGISTER logfile On Logical Standby (Doc ID 1416433.1))
Best Regards
mseberg -
When I try to download an update from the App Store, the dropdown menu automatically inserts my Apple ID (which is wrong). It's in gray and will not allow me to type in the correct ID. How do I stop the auto insert so that I can enter the correct ID?
The original Apple ID was entered incorrectly. Apple says it must be an email address. My email address is correct, but the letter (t) somehow got added to the (.com), so that when I try to enter my ID it is automatically loaded with the email address that ends in (.comt). According to Apple this is not a "legal" email ID. But it appears in my dropdown in gray and I cannot alter or delete it, thus preventing me from downloading software updates.
-
How to delete archive logs on the standby database....in 9i
Hello,
We are planning to setup a data guard (Maximum performance configuration ) between two Oracle 9i databases on two different servers.
The archive logs on the primary server are deleted via an RMAN job based on a policy; I'm just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that would somehow automatically delete archive logs that are no longer needed or are two days old?
thanks,
C.
From 10g there is an option: an archivelog deletion policy that purges archives once they have been applied on the standby. Check this note:
*Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]*
Since it is still 9i, you need to schedule an RMAN job or a shell script to delete the archives.
Before deleting archives:
1) you need to check whether all the archives have been applied
2) then you can remove all the archives completed before 'sysdate-2':
RMAN> delete archivelog all completed before 'sysdate-2';
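To verify check 1) above before running the delete, a hedged query on the standby (assuming V$ARCHIVED_LOG is populated there; verify on 9i):

```sql
-- Any rows returned are older than 2 days but not yet applied - don't delete those
SELECT sequence#, completion_time, applied
  FROM v$archived_log
 WHERE completion_time < SYSDATE - 2
   AND applied = 'NO';
```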
As per your requirement. -
Refresh Schema in Primary (and Logical Standby)
Does anyone have a recommendation for the best way to refresh a 16Gb schema in a Primary Database that has a logical standby?
I attempted to refresh our main schema in our Primary by using IMPDP. I tried to follow the same steps as I would if I had to upgrade a Primary database with a Logical Standby in place. I deferred the log archive dest state associated with the logical standby, stopped the SQL Apply on the standby and disabled Data Guard, then dropped the schema in the Standby, created and imported the schema. I then performed the drop, recreate and import in the Primary. After that I did a DBMS_LOGSTDBY.BUILD on the Primary. Finally I enabled DG, restarted the apply and enabled the log arch dest on the primary.
One issue I did have was that I did not defer the archive logs until after the import had started, so it did send some archives over - of course DG was disabled and the apply was off, but now the status of DG is normal. However, I have a big gap between Last Received Log and Last Applied Log, and of the missing archives, the oldest have 'Committed Transactions Applied', newer ones have 'Not Applied' and the newest have 'Not Received'.
I think I'm hosed, but I am confused about the best approach. I did try just performing the import on the Primary previously (last year), but I remember that the volume of data killed the replication.
This is 10.2.0.4 on Windows 2003 Server (64bit)
Thanks
Graeme
I suppose that you did: alter database open resetlogs; right?
In first place, try to see any error in redo transport:
alter system switch logfile;
select status, error from v$archive_dest where dest_id = 2;
any error?
for more information, please check:
http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_45.shtml -
Hi Friends,
I have 4 doubts. Please help me to clear my confusion.
Doubt 1:-
I have one primary and one logical standby database. When I restart the primary database or the logical standby database, or whenever data is not transferring to the standby database, I execute the package "EXECUTE DBMS_LOGSTDBY.BUILD;". Then it starts to transfer the data to the standby. My question is: will executing this package many times affect the primary database? Or is my configuration wrong?
Doubt 2:-
I configured DGMGRL for both databases, and I can switch over and switch back to the logical standby database. It automatically starts to apply the data to the new logical standby database. But when I switch back to the old primary database, it doesn't transfer the data automatically. So I execute the package "EXECUTE DBMS_LOGSTDBY.BUILD;" on the primary. Then it starts to transfer the data. My question is: is my switchback wrong, or does it just happen like that?
Note: I checked the verbose status of both databases before switchback. It showed success.
Doubt 3:-
When I check the switchover status on the logical standby database, it shows "NOT ALLOWED", even though I can switch over by using DGMGRL. Why is the logical standby database showing "NOT ALLOWED"?
Doubt 4:-
Is there any way to view what commands the DGMGRL process executes in the background while it is switching over to the logical standby database?
Because when I try to switch over manually it doesn't work; I tried many times. But in DGMGRL, I simply give "switchover to logdb;" and it works.
Please help me to understand clearly.
Thank you in advance.
Wrong forum, mate. This forum deals with the SQL and PL/SQL languages and related issues - not database administration and configuration issues.
The shared memory error is not unusual, though, in terms of SQL and PL/SQL. It is often a result of not using sharable SQL - in other words, SQL statements that are not using bind variables.
This causes additional consumption and fragmentation of shared pool memory. The very worst thing to do in this case would be to increase the size of the shared pool. That is akin to moving the wall a bit further away so that one can run even faster and harder into it.
But I'm merely speculating as you may have another cause for this problem. I suggest that you:
- post this in the [url http://forums.oracle.com/forums/forum.jspa?forumID=61]Database Forum
- research this error on [url http://metalink.oracle.com]Metalink -
Hi,
How can I configure the logical standby process to start automatically after I restart the database?
Thanks
What do you consider the "logical standby process"? The process that copies log files from the primary to the standby? Or the process that applies the log files to the standby database?
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Logical standby | archive log deleted | how to remove gap ???
hi gurus...
I have a problem on a logical standby.
By mistake, a standby log coming to the logical standby has been deleted. Now how do I fill up the gap?
ON STANDBY
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# APPLIED
228 674847 674872 YES
229 674872 674973 CURRENT
230 674973 674997 NO
231 674997 675023 NO
232 675023 675048 NO
233 675048 675109 NO
234 675109 675135 NO
235 675135 675160 NO
236 675160 675183 NO
237 675183 675208 NO
238 675208 675232 NO
239 675232 675257 NO
240 675257 675282 NO
241 675282 675382 NO
242 675382 675383 NO
243 675383 675650 NO
244 675650 675652 NO
245 675652 675670 NO
246 675670 675688 NO
247 675688 675791 NO
248 675791 678524 NO
Archive logs are shipping to the standby location and also getting registered.
ALERT LOG OF STANDBY
Fri May 7 12:25:36 2010
Primary database is in MAXIMUM PERFORMANCE mode
RFS[21]: Successfully opened standby log 5: '/u01/app/oracle/oradata/BEST/redo05.log'
Fri May 7 12:25:37 2010
RFS LogMiner: Registered logfile [u01/app/oracle/flash_recovery_area/BEST/archivelog/archBEST_248_1_715617824.dbf] to LogMiner session id [1]
But I don't have standby logs after sequence 229 ...
ON PRIMARY
SYS@TEST AS SYSDBA> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/flash_recovery_area/TEST/standlog
Oldest online log sequence 247
Next log sequence to archive 249
Current log sequence 249
What to do next to apply the sequences and bring both in sync?
Please help me.
Edited by: user12281508 on May 7, 2010 9:45 AM
Thanks for the response.
No, it is a pure logical standby.
I have tried to FTP the archive logs of the primary to the standby and apply them manually.
SYS@BEST AS SYSDBA> alter database register logfile '/u01/app/oracle/flash_recovery_area/BEST/archivelog/archBEST_230_1_715617824.dbf';
alter database register logfile '/u01/app/oracle/flash_recovery_area/BEST/archivelog/archBEST_230_1_715617824.dbf'
ERROR at line 1:
ORA-01289: cannot add duplicate logfile
SYS@BEST AS SYSDBA> alter database register logfile '/u01/app/home/archTEST_230_1_715617824.dbf';
alter database register logfile '/u01/app/home/archTEST_230_1_715617824.dbf'
ERROR at line 1:
ORA-01289: cannot add duplicate logfile
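ORA-01289 suggests those sequences are already registered with SQL Apply. Before copying anything else over, a hedged check of what SQL Apply already knows about (using the same view that produced the SEQUENCE#/APPLIED listing above):

```sql
-- Registered logs and their apply status; a real gap shows up as missing sequence#s
SELECT sequence#, file_name, applied
  FROM dba_logstdby_log
 ORDER BY sequence#;
```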
any other way ???? -
Stop/start logical standby db
Version 10203 on AIX
I have to stop/start a logical standby db. I'm new to Data Guard; please confirm these are the steps to do that.
on primary db
SQL > alter system switch logfile ;
SQL > alter system archive log current ; ( to make sure current transactions come thru)
check tail of alert log of standby to make sure these redologs shipped & mined
standby db
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY; (stop SQL Apply)
SQL> shutdown immediate;
Lsnrctl stop listener_corp_remrpt-ha
primary db
SQL > shutdown immediate ;
Lsnrctl stop listener_corp_remprd-ha
Don't shutdown abort in any case. If both dbs are going down, first stop SQL Apply on the standby, take the primary down and then take the standby down.
Startup
primary
SQL>startup;
Lsnrctl start listener_corp_remprd-ha
Standby
SQL > startup
SQL > alter database start logical standby apply immediate ;
Lsnrctl start listener_corp_remrpt-ha
Hi
As you posted, you are using real-time SQL Apply, so it is LGWR that transfers the changes to the standby site. It is very safe for a new user to follow these steps:
1.Stop logical standby apply(Standby Database).
2.Shutdown Primary Database.
3.Shutdown logical standby Database.
At startup
1.Start Logical Standby Database.
2.Start Primary Database.
3.Start logical standby apply.
In the case when the primary database is taking a long time to shut down, you can also use
shutdown abort, but before doing the abort be sure you have stopped logical standby apply. When your primary database is started, it automatically performs instance recovery. The primary site will have to resolve the gap in this case.
In the case when you must perform shutdown abort on the primary database, you can do it. By doing it you will not lose anything. The primary database has to resolve the gap, and it will take time for the standby to become consistent with the primary site again.
Tinku -
Logical Standby - Log auto delete
Hello Everybody,
We have a logical standby database which is working fine. But the only concern is with the deletion of archive logs that have been applied.
I want to auto-delete the archives that are applied. Until yesterday all the applied archives were getting deleted automatically, but now somehow they are not getting deleted.
Nothing has been changed since yesterday, and I even checked the alert log; no error has been recorded, but it simply doesn't delete the archives.
As a test I restarted SQL Apply; it applies the archive logs but it doesn't delete the archives that have been applied.
Any help or suggestions would be great.
Thanks
Thanks for your response.
As per the links you provided, these are about the archive deletion policy for a primary and physical standby, which we configure using RMAN.
But in the logical standby case, archive logs are automatically deleted after they are applied.
So whenever we start SQL Apply, the following are the alert log entries, and it should automatically delete archives after they get applied:
ALTER DATABASE START LOGICAL STANDBY APPLY
Thu Aug 20 16:34:14 2009
ALTER DATABASE START LOGICAL STANDBY APPLY (oracle)
Thu Aug 20 16:34:14 2009
No optional part
Attempt to start background Logical Standby process
LSP0 started with pid=23, OS id=4715
Thu Aug 20 16:34:15 2009
LOGSTDBY Parameter: LOG_AUTO_DELETE = TRUE
Completed: ALTER DATABASE START LOGICAL STANDBY APPLY
Thu Aug 20 16:34:15 2009
LOGSTDBY status: ORA-16111: log mining and apply setting up
Thu Aug 20 16:34:15 2009
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
LOGMINER: session# = 1, reader process P000 started with pid=24 OS id=4717
LOGMINER: session# = 1, builder process P001 started with pid=25 OS id=4719
LOGMINER: session# = 1, preparer process P002 started with pid=26 OS id=4721
Thu Aug 20 16:34:15 2009
LOGMINER: Begin mining logfile: /u01/app/oracle/archive/oracle/ARCH_ORACL_1309_1_692680471.arc
Thu Aug 20 16:34:15 2009
LOGMINER: Turning ON Log Auto Delete
Thu Aug 20 16:34:15 2009
But in my case it's not deleting archives. What could be the reason?
Please any help would be great -
How to delete the foreign archivelogs in a Logical Standby database
How do I remove the foreign archive logs that are being sent to my logical standby database? I have files in the FRA of ASM going back weeks. I thought RMAN would delete them.
I am doing hot backups of the databases to FRA for both databases. Using ASM, FRA, in a Data Guard environment.
I am not backing up anything to tape yet.
The ASM FRA foreign_archivelog directory on the logical standby keeps growing and nothing gets deleted when I run the following commands every day.
delete expired backup;
delete noprompt force obsolete;
Primary database RMAN settings (Not all of them)
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE DB_UNIQUE_NAME 'WMRTPRD' CONNECT IDENTIFIER 'WMRTPRD_CWY';
CONFIGURE DB_UNIQUE_NAME 'WMRTPRD2' CONNECT IDENTIFIER 'WMRTPRD2_CWY';
CONFIGURE DB_UNIQUE_NAME 'WMRTPRD3' CONNECT IDENTIFIER 'WMRTPRD3_DG';
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
Logical standby database RMAN setting (not all of them)
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
How do I cleanup/delete the old ASM foreign_archivelog files?
OK, the default is TRUE, which is what it is now,
from DBA_LOGSTDBY_PARAMETERS
LOG_AUTO_DELETE TRUE SYSTEM YES
I am not talking about deleting the archive log files that the logical database itself creates, but the standby archive log files being sent to the logical database, after they have been applied.
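For those foreign archived logs, RMAN's deletion policy generally doesn't help; SQL Apply itself tracks which ones it still needs. A hedged check (view and procedure names assumed from the 10.2+ documentation; verify on your release):

```sql
-- Ask SQL Apply to release logs it no longer needs, then list files safe to remove
EXECUTE DBMS_LOGSTDBY.PURGE_SESSION;
SELECT * FROM dba_logmnr_purged_log;
```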
They appear in the alert log as follows, under RFS LogMiner: Registered logfile:
RFS[1]: Selected log 4 for thread 1 sequence 159 dbid -86802306 branch 763744382
Thu Jan 12 15:44:57 2012
*RFS LogMiner: Registered logfile [+FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297] to LogM*
iner session id [1]
Thu Jan 12 15:44:58 2012
LOGMINER: Alternate logfile found. Transition to mining archived logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/
foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297
LOGMINER: End mining logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/threa
d_1_seq_158.322.772386297
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 159, +DG1/wmrtprd2/onlinelog/group_4.284.771760923 -
Skip the DELETE command on logical standby
Hi All,
I want to skip the DELETE command on logical standby.
DB Version - 10.2
OS - Linux
Primary DB and logical standby DB .
In our DB schema there are some transaction tables. We delete data from those tables with DELETE commands.
The DELETE command also deletes data from the logical standby DB, but we want to skip it on the logical standby DB.
I used the following for that and got an error:
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP (stmt =>'DELETE TABLE', schema_name =>'TEST',object_name =>'TRANS',proc_name => null);
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
But I got an error:
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'SKIP'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
When I change stmt => 'DELETE TABLE' to stmt => 'DML', no error happens.
Please help me solve this issue. This is urgent.
Thanks in advance.
Regards
Dear aditi2,
Actually it is quite simple to understand the problem. Please read the following documentation and try to understand the SKIP procedure:
http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10802/d_lsbydb.htm#997290
*SKIP Procedure*
Use the SKIP procedure to define filters that prevent the application of SQL statements on the logical standby database.
By default, all SQL statements executed on a primary database are applied to a logical standby database.
If only a subset of activity on a primary database is of interest for application to the standby database,
you can use the SKIP procedure to define filters that prevent the application of SQL statements on the logical standby database.
While skipping (ignoring) SQL statements is the primary goal of filters,
it is also possible to associate a stored procedure with a DDL filter so that runtime determinations can be made whether to skip the statement,
execute this statement, or execute a replacement statement.
Syntax
DBMS_LOGSTDBY.SKIP (
stmt IN VARCHAR2,
schema_name IN VARCHAR2,
object_name IN VARCHAR2,
proc_name IN VARCHAR2,
use_like IN BOOLEAN,
esc IN CHAR1);
Hope That Helps.
Ogan
Edited by: Ogan Ozdogan on 30 Jul 2010 13:03
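Given the signature above, the PLS-00306 is avoided by passing a supported statement option. A sketch that skips all DML on TEST.TRANS - using 'DML', which the poster found works where 'DELETE TABLE' does not; nothing in this thread shows a per-DELETE statement option in this release:

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- Skip every DML statement against TEST.TRANS on the logical standby
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'TEST', object_name => 'TRANS');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```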