Instantiation and start_scn of capture process
Hi,
We are working on Streams replication, and I have a doubt about the behavior of the stream.
During setup, we have to instantiate the database objects whose data will be transferred during the process. This instantiation creates the object at the destination DB and sets the SCN value beyond which changes from the source DB will be accepted. During creation of the capture process, the capture process is assigned a specific start_scn value; it starts capturing changes beyond this value and puts them in the capture queue.
If the capture process gets aborted in between, and we have no alternative other than re-creating the capture process, what happens to the data created during that drop/re-create procedure? Do I need to physically extract the data and import it at the destination DB? Since we have instantiated objects at the destination DB, why isn't there some mechanism by which the new capture process starts capturing changes from the lowest instantiation SCN among all instantiated tables? Is there any workaround other than exp/imp when the source and destination schemas are out of sync because of a failure of the capture process? We did face this problem, and could find only one workaround: exp/imp of the data.
thanx,
Thanks Mr SK.
The following queries give some confirmation:
source DB
SELECT SID, SERIAL#, CAPTURE#, CAPTURE_MESSAGE_NUMBER, ENQUEUE_MESSAGE_NUMBER, APPLY_NAME, APPLY_MESSAGES_SENT FROM V$STREAMS_CAPTURE;
target DB
SELECT SID, SERIAL#, APPLY#, STATE, DEQUEUED_MESSAGE_NUMBER, OLDEST_SCN_NUM FROM V$STREAMS_APPLY_READER;
One more question :
Is there a maximum limit on the number of DBs involved in Oracle Streams?
Thanks,
SM.Kumar
Similar Messages
-
CAPTURE process error - missing Archive log
Hi -
I am getting a "cannot open archived log 'xxxx.arc'" message when I try to start a newly created capture process. The archive files have been moved by the DBAs.
Is there a way to set the capture process to start from a new archive?
I tried
exec DBMS_CAPTURE_ADM.ALTER_CAPTURE ( capture_name => 'STRMADMIN_SCH_CAPTURE', start_scn =>9668840362577);
I got the new SCN from DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER().
But I still get the same error.
Any ideas ?
Thanks,
Sadeepa

If you are on 9i, I know that trying to reset the SCN that way won't work. You have to drop and recreate the capture process. You can leave all the rules and rule sets in place, but I think you have to prepare all of the tables again.
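A minimal sketch of the drop/re-create cycle described above (the table name and rule set name are hypothetical; it assumes the Streams administrator owns the queue and rule set):

```sql
-- Drop the existing capture process; rules and rule sets can be kept.
BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(capture_name => 'STRMADMIN_SCH_CAPTURE');
END;
/

-- Re-prepare each replicated table for instantiation.
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'SCOTT.EMP');
END;
/

-- Re-create the capture process, reusing the existing rule set.
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name    => 'STRMADMIN.STREAMS_QUEUE',
    capture_name  => 'STRMADMIN_SCH_CAPTURE',
    rule_set_name => 'STRMADMIN.CAPTURE_RULESET');
END;
/
```

With no explicit first_scn/start_scn, the new capture starts from a fresh dictionary build rather than the old process's position.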
-
Location of Capture Process and Perf Overhead
Hi,
We are just starting to look at Streams technology. I am reading the doc and it implies that the capture process is run on the source database node. I am concerned of the overhead on the OLTP box. I have a few questions I was hoping to get clarification on.
1. Can I send the redo log to another node/db with data dictionary info and run the capture there? I would like to offload the perf overhead to another box and I thought Logminer could do it, so why not Streams.
2. If I run the capture process on one node/db can the initial queue I write to be on another node/db or is it implicit to where I run the capture process? I think I know this answer but would like to hear yours.
3. Is there any performance data on the cost of the capture process to an OLTP system? I realize there are many variables, but am wondering whether I should even be concerned with offloading the capture process.
Many thanks in advance for your time.
Regards,
Tom

In the current release, Oracle Streams performs all capture activities at the source site. The ability to capture changes from the redo logs at an alternative site is planned for a future release. Captured changes are stored in an in-memory buffer queue on the local database. Multi-CPU servers with enough available memory should be able to handle the overhead of capture.
-
Capture Process hangs and LOGMINER Stops - "wait for transaction" ???
Hi all,
Any ideas why LogMiner would stop mining the logs at the capture site DB? It just hangs 40 short of the current archivelog.
The capture process has a status of "Capturing Changes",
and the wait event on the capture process is "wait for transaction".
How do I diagnose what's wrong with the capture process? It's been this way for 4 days!

Hi,
Yes, we have had to explicitly register archivelogs also.
Unfortunately this archivelog is registered, so I am not sure. It appears to have been the result of a large DML transaction, and I am not 100% sure whether the archivelog is possibly corrupt (though I doubt it; in 5 years as a DBA I have not once hit a corruption, but there is always a first time).
Any thoughts on how to proceed ? -
Getting Updated image through Config manager Build and capture process
Hi guys,
Looking for some pointers on getting an almost fully patched image out of the Configuration Manager build and capture process.
I need pointers on the following:
- What and where should I advertise/deploy software packages? Since it's a new deployment, I believe they should be targeted to All Unknown Computers.
- How many times should I call 'install update' step in TS and where?
- What is the best way to patch the image offline if something was missed during image capture (since it's Config Manager 2012 R2)? When I pull the scheduled update list for offline servicing, does that only pull updates that are required for that image, or does it pull everything available as updates?
- I have done offline patching of the image, but once the image is deployed it still pulls Office updates. It looks like offline servicing just injects OS patches, right?
Regards,

- Where? That depends upon what you want. But going by the letter of what you asked, you shouldn't deploy any applications or packages; if you want them on your newly imaged systems, you should make them part of the TS using Install Software and Install Application tasks.
- How many times? You should only need one, although depending upon what you are deploying and whether you are using offline updates, some folks add more. I typically use two: one right after the Setup Windows and ConfigMgr task (just like the task sequence wizard builds for you) to update the OS and "things" in the image, and one at the end to catch additional updates for apps and components added during the TS. Note that you also have to initiate an update scan cycle for additional Install Updates tasks to work. Do this by adding a Run Command Line task before the second Install Updates task with the proper WMIC incantation.
- Offline servicing only injects CBS updates into the image, which are typically just core OS updates. You should also update your image to include non-CBS updates. Using a build and capture task sequence makes this easy.
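The "WMIC incantation" mentioned above is commonly a trigger of the client's Software Updates Scan Cycle. One typical form is sketched below; the GUID is the well-known scan-cycle schedule ID, but verify it against your own environment before relying on it:

```
WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000113}" /NOINTERACTIVE
```

Placed in a Run Command Line task before the second Install Updates step, this forces a fresh scan so the later step sees newly applicable updates.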
Jason | http://blog.configmgrftw.com -
Oracle Streams: System Change Number (SCN) and capture process
Do we have to get the SCN before the capture process is started? If yes, from where is replication started?
Will the replication process start from the time when the SCN is captured,
or
will replication start from the time when the capture process is started?
Edited by: [email protected] on Mar 26, 2009 6:04 PM

I am trying to set up Oracle Streams to enable replication for a set of tables.
One of the steps, as per the doc, is to set up/get the SCN; it's achieved by the following piece of code.
CONNECT STRMADMIN/STRMADMINPW@<CONNECT_STRING_SOURCE>

DECLARE
  V_SCN NUMBER;
BEGIN
  V_SCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@DB_LINK_TARGET_DB(
    SOURCE_OBJECT_NAME   => '<SCOTT.EMP>',
    SOURCE_DATABASE_NAME => 'SOURCE_DATABASE',
    INSTANTIATION_SCN    => V_SCN);
END;
/
STRMADMIN is a generic user account (the Streams administrator) used to manage Oracle Streams. -
Several capture processes per queue
Hi everybody,
I've been trying out a Streams configuration on 10gR2, according to the tutorial by Sanjay Mishra on OTN, and ... everything works great.
But...
I've tried to configure several apply processes reading from one queue, and several capture processes sending data to one single queue, and I can't get it to work. Is there something special to take care of, or is it supposed to be this way?
Thanks in advance - Nicolas

Hi Nicolas,
If all the tables are stored in the same source database, you can create just one capture process and use rules to filter the tables you'd like to include in your replication.
For example, let's suppose you have a schema called 'HR' and you would like to capture the changes for tables TAB1 and TAB2 only.
After setting your queue you have to follow these steps:
1) Create a rule set
begin
  dbms_rule_adm.create_rule_set(
    rule_set_name      => 'capture_ruleset',
    evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT');
end;
/
2) Create the rules for TAB1 and TAB2
begin
  dbms_rule_adm.create_rule(
    rule_name => 'TAB1_RULE',
    condition => ':dml.get_object_owner()=''HR'' and :dml.get_object_name()=''TAB1''');
  dbms_rule_adm.create_rule(
    rule_name => 'TAB2_RULE',
    condition => ':dml.get_object_owner()=''HR'' and :dml.get_object_name()=''TAB2''');
end;
/
3) Add the rules to your rule set
begin
  dbms_rule_adm.add_rule(rule_set_name => 'CAPTURE_RULESET', rule_name => 'TAB1_RULE');
  dbms_rule_adm.add_rule(rule_set_name => 'CAPTURE_RULESET', rule_name => 'TAB2_RULE');
end;
/
4) Create the capture process
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'streams_queue',
    capture_name      => 'my_capture',
    rule_set_name     => 'capture_ruleset',
    start_scn         => NULL,
    source_database   => NULL,
    use_database_link => false,
    first_scn         => NULL);
END;
/
And that's it!
If you need to add another table, you just create a new rule and add it to your capture rule set. Also, don't forget to prepare the table for instantiation and to set up supplemental logging.
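A short sketch of those last two steps, assuming the HR.TAB1 table from the example above (on 10g, PREPARE_TABLE_INSTANTIATION can also add key supplemental logging for you via its supplemental_logging parameter):

```sql
-- Prepare the table for instantiation at the source.
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'HR.TAB1');
END;
/

-- Add unconditional supplemental logging of primary-key columns
-- so the apply process can identify the target rows.
ALTER TABLE HR.TAB1 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```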
Let me know if you have any questions,
Aldo -
Error running Archived-Log Downstream Capture Process
I have created a Archived-Log Downstream Capture Process with ref. to following link
http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
After starting the capture process, I get the following error in the trace:
============================================================================
Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: localhost.localdomain
Release: 2.6.18-194.el5
Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
Machine: x86_64
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 37
Unix process pid: 13572, image: [email protected] (CP01)
*** 2011-08-20 14:21:38.899
*** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
*** CLIENT ID:() 2011-08-20 14:21:38.899
*** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
*** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
*** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
knlcCopyPartialCapCtx(), setting default poll freq to 0
knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
knlcObtainRuleSetNullLock: rule set name
knlcmaInitCapPrc+
knlcmaGetSubsInfo+
knlqgetsubinfo
subscriber name EMP_DEQ
subscriber dblinke name
subscriber name APPLY_EMP
subscriber dblinke name
knlcmaTerm+
knlcmaTermSrvs+
knlcmaTermSrvs-
knlcmaTerm-
knlcCCAInit()+, err = 26802
knlcnShouldAbort: examining error stack
ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
knlcnShouldAbort: examing error 26802
knlcnShouldAbort: returning FALSE
knlcCCAInit: no combined capture and apply optimization err = 26802
knlzglr_GetLogonRoles: usr = 91,
knlqqicbk - AQ access privilege checks:
userid=91, username=STRMADMIN
agent=STRM05_CAPTURE
knlqeqi()
knlcRecInit:
Combined Capture and Apply Optimization is OFF
Apply-state checkpoint mode is OFF
last_enqueued, last_acked
0x0000.00000000 [0] 0x0000.00000000 [0]
captured_scn, applied_scn, logminer_start, enqueue_filter
0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
flags=0
Starting persistent Logminer Session : 13
krvxats retval : 0
CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
krvxssp retval : 0
krvxsda retval : 0
krvxcfi retval : 0
#1: krvxcfi retval : 0
#2: krvxcfi retval : 0
About to call krvxpsr : startscn: 0x0000.0004688c
state before krvxpsr: 0
dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
*** 2011-08-20 14:21:41.810
Begin knlcDumpCapCtx:*******************************************
Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
Capture Name: STRM05_CAPTURE : Instantiation#: 65
*** 2011-08-20 14:21:41.810
++++ Begin KNST dump for Sid: 146 Serial#: 2274
Init Time: 08/20/2011 14:21:38
++++Begin KNSTCAP dump for : STRM05_CAPTURE
Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
Capture_Message_Number: 0x0000.00000000 [0]
Capture_Message_Create_Time: 01/01/1988 00:00:00
Enqueue_Message_Number: 0x0000.00000000 [0]
Enqueue_Message_Create_Time: 01/01/1988 00:00:00
Total_Messages_Captured: 0
Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
Total_Full_Evaluations: 0
Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
Apply_Name :
Apply_DBLink :
Apply_Messages_Sent: 0
++++End KNSTCAP dump
++++ End KNST DUMP
+++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
Capture_Type: DOWNSTREAM
Version:
Source_Database: ORCL2.LOCALDOMAIN
Use_Database_Link: NO
Logminer_Id: 13 Logfile_Assignment: EXPLICIT
Status: ENABLED
First_Scn: 0x0000.0004688c [288908]
Start_Scn: 0x0000.0004688c [288908]
Captured_Scn: 0x0000.0004688c [288908]
Applied_Scn: 0x0000.0004688c [288908]
Last_Enqueued_Scn: 0x0000.00000000 [0]
Capture_User: STRMADMIN
Queue: STRMADMIN.STREAMS_QUEUE
Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
Checkpoint_Retention_Time: 60
+++ End DBA_CAPTURE dump
+++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
PARALLELISM = 1 Set_by_User: NO
STARTUP_SECONDS = 0 Set_by_User: NO
TRACE_LEVEL = 7 Set_by_User: YES
TIME_LIMIT = -1 Set_by_User: NO
MESSAGE_LIMIT = -1 Set_by_User: NO
MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
WRITE_ALERT_LOG = TRUE Set_by_User: NO
DISABLE_ON_LIMIT = FALSE Set_by_User: NO
DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
SPLIT_THRESHOLD = 1800 Set_by_User: NO
MERGE_THRESHOLD = 60 Set_by_User: NO
+++ End DBA_CAPTURE_PARAMETERS dump
+++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
+++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
++ LogMiner Session Dump Begin::
SessionId: 13 SessionName: STRM05_CAPTURE
Start SCN: 0x0000.00000000 [0]
End SCN: 0x0000.00046c2d [289837]
Processed SCN: 0x0000.0004689e [288926]
Prepared SCN: 0x0000.000468d4 [288980]
Read SCN: 0x0000.000468e2 [288994]
Spill SCN: 0x0000.00000000 [0]
Resume SCN: 0x0000.00000000 [0]
Branch SCN: 0x0000.00000000 [0]
Branch Time: 01/01/1988 00:00:00
ResetLog SCN: 0x0000.00000001 [1]
ResetLog Time: 08/18/2011 16:46:59
DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
krvxvtm: Enabled threads: 1
Current Thread Id: 1, Thread State 0x01
Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
Current Session State: 0x20005, Current LM Compat: 0xb200000
Flags: 0x3f2802d8, Real Time Apply is Off
+++ Additional Capture Information:
Capture Flags: 4425
Logminer Start SCN: 0x0000.0004688c [288908]
Enqueue Filter SCN: 0x0000.0004688c [288908]
Low SCN: 0x0000.00000000 [0]
Capture From Date: 01/01/1988 00:00:00
Capture To Date: 01/01/1988 00:00:00
Restart Capture Flag: NO
Ping Pending: NO
Buffered Txn Count: 0
-- Xid Hash entry --
-- LOB Hash entry --
-- No TRIM LCR --
Unsupported Reason: Unknown
--- LCR Dump not possible ---
End knlcDumpCapCtx:*********************************************
*** 2011-08-20 14:21:41.810
knluSetStatus()+{
*** 2011-08-20 14:21:44.917
knlcapUpdate()+{
Updated streams$_capture_process
finished knlcapUpdate()+ }
finished knluSetStatus()+ }
knluGetObjNum()+
knlsmRaiseAlert: keltpost retval is 0
kadso = 0 0
KSV 1304 error in slave process
*** 2011-08-20 14:21:44.923
ORA-01304: subordinate process error. Check alert and trace logs
knlz_UsrrolDes()
knstdso: state object 0xb644b568, action 2
knstdso: releasing so 0xb644b568 for session 146, type 0
knldso: state object 0xa6d0dea0, action 2 memory 0x0
kadso = 0 0
knldso: releasing so 0xa6d0dea0
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01304: subordinate process error. Check alert and trace logs
Any suggestions???

Output of the above query:
==============================
CAPTURE_NAME STATUS ERROR_MESSAGE
STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
Alert log.xml
=======================
<msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
pid='30921'>
<txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
ORA-01304: subordinate process error. Check alert and trace logs
</txt>
</msg>
The orcl_cp01_30921.trc has the same thing posted in the first message. -
Oracle Streams - first_scn and start_scn
Hi,
My first_scn is 7669917207423 and start_scn is 7669991182403 in the DBA_CAPTURE view.
Once I start the capture, from which SCN will it start capturing from the archive logs?
Regards,

I am using Oracle Streams on version 10.2.0.4. It's an Oracle downstream setup. The capture as well as the apply is running on the target database.
Regards,
Below is the setup doc.
1.1 Create the Streams Queue
conn STRMADMIN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'NIG_Q_TABLE',
queue_name => 'NIG_Q',
queue_user => 'STRMADMIN');
END;
1.2 Create apply process for the Schema
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'NIG_Q',
    apply_name     => 'NIG_APPLY',
    apply_captured => TRUE);
END;
/
1.3 Setting up parameters for Apply
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'disable_on_error','n');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'parallelism','6');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_dynamic_stmts','Y');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_hash_table_size','1000000');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_TXN_BUFFER_SIZE',10);
/********** STEP 2.- Downstream capture process *****************/
2.1 Create the downstream capture process
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE (
queue_name => 'NIG_Q',
capture_name => 'NIG_CAPTURE',
rule_set_name => null,
start_scn => null,
source_database => 'PNID.LOUDCLOUD.COM',
use_database_link => true,
first_scn => null,
logfile_assignment => 'IMPLICIT');
END;
2.2 Setting up parameters for Capture
exec DBMS_CAPTURE_ADM.ALTER_CAPTURE (capture_name=>'NIG_CAPTURE',checkpoint_retention_time=> 2);
exec DBMS_CAPTURE_ADM.SET_PARAMETER ('NIG_CAPTURE','_SGA_SIZE','250');
2.3 Add the table level rule for capture
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'NIG.BUILD_VIEWS',
    streams_type    => 'CAPTURE',
    streams_name    => 'NIG_CAPTURE',
    queue_name      => 'STRMADMIN.NIG_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'PNID.LOUDCLOUD.COM');
END;
/
/**** Step 3 : Initializing SCN on Downstream database—start from here *************/
import
=================
impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part1_srm_expdp_%U.dmp table_exists_action=replace exclude=grant,statistics,ref_constraint logfile=NIG1.log status=300
/********** STEP 4.- Start the Apply process ********************/
sqlplus STRMADMIN
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY'); -
Resetting SCN from removed Capture Process
I've come across a problem in Oracle Streams where the Capture Processes seem to get stuck. There are no reported errors in the alert log and no trace files, but the capture process fails to continue capturing changes. It stays enabled, but in an awkward state where the OEM Console reports zeros across the board (0 messages, 0 enqueued), when in fact there had been accurate totals in the past.
Restarting the Capture process does no good. The Capture process seems to switch its state back and forth from Dictionary Initialization to Initializing and vice versa. The only thing that seems to kickstart Streams again is to remove the Capture process and recreate the same process.
However, my problem is that I want to set the start_scn of the new capture process to the captured_scn of the removed capture process, so that the new one can start from where the old one left off. But I'm getting an error that this cannot be performed (cannot capture from the specified SCN).
Am I understanding this correctly? Or should the new capture process start from where the removed one left off automatically?
Thanks

Hi,
I seem to have the same problem.
I now have a latency of roughly 3 days while nothing happened in the database, so I want to be able to set the capture process to a later SCN. Setting the start_scn gives me an error (I can't remember it now, unfortunately). Sometimes the capture process seems to get stuck in an archived log. It then takes a long time for it to go further, and when it does, it sprints through a bunch of logs before getting stuck again. During that time all the statuses look good, and no heavy CPU usage is observed. We saw that the capture builder has the highest CPU load, where I would expect the capture reader to be busy.
I am able to set the first_scn, so a rebuild of the LogMiner dictionary might help a bit. But then again: why would the capture process need such a long time to process archived logs where no relevant events are expected?
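For the dictionary-rebuild idea mentioned above, a minimal sketch would look like this (the capture name and final SCN are hypothetical; first_scn can only be moved forward to an SCN at which a dictionary build exists in the redo):

```sql
-- Perform a fresh LogMiner dictionary build in the redo stream
-- and note the SCN it returns (requires SET SERVEROUTPUT ON).
DECLARE
  build_scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => build_scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary build SCN: ' || build_scn);
END;
/

-- Move the capture's first_scn forward to that build SCN.
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'MY_CAPTURE',      -- hypothetical name
    first_scn    => 1234567890);       -- hypothetical: use the build SCN
END;
/
```

Raising first_scn also lets older archive logs and checkpoint metadata be purged.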
In my case the Streams solution is being considered as a candidate for a replication solution where Quest's SharePlex is considered too expensive and unable to meet the requirements. One main reason it is considered inadequate is that it is not able to catch up after a database restart or a heavy batch. Now it seems that our capture process might suffer from the same problem. I sincerely hope I'm wrong and it proves capable.
Regards,
Martien -
Capture process issue...archive log missing!!!!!
Hi,
Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and not proceeding past this point to capture updates made on the table.
We have accidentally lost archive logs, and have no backup of them.
Now I am going to recreate the capture process again.
How can I start the capture process from a new SCN?
And what is the better way to remove the archive log files from the central server, given that their SCNs are used by capture processes?
Thanks,
Faziarain
Edited by: [email protected] on Aug 12, 2009 12:27 AM

When using dbms_streams_adm to add a capture, also perform a dbms_capture_adm.build. You will then see a 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of that archivelog is the first SCN suitable for starting capture.
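The dictionary_begin check described above can be done with a query along these lines (a sketch; columns as in v$archived_log):

```sql
-- Find archived logs containing a LogMiner dictionary build;
-- the earliest FIRST_CHANGE# is the first usable FIRST_SCN
-- for a new capture process.
SELECT NAME, SEQUENCE#, FIRST_CHANGE#
FROM   V$ARCHIVED_LOG
WHERE  DICTIONARY_BEGIN = 'YES'
ORDER  BY FIRST_CHANGE#;
```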
'rman' is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use rman to purge the archives, then you need to check the minimum required SCN in your system by script and act accordingly.
Since 10g I recommend using rman, but nevertheless, here is the script I wrote in 9i, in the old times when rman was eating the archives needed by Streams with appetite.
#!/usr/bin/ksh
# program : watch_arc.sh
# purpose : check your archive directory and if actual percentage is > MAX_PERC
# then undertake the action coded by -a param
# Author : Bernard Polarski
# Date : 01-08-2000
# 12-09-2005 : added option -s MAX_SIZE
# 20-11-2005 : added option -f to check if an archive is applied on data guard site before deleting it
# 20-12-2005 : added option -z to check if an archive is still needed by logminer in a streams operation
# set -xv
#--------------------------- default values if not defined --------------
# put here default values if you don't want to code then at run time
MAX_PERC=85
ARC_DIR=
ACTION=
LOG=/tmp/watch_arch.log
EXT_ARC=
PART=2
#------------------------- Function section -----------------------------
get_perc_occup() {
  cd $ARC_DIR
  if [ $MAX_SIZE -gt 0 ];then
    # size is given in mb, we calculate all in K
    TOTAL_DISK=`expr $MAX_SIZE \* 1024`
    USED=`du -ks . | tail -1| awk '{print $1}'` # in Kb!
  else
    USED=`df -k . | tail -1| awk '{print $3}'` # in Kb!
    if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
      TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
    elif [ `uname -s` = AIX ] ;then
      TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
    elif [ `uname -s` = ReliantUNIX-N ] ;then
      TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
    else
      # works on Sun
      TOTAL_DISK=`df -b . | sed '/avail/d' | awk '{print $2}'`
    fi
  fi
  USED100=`expr $USED \* 100`
  USG_PERC=`expr $USED100 / $TOTAL_DISK`
  echo $USG_PERC
}
#------------------------ Main process ------------------------------------------
usage() {
cat <<EOF
Usage : watch_arc.sh -h
        watch_arc.sh -p <MAX_PERC> -e <EXTENSION> -l -d -m <TARGET_DIR> -r <PART>
                     -t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
                     -s <MAX_SIZE (meg)> -i <SID> -g -f
Note :
   -c compress file after move using either compress or gzip (if available)
      if -c is given without -m then file will be compressed in ARCHIVE_DIR
   -d Delete selected files
   -e Extension of files to be processed
   -f Check if log has been applied, requires -i <sid> and -g if v8
   -g Version 8 (use svrmgrl instead of sqlplus)
   -i Oracle SID
   -l List files that will be processed using -d or -m
   -h help
   -m move file to TARGET_DIR
   -p Max percentage above which action is triggered.
      Actions are of type -l, -d or -m
   -t ARCHIVE_DIR
   -s Perform action if size of target dir is bigger than MAX_SIZE (meg)
   -v report action performed in LOGFILE
   -r Part of files that will be affected by action :
      2=half, 3=a third, 4=a quarter .... [ default=2 ]
   -z Check if log is still needed by logminer (used in streams),
      requires -i <sid> and also -g for Oracle 8i

This program lists, deletes or moves half of all files whose extension is given [ default 'arc' ].
It checks the size of the archive directory, and if the percentage occupancy is above the given limit
it performs the action on the older half of the files.

How to use this prg :
      run this file from the crontab, say, each hour.

example
1) Delete archives sharing a common arch disk; when you are at 85% of 2500 meg, delete half of the files
   whose extension is 'arc' using the default affected part (default is -r 2) :
   0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
2) Delete archives sharing a common disk with other DBs in /archive; act at 90% of 140G, delete
   a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba in the POLDEV db (-i) to check they are
   applied (-f is a dataguard option) :
   watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
3) Delete archives of DB POLDEV when it reaches 75%; affect a third of the files, but connect to the DB to check that
   logminer does not need the archive (-z). This is useful in 9iR2 when using rman, as rman does not support 'delete input'
   in connection with logminer :
   watch_arc.sh -e arc -t /archive/standby/CITSPRD -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
#------------------------- Function section -----------------------------
if [ "x-$1" = "x-" ];then
usage
exit
fi
MAX_SIZE=-1 # disable this feature if it is not specifically selected
while getopts c:e:p:m:r:s:i:t:v:dhlfgz ARG
do
case $ARG in
e ) EXT_ARC=$OPTARG ;;
f ) CHECK_APPLIED=YES ;;
g ) VERSION8=TRUE;;
i ) ORACLE_SID=$OPTARG;;
h ) usage
exit ;;
c ) COMPRESS_PRG=$OPTARG ;;
p ) MAX_PERC=$OPTARG ;;
d ) ACTION=delete ;;
l ) ACTION=list ;;
m ) ACTION=move
TARGET_DIR=$OPTARG
if [ ! -d $TARGET_DIR ] ;then
echo "Dir $TARGET_DIR does not exits"
exit
fi;;
r) PART=$OPTARG ;;
s) MAX_SIZE=$OPTARG ;;
t) ARC_DIR=$OPTARG ;;
v) VERBOSE=TRUE
LOG=$OPTARG
if [ ! -f $LOG ];then
> $LOG
fi ;;
z) LOGMINER=TRUE;;
esac
done
if [ "x-$ARC_DIR" = "x-" ];then
echo "NO ARC_DIR : aborting"
exit
fi
if [ "x-$EXT_ARC" = "x-" ];then
echo "NO EXT_ARC : aborting"
exit
fi
if [ "x-$ACTION" = "x-" ];then
echo "NO ACTION : aborting"
exit
fi
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
if [ ! "x-$ACTION" = "x-move" ];then
ACTION=compress
fi
fi
if [ "$CHECK_APPLIED" = "YES" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
if [ "$VERSION8" = "TRUE" ];then
ret=`svrmgrl <<EOF
connect internal
select max(sequence#) from v\\$log_history ;
EOF`
LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
else
ret=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off
select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
EOF`
LAST_APPLIED=`echo $ret | awk '{print $1}'`
fi
elif [ "$LOGMINER" = "TRUE" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
var=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt\\$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log\\$ l
where a.ckpt_scn between l.first_change# and l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
dbms_output.put_line(alog);
end;
EOF`
# if there are no mandatory keep archives, instead of a number we just get "PL/SQL successful"
ret=`echo $var | awk '{print $1}'`
if [ ! "$ret" = "PL/SQL" ];then
LAST_APPLIED=$ret
else
unset LOGMINER
fi
fi
PERC_NOW=`get_perc_occup`
if [ $PERC_NOW -gt $MAX_PERC ];then
cd $ARC_DIR
cpt=`ls -tr *.$EXT_ARC | wc -w`
if [ ! "x-$cpt" = "x-" ];then
MID=`expr $cpt / $PART`
cpt=0
ls -tr *.$EXT_ARC |while read ARC
do
cpt=`expr $cpt + 1`
if [ $cpt -gt $MID ];then
break
fi
if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
if [ $VAR -gt $LAST_APPLIED ];then
continue
fi
fi
case $ACTION in
'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
fi ;;
'delete' ) rm $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
fi ;;
'list' ) ls -l $ARC_DIR/$ARC ;;
'move' ) mv $ARC_DIR/$ARC $TARGET_DIR
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
$COMPRESS_PRG $TARGET_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
fi
else
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
fi
fi ;;
esac
done
else
echo "Warning : The filesystem is not full due to archive logs !"
exit
fi
elif [ "x-$VERBOSE" = "x-TRUE" ];then
echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
fi -
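The sequence-number extraction buried in the loop above can be sanity-checked on its own. A minimal sketch, using hypothetical archive file names (not ones from the script's environment):

```shell
# Mimics the script's extraction: grab the digits between the last underscore
# and the extension, then strip any trailing non-digit residue.
for ARC in arch_1_12345.arc redo_987.dbf; do
  SEQ=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
  echo "$ARC -> sequence $SEQ"
done
# prints:
# arch_1_12345.arc -> sequence 12345
# redo_987.dbf -> sequence 987
```

This is what feeds the `[ $VAR -gt $LAST_APPLIED ]` comparison, so the archive naming convention must put the log sequence number just before the extension for the script to skip un-applied logs correctly.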
Capture process not enqueuing any message
DB Version : 10.2.0.4
I am trying to set up a simple capture process. The streams administrator (say APPS) is the queue owner as well as the base table owner. I perform the following steps:
1. Setup the queue using dbms_streams_adm.set_up_queue
2. Add a table rule using dbms_streams_adm.add_table_rules
3. Start the capture process using dbms_capture_adm.start_capture ( or from EM console)
I am not instantiating the base table explicitly, since the admin guide documents that 'dbms_streams_adm.add_table_rules' does this implicitly.
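For reference, the three steps above correspond to calls along these lines. This is only a sketch: the APPS owner comes from the post, but the table name EMP, the queue names, and the capture name are placeholders I am assuming, not taken from the original setup:

```sql
-- Hedged sketch of the setup steps; object names other than APPS are assumed
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'apps.streams_queue_table',
    queue_name  => 'apps.streams_queue');
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'apps.emp',        -- placeholder base table
    streams_type => 'capture',
    streams_name => 'capture_emp',     -- placeholder capture name
    queue_name   => 'apps.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture_emp');
END;
/
```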
Once I insert / update the base table, v$streams_capture shows TOTAL_PREFILTER_KEPT getting incremented properly.
Still I don't see any message getting enqueued in the streams queue.
Appreciate some help to diagnose the issue.
Hello,
Do you have either a propagation defined on the same capture queue (streams queue) or an apply process on the capture queue? From your explanation, it looks like you have only a streams queue and a capture process with certain capture rules. Let me know.
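If there is indeed no consumer on the queue yet, attaching a local apply process to it would look roughly like this. Sketch only: every object name below (EMP, APPLY_EMP, the queue name, and the source database SRCDB) is a placeholder assumption, not something taken from the post:

```sql
-- Hedged sketch: add an apply process as a consumer of the capture queue
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'apps.emp',            -- placeholder base table
    streams_type    => 'apply',
    streams_name    => 'apply_emp',           -- placeholder apply name
    queue_name      => 'apps.streams_queue',  -- the capture queue from step 1
    include_dml     => TRUE,
    include_ddl     => FALSE,
    source_database => 'SRCDB');              -- placeholder source DB name
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_emp');
END;
/
```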
Thanks,
Rijesh -
Capturing DVCAM in FCP 6.0.2 and encountering strange capture behavior
I have FCP 6.0.2 and OSX 10.5.2 and QT 7.3.1. I have been capturing several DVCAM cassettes using my Sony DSR-20 deck. Although I have done this countless times before in earlier versions of FCP, I am encountering some strange repetitive behavior. I am capturing 30 minute clips one at a time. When I use batch capture it will cue the tape up properly to the in point...and then start capturing until it gets to about 10-12 minutes in, and then capture unexpectedly stops, no dialogue box, the tape rewinds and starts capturing again from the original in point. On this second capture, the tape sails past the 10 minute mark and keeps going to the end of the 30 minute clip. It then stops, gives me the dialogue box that it has successfully captured. And it has.
But every DVCAM tape I captured today exhibited the same behavior. Capture would be successful until about 10 minutes in, then FCP aborts (no dropped-frames message, no dialogue box), rewinds the tape back to the in point, tries again, and this time succeeds, capturing the entire clip on the second pass. Note that at the 10-minute mark there is no scene change and no camera start/stop.
Have other users experienced this issue? And if so, is there a workaround or a possible patch forthcoming from FCP?
Many thanks,
John
Yes, each tape has an in and out point defined. In my 6 years of editing with Final Cut and DVCAM tapes I've never encountered this issue in the capturing process until now. I will have to see in future weeks with other captures whether this is an ongoing issue or not, but at least I can capture for now.
-
Excise invoice capture process
Hi,
I want to know about the excise invoice capture process for a depot plant: which t-code is used for a depot plant, how to do Part 1 and Part 2, and also the reversal process for the same.
Also, what is the difference between the excise invoice capture process for a depot and a non-depot plant?
regards,
zafar
Hi Zafar,
There are no Part 1 and Part 2 entries in RG23D for the depot scenario. You can update RG23D at the time of MIGO or with J1IG ("Capture excise invoice for depot").
For cancelling you can use the same transaction. To send goods out from the depot plant, use t-code J1IJ to update RG23D.
The rest of the process remains the same: extraction via J2I5 and printing via J2I6.
BR -
Internal Error when creating Capture Process
Hi,
I get the following when trying to create my capture process:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'capture_queue_table',
queue_name => 'capture_queue');
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'apply_queue_table',
queue_name => 'apply_queue');
END;
/
BEGIN
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcbgtcr_4], [32492], [0], [1], [],
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 408
ORA-06512: at line 2
Any ideas?
Cheers,
Warren
Make sure that you have upgraded to the 9.2.0.2 patchset and, as part of the migration to 9202, that you have run the catpatch.sql script.
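Since the fix hinges on the patchset level, it may be worth confirming what the database actually reports before recreating the queues. A quick check against standard dictionary views (nothing specific to this post is assumed here):

```sql
-- Confirm the RDBMS version, and that components are VALID after catpatch.sql
SELECT banner FROM v$version;
SELECT comp_id, version, status FROM dba_registry;
```

If any component in DBA_REGISTRY is not VALID after applying the patchset, rerunning catpatch.sql (per the patchset readme) is the usual next step.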