Limit the Capture process to just INSERTS
Hi,
Source: 10.2.0.3
Downstream Capture DB: 10.2.0.3
Destination DB: 11.1.0.7
Is it possible to limit the Streams Capture process to only include INSERTS? We are only interested in INSERTS into the table and are not concerned with capturing any updates or deletes that are performed against the table.
When configuring the capture and apply I've set:
include_dml => true,
Is it possible to have the capture and apply processes run at a finer granularity and capture and apply only the INSERTs performed against the source database tables?
Thanks in advance.
Go to Morgan's Library at www.psoug.org and look up DBMS_STREAMS_ADM.
Scroll down to where the demo shows "and_condition => ':lcr.get_command_type() != ''DELETE''');"
That should point you in the right direction.
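For example, here is a minimal sketch (untested; the table, queue, and capture names below are placeholders) of using the and_condition parameter of DBMS_STREAMS_ADM.ADD_TABLE_RULES to keep only INSERTs:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name    => 'scott.emp',           -- placeholder table
    streams_type  => 'capture',
    streams_name  => 'my_capture',          -- placeholder capture name
    queue_name    => 'strmadmin.my_queue',  -- placeholder queue
    include_dml   => TRUE,
    include_ddl   => FALSE,
    -- restrict the DML rule so only INSERT LCRs satisfy it
    and_condition => ':lcr.get_command_type() = ''INSERT''');
END;
/
```

A matching and_condition would normally be set on the apply-side table rule as well, so that neither process wastes work on updates and deletes.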
Similar Messages
-
Rman-08137 can't delete archivelog because the capture process need it
When I use the RMAN utility to delete the old archive logs on the server, it shows: RMAN-08137: can't delete archivelog because the capture process needs it. How do I resolve this problem?
It is likely that the "extract" process still requires those archive logs, as it is monitoring transactions that have not yet been "captured" and written out to a GoldenGate trail.
Consider the case of doing the following: ggsci> add extract foo, tranlog, begin now
After pressing "return" on that "add extract" command, any new transactions will be monitored by GoldenGate. Even if you never start extract foo, the GoldenGate + rman integration will keep those logs around. Note that this GG+rman integration is a relatively new feature, as of GG 11.1.1.1 => if "add extract foo" prints out "extract is registered", then you have this functionality.
Another common "problem" is deleting "extract foo", but forgetting to "unregister" it. For example, to properly "delete" a registered "extract", one has to run "dblogin" first:
ggsci> dblogin userid <userid> password <password>
ggsci> delete extract foo
However, if you just do the following, the extract is deleted, but not unregistered. Only a warning is printed.
ggsci> delete extract foo
<warning: to unregister, run the command "unregister...">
So then one just has to follow the instructions in the warning:
ggsci> dblogin ...
ggsci> unregister extract foo logretention
But what if you didn't know the name of the old extracts, or were not even aware if there were any existing registered extracts? You can run the following to find out if any exist:
sqlplus> select count(*) from dba_capture;
The actual extract name is not exactly available, but it can be inferred:
sqlplus> select capture_name, capture_user from dba_capture;
<blockquote>
CAPTURE_NAME CAPTURE_USER
================ ==================
OGG$_EORADF4026B1 GGS
</blockquote>
In the above case, my actual "capture" process was called "eora". All OGG processes will be prefixed by OGG in the "capture_name" field.
Btw, you can disable this "logretention" feature by adding in a tranlog option in the param file,
TRANLOGOPTIONS LOGRETENTION DISABLED
Or just manually "unregister" the extract. (Not doing a "dblogin" before "add extract" should also work in theory... but it doesn't. The extract is still registered after startup. Not sure if that's a bug or a feature.)
Cheers,
-Michael -
Scripting the capture process in Vivado 2015.1 ILA using the tcl
Hello,
I'm wondering how can I use the TCL script in order to automate the capturing process.
The way it works so far is that I have to change the condition manually through the GUI, start triggering, wait for it to upload the waveform (I would really like to bypass this step!), and then download the captured data to a .csv file. I've tried the attached script to automate two captures, but it didn't work :( I got the same results for test0.csv and test1.csv.
any ideas?!
set_property TRIGGER_COMPARE_VALUE eq5'u1 [get_hw_probes state_reg__0 -of_objects [get_hw_ilas hw_ila_1]]
wait_on_hw_ila hw_ila_1
run_hw_ila hw_ila_1
wait_on_hw_ila hw_ila_1
write_hw_ila_data -csv_file d:/pss/test0.csv [current_hw_ila_data]
set_property TRIGGER_COMPARE_VALUE eq5'u10 [get_hw_probes state_reg__0 -of_objects [get_hw_ilas hw_ila_1]]
wait_on_hw_ila hw_ila_1
run_hw_ila hw_ila_1
wait_on_hw_ila -timeout 0 hw_ila_1
write_hw_ila_data -csv_file d:/pss/test1.csv [current_hw_ila_data]
Hello Pratham,
I have tried that before. It works perfectly for saving one shot of the ILA. However, if you just copy and paste it, it will fail for the 2nd capture (it will save the same thing).
What I'm particularly looking for is saving more than one sample; let's say I would like to automate 1000 captures.
The problem that I've encountered follows as this:
When you try to capture more than one sample, it doesn't stop the trigger, and when it does, it saves the same thing. In order to stop it, it needs to upload the waveform (which is really time consuming, and I don't need the tool to display it in order to capture the .csv data!), so it slows down the capturing process.
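As an untested sketch of a workaround (it assumes the upload_hw_ila_data Tcl command is available in your Vivado version, and that the ILA instance is named hw_ila_1), forcing a fresh upload on each iteration should avoid writing the same stale buffer twice; the upload itself is still needed to get the samples off the device, but the waveform display step is skipped:

```tcl
# Arm the ILA, wait for the trigger, upload the new buffer, and
# write each shot to its own CSV file.
set ila [get_hw_ilas hw_ila_1]
for {set i 0} {$i < 1000} {incr i} {
    run_hw_ila $ila            ;# arm the trigger
    wait_on_hw_ila $ila        ;# block until the ILA has triggered and filled
    upload_hw_ila_data $ila    ;# pull the fresh capture from the device
    write_hw_ila_data -csv_file d:/pss/test$i.csv [current_hw_ila_data]
}
```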
In ChipScope this was very easy! There was an option called "repetitive trigger", and it would capture repeatedly to log1, log2, ... and so forth. That is what I'm looking for! -
Is it possible to move some of the capture processes to another rac node?
Hi All,
Is it possible to move some of the ODI (Oracle Data Integrator) capture processes running on node1 to node2. Once moved does it work as usual or not? If its possible please provide me with steps.
Appreciate your response
Best Regards
SK.
Hi Cezar,
Thanks for your post. I have a related question regarding this,
Is it really necessary to have multiple capture and multiple apply processes, one for each schema in ODI? Because when set to automatic configuration, ODI seems to create a capture and a related apply process for each schema, which I guess leads to the specific performance problem (high CPU etc.) I mentioned in my other post: Re: Is it possible to move some of the capture processes to another rac node?
Is there way to use just one capture and one apply process for all of the schemas in ODI?
Thanks a million.
Edited by: oyigit on Nov 6, 2009 5:31 AM -
It is nice that the search includes all the fields mentioned, but what about when there are 600 results? I tried a few ways to filter them down, unsuccessfully.
It would be nice to limit the search to just certain fields too.
Windchimes74, oh wow! That is a lot of usage in a short amount of time. We do have a great application called Family Base. You can restrict the amount of time/usage for a particular device. By doing this you can put a limit on when Mom is online. Once the jetpack reaches that limit, it will not be allowed to download anything else. This package is $5 a month for the whole account. It is an awesome feature that I myself use for many lines on my account.
RobinD_VZW
Follow us on twitter @VZWSupport -
Limit the process power for particular process
Hi,
I would like to know how to limit the cpu process for particular application in Solaris 9.
Regards,
WONG
Hi,
How can I start SRM in Solaris?
/usr/sadm/lib/wbem/com/sun/wbem/solarisprovider/srm
I found that the above path is a directory.
Kindly tell me an alternate path.
Regards,
Raja -
The (stopped) Capture process & RMAN
Hi,
We have a working 1-table bi-directional replication with Oracle 10.2.0.4 on SPARC/Solaris.
Every night, RMAN backs up the database and collects/removes the archive logs (delete all input).
My understanding from Oracle Streams Concepts and Administration is that RMAN will not remove an archived log needed by a capture process (I think for the LogMiner session).
Fine.
But now, if I stop the Capture process for a long time (more than a day), for whatever reason,
it's not clear what the behaviour is...
I'm afraid that:
- RMAN will collect the archived logs (since there is no more logminer session because of the stopped capture process)
- When I'll restart the capture process, it will try to start from the last known SCN and the (new) logminer session will not find the redo logs.
If that's correct, is it possible to restart the Capture process with an updated SCN so that I do not run into this problem ?
How to find this SCN ?
(In the case of a long interruption, we have a specific script which synchronize the table. It would be run first before restarting the capture process)
Thanks for your answers.
JD
RMAN backups in 10g are Streams-aware: RMAN will not delete any logs that contain the required_checkpoint_scn and above. This is true only if the capture process is running in the same database (local capture) where the RMAN backup is running.
If you are using downstream capture, then RMAN is not aware of what logs that streams needs and may delete those logs. One additional reason why logs may be deleted is due to space pressure in flash recovery area.
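As a sketch of how to find the SCN asked about above (view and column names from the 10.2 data dictionary; verify against your release), you can query the lowest SCN the capture process still requires and the archive logs that cover it:

```sql
-- The lowest SCN each capture process still requires
SELECT capture_name, required_checkpoint_scn
FROM   dba_capture;

-- Archive logs at or above that SCN must be kept for the capture process
SELECT name, first_change#, next_change#
FROM   v$archived_log
WHERE  next_change# > (SELECT MIN(required_checkpoint_scn) FROM dba_capture);
```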
Please take a look at the following documentation:
Oracle® Streams Concepts and Administration
10g Release 2 (10.2)
Part Number B14229-04
CHAPTER 2 - Streams Capture Process
Section - RMAN and Archived Redo Log Files Required by a Capture Process -
Capture process issue...archive log missing!!!!!
Hi,
The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and does not proceed beyond this to capture updates made to the table.
We have accidentally lost some archive logs and have no backups of them.
Now I am going to recreate the capture process.
How can I start the capture process from a new SCN?
And what is the better way to remove archive log files from the central server, given that
an SCN is still used by capture processes?
Thanks,
Faziarain
Edited by: [email protected] on Aug 12, 2009 12:27 AM
When using DBMS_STREAMS_ADM to add a capture, also perform a DBMS_CAPTURE_ADM.BUILD. You will then see 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of that archive log is the first SCN suitable for starting the capture.
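As a sketch of that check (a local 10g capture assumed; not tested against your instance):

```sql
-- Write a fresh LogMiner dictionary to the redo stream and report
-- the SCN at which the build started.
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary build first_scn: ' || scn);
END;
/

-- Archive logs that begin a dictionary build; their first_change#
-- values are valid first_scn choices for a new capture process.
SELECT name, first_change#
FROM   v$archived_log
WHERE  dictionary_begin = 'YES'
ORDER  BY first_change#;
```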
'rman' is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use rman to purge the archives, then you need to check the minimum required SCN in your system by script and act accordingly.
Since 10g I recommend using rman, but nevertheless, here is the script I wrote in 9i, in the old days when rman would eat the archives needed by Streams with appetite.
#!/usr/bin/ksh
# program : watch_arc.sh
# purpose : check your archive directory and if actual percentage is > MAX_PERC
# then undertake the action coded by -a param
# Author : Bernard Polarski
# Date : 01-08-2000
# 12-09-2005 : added option -s MAX_SIZE
# 20-11-2005 : added option -f to check if an archive is applied on data guard site before deleting it
# 20-12-2005 : added option -z to check if an archive is still needed by logminer in a streams operation
# set -xv
#--------------------------- default values if not defined --------------
# put here default values if you don't want to code then at run time
MAX_PERC=85
ARC_DIR=
ACTION=
LOG=/tmp/watch_arch.log
EXT_ARC=
PART=2
#------------------------- Function section -----------------------------
get_perc_occup()
{
cd $ARC_DIR
if [ $MAX_SIZE -gt 0 ];then
# size is given in mb, we calculate all in K
TOTAL_DISK=`expr $MAX_SIZE \* 1024`
USED=`du -ks . | tail -1| awk '{print $1}'` # in Kb!
else
USED=`df -k . | tail -1| awk '{print $3}'` # in Kb!
if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
elif [ `uname -s` = AIX ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
elif [ `uname -s` = ReliantUNIX-N ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
else
# works on Sun
TOTAL_DISK=`df -b . | sed '/avail/d' | awk '{print $2}'`
fi
fi
USED100=`expr $USED \* 100`
USG_PERC=`expr $USED100 / $TOTAL_DISK`
echo $USG_PERC
}
#------------------------ Main process ------------------------------------------
usage()
{
cat <<EOF
Usage : watch_arc.sh -h
watch_arc.sh -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
-t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
-s <MAX_SIZE (meg)> -i <SID> -g -f
Note :
-c compress file after move using either compress or gzip (if available)
if -c is given without -m then file will be compressed in ARCHIVE DIR
-d Delete selected files
-e Extention of files to be processed
-f Check if log has been applied, required -i <sid> and -g if v8
-g Version 8 (use svrmgrl instead of sqlplus /
-i Oracle SID
-l List file that will be processing using -d or -m
-h help
-m move file to TARGET_DIR
-p Max percentage above which action is triggered.
Actions are of type -l, -d or -m
-t ARCHIVE_DIR
-s Perform action if size of target dir is bigger than MAX_SIZE (meg)
-v report action performed in LOGFILE
-r Part of files that will be affected by action :
2=half, 3=a third, 4=a quarter .... [ default=2 ]
-z Check if log is still needed by logminer (used in streams),
it requires -i <sid> and also -g for Oracle 8i
This program lists, deletes or moves half of all files whose extension is given [default 'arc'].
It checks the size of the archive directory and, if the percentage occupancy is above the given limit,
it performs the action on the older half of the files.
How to use this program :
run this file from the crontab, say, each hour.
example
1) Delete archives sharing a common arch disk: when you are at 85% of 2500 MB, delete half of the files
whose extension is 'arc', using the default affected fraction (default is -r 2)
0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
2) Delete archives sharing a common disk with other DBs in /archive: act when at 90% of 140 GB, and affect
a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba in the POLDEV db (-i) to check if they are
applied (-f is a dataguard option)
watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
3) Delete archives of DB POLDEV when it reaches 75%, affecting a third of the files, but connect to the DB to check that
logminer does not still need each archive (-z). This is useful in 9iR2 when using rman, as rman does not support "delete input"
in connection with LogMiner.
watch_arc.sh -e arc -t /archive/standby/CITSPRD -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
#------------------------- Function section -----------------------------
if [ "x-$1" = "x-" ];then
usage
exit
fi
MAX_SIZE=-1 # disable this feature if it is not specifically selected
while getopts c:e:p:m:r:s:i:t:v:dhlfgz ARG
do
case $ARG in
e ) EXT_ARC=$OPTARG ;;
f ) CHECK_APPLIED=YES ;;
g ) VERSION8=TRUE;;
i ) ORACLE_SID=$OPTARG;;
h ) usage
exit ;;
c ) COMPRESS_PRG=$OPTARG ;;
p ) MAX_PERC=$OPTARG ;;
d ) ACTION=delete ;;
l ) ACTION=list ;;
m ) ACTION=move
TARGET_DIR=$OPTARG
if [ ! -d $TARGET_DIR ] ;then
echo "Dir $TARGET_DIR does not exits"
exit
fi;;
r) PART=$OPTARG ;;
s) MAX_SIZE=$OPTARG ;;
t) ARC_DIR=$OPTARG ;;
v) VERBOSE=TRUE
LOG=$OPTARG
if [ ! -f $LOG ];then
> $LOG
fi ;;
z) LOGMINER=TRUE;;
esac
done
if [ "x-$ARC_DIR" = "x-" ];then
echo "NO ARC_DIR : aborting"
exit
fi
if [ "x-$EXT_ARC" = "x-" ];then
echo "NO EXT_ARC : aborting"
exit
fi
if [ "x-$ACTION" = "x-" ];then
echo "NO ACTION : aborting"
exit
fi
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
if [ ! "x-$ACTION" = "x-move" ];then
ACTION=compress
fi
fi
if [ "$CHECK_APPLIED" = "YES" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
if [ "$VERSION8" = "TRUE" ];then
ret=`svrmgrl <<EOF
connect internal
select max(sequence#) from v\\$log_history ;
EOF`
LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
else
ret=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off
select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
EOF`
LAST_APPLIED=`echo $ret | awk '{print $1}'`
fi
elif [ "$LOGMINER" = "TRUE" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
var=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt\\$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log\\$ l
where a.ckpt_scn between l.first_change# and l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
dbms_output.put_line(alog);
end;
EOF`
# if there are no mandatory keep archives, instead of a number we just get the "PL/SQL successful" message
ret=`echo $var | awk '{print $1}'`
if [ ! "$ret" = "PL/SQL" ];then
LAST_APPLIED=$ret
else
unset LOGMINER
fi
fi
PERC_NOW=`get_perc_occup`
if [ $PERC_NOW -gt $MAX_PERC ];then
cd $ARC_DIR
cpt=`ls -tr *.$EXT_ARC | wc -w`
if [ ! "x-$cpt" = "x-" ];then
MID=`expr $cpt / $PART`
cpt=0
ls -tr *.$EXT_ARC |while read ARC
do
cpt=`expr $cpt + 1`
if [ $cpt -gt $MID ];then
break
fi
if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
if [ $VAR -gt $LAST_APPLIED ];then
continue
fi
fi
case $ACTION in
'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
fi ;;
'delete' ) rm $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
fi ;;
'list' ) ls -l $ARC_DIR/$ARC ;;
'move' ) mv $ARC_DIR/$ARC $TARGET_DIR
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
$COMPRESS_PRG $TARGET_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
fi
else
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
fi
fi ;;
esac
done
else
echo "Warning : The filesystem is full, but not due to archive logs !"
exit
fi
elif [ "x-$VERBOSE" = "x-TRUE" ];then
echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
fi -
Location of Capture Process and Perf Overhead
Hi,
We are just starting to look at Streams technology. I am reading the doc and it implies that the capture process is run on the source database node. I am concerned of the overhead on the OLTP box. I have a few questions I was hoping to get clarification on.
1. Can I send the redo log to another node/db with data dictionary info and run the capture there? I would like to offload the perf overhead to another box and I thought Logminer could do it, so why not Streams.
2. If I run the capture process on one node/db can the initial queue I write to be on another node/db or is it implicit to where I run the capture process? I think I know this answer but would like to hear yours.
3. Are there any performance metrics on the cost of the capture process to an OLTP system? I realize there are many variables but am wondering if I should even be concerned with offloading the capture process.
Many thanks in advance for your time.
Regards,
Tom
In the current release, Oracle Streams performs all capture activities at the source site. The ability to capture the changes from the redo logs at an alternative site is planned for a future release. Captured changes are stored in an in-memory buffer queue on the local database. Multi-CPU servers with enough available memory should be able to handle the overhead of capture.
-
Capture Process hangs and LOGMINER Stops - "wait for transaction" ???
HI all
Any ideas why LogMiner would stop mining the logs at the capture site DB (it just hangs, 40 short of the current archive log)?
The Capture process has a status of Capturing Changes,
and the wait event on the Capture process is "wait for transaction".
How do I diagnose what's wrong with the Capture process? It's been this way for 4 days!
Hi
Yes, we have had to explicitly register archive logs also.
Unfortunately this archive log is registered, so I am not sure. It appears to have been the result of a large DML transaction, and I am not 100% sure whether the archive log is possibly corrupt (I doubt it, as in 5 years as a DBA I have not once hit a corruption - but there's always a first).
Any thoughts on how to proceed ? -
Capture process different on Mac vs. Windows???
Hello -
Regarding the "capturing" process with PPro CC 2014 using the Mac OSX (Mavericks) and Windows 7 Ultimate 64bit:
MAC - When capturing, it does not seem to want to capture the whole tape if there is an unrecorded section between recorded footage. When it comes to such a section, it stops recording. At this point, I can click "cancel" and hit the "record" icon to have the capture process continue, but this breaks the footage into separate files; not what I am wanting or needing.
WINDOWS - When doing the same as above, the whole tape will be recorded / captured ... unrecorded sections and all ... as one file.
Scene Detect is not selected in either OS.
Is there a setting that is different on the Mac version vs. the Windows version that I have overlooked, or is this normal?
Have the workaround: create a PostScript file to distill in Distiller.
The problem seems to be with InDesign's PDF presets commands.
The problem also displays on my Apple 17in laptop, so it is not just a PC problem. -
Capture process aborted with ORA-00604
Hi all,
I am new to Oracle Streams and am trying to set up Streams on a single DB instance
from one schema to another.
The capture process always aborts with the following error messages as soon as I make a change in the associated table. I was following the demo at the link
http://blogs.ittoolbox.com/oracle/guide/archives/oracle-streams-configuration-change-data-capture-13501.
I also executed the DBMS_CAPTURE_ADM.BUILD();
procedure to set up the LogMiner data dictionary, but it does not help.
ORA-00604: error occurred at recursive SQL level 1
ORA-00979: not a GROUP BY expression
ORA-06512: at "SYS.LOGMNR_KRVRDLUID3", line 1799
ORA-06512: at line 1
Can anyone help me with this, since I am stuck here?
Thanks in advance.
I would use the 10.2.0.3 version with Streams. Just install the patch and verify whether the problem still exists.
Serge -
Capture process not enqueuing any message
DB Version : 10.2.0.4
I am trying to setup a simple capture process. The streams administrator (say APPS) is the queue owner as well as base table owner. I perform the following steps :
1. Setup the queue using dbms_streams_adm.set_up_queue
2. Add a table rule using dbms_streams_adm.add_table_rules
3. Start the capture process using dbms_capture_adm.start_capture (or from the EM console)
I am not instantiating the base table explicitly, as it is documented in the admin guide that 'dbms_streams_adm.add_table_rules' will implicitly do the same.
Once I insert / update the base table, v$streams_capture shows TOTAL_PREFILTER_KEPT getting incremented properly.
Still I don't see any message getting enqueued in the streams queue.
Appreciate some help to diagnose the issue.
Hello,
Do you have either a propagation defined on the same capture queue (streams queue) or an apply process on the capture queue? From your explanation, it looks like you have only a streams queue and a capture process with certain capture rules. Let me know.
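In addition, a quick sanity check (a sketch using the 10.2 dynamic views; verify the column names in your release) is to see whether the capture process is actually enqueuing into the buffered queue:

```sql
-- Messages currently in the buffered (in-memory) queues
SELECT queue_schema, queue_name, num_msgs, spill_msgs
FROM   v$buffered_queues;

-- What the capture process has captured versus enqueued so far
SELECT capture_name, state,
       total_messages_captured, total_messages_enqueued
FROM   v$streams_capture;
```

If total_messages_captured grows while total_messages_enqueued stays at zero, the changes are being filtered out by the rules rather than enqueued.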
Thanks,
Rijesh -
Capture process created by using DBMS_CAPTURE_ADM package won't enqueue
I created a Capture process using DBMS_CAPTURE_ADM.CREATE_CAPTURE on an Oracle 9.2.0.6 database. It captured events but didn't enqueue them. My understanding of the difference between creating the Capture with the DBMS_CAPTURE_ADM package and with the DBMS_STREAMS_ADM package is that the former requires the DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION procedure (or its schema or global level equivalent). Are there other differences? Any idea why the Capture process won't enqueue? Any help is appreciated.
To implement Streams within the same database, just skip everything about propagation. Only one queue is set up, and the apply process queue name is the capture process queue name. I used this technique to set up Streams to MQ Series:
http://www.smenu.org/tutorial_streams_14_MQ.html -
Several captures processes per queue
Hi everybody,
I've been trying out Streams configuration on 10gR2 according to the tutorial by Sanjay Mishra on OTN and ... everything works great.
But...
I've tried to configure several apply processes reading from one queue, and several capture processes sending data to one single queue, and I can't get it to work. Is there something special to take care of, or is it supposed to be this way?
Thanks in advance - Nicolas
Hi Nicolas,
If all the tables are stored in the same source database, you can create just one capture process and use rules to filter the tables you'd like to include in your replication.
For example, let's suppose you have a schema called 'HR' and you would like to capture the changes for tables TAB1 and TAB2 only.
After setting your queue you have to follow these steps:
1) Create a rule set
begin
dbms_rule_adm.create_rule_set(rule_set_name => 'capture_ruleset',
evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT');
end;
2) Create the rules for TAB1 and TAB2
begin
dbms_rule_adm.create_rule(rule_name =>'TAB1_RULE',
condition => ':dml.get_object_owner()=''HR'' and :dml.get_object_name()=''TAB1''');
dbms_rule_adm.create_rule(rule_name =>'TAB2_RULE',
condition => ':dml.get_object_owner()=''HR'' and :dml.get_object_name()=''TAB2''');
end;
3) Add the rules to your rule set
begin
dbms_rule_adm.add_rule(rule_set_name=>'CAPTURE_RULESET',rule_name=>'TAB1_RULE');
dbms_rule_adm.add_rule(rule_set_name=>'CAPTURE_RULESET',rule_name=>'TAB2_RULE');
end;
4) Create the capture process
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'streams_queue',
capture_name => 'my_capture',
rule_set_name => 'capture_ruleset',
start_scn => NULL,
source_database => NULL,
use_database_link => false,
first_scn => NULL);
END;
And that's it!
If you need to add another table, you just create a new rule and add it to your capture rule set. Also, don't forget to prepare the table for instantiation and to set up supplemental logging.
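Those last two steps might be sketched as follows (HR.TAB1 from the example above; adjust the supplemental logging level to what your apply side actually needs):

```sql
-- Prepare the table for instantiation (records the instantiation SCN
-- so the apply side knows where to start)
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'HR.TAB1');
END;
/

-- Add supplemental logging so the redo carries the key columns
-- needed to identify rows at the apply site
ALTER TABLE hr.tab1 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```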
Let me know if you have any questions,
Aldo