Resetting SCN from removed Capture Process
I've come across a problem in Oracle Streams where the Capture Processes seem to get stuck. There are no reported errors in the alert log and no trace files, but the capture process fails to continue capturing changes. It stays enabled, but in an awkward state where the OEM Console reports zeros across the board (0 messages, 0 enqueued), when in fact there had been accurate totals in the past.
Restarting the Capture process does no good. The Capture process seems to switch its state back and forth from Dictionary Initialization to Initializing and vice versa. The only thing that seems to kickstart Streams again is to remove the Capture process and recreate the same process.
However, my problem is that I want to set the start_scn of the new capture process to the captured_scn of the removed capture process, so that the new one can start from where the old one left off. However, I'm getting an error that this cannot be performed (cannot capture from specified SCN).
Am I understanding this correctly? Or should the new capture process start from where the removed one left off automatically?
Thanks
Hi,
I seem to have the same problem.
I now have a latency of roughly 3 days during which nothing happened in the database, so I want to be able to set the capture process to a later SCN. Setting the start_scn gives me an error (I can't remember it now, unfortunately). Sometimes the capture process seems to get stuck in an archived log. It then takes a long time to move on, and when it does it sprints through a bunch of logs before getting stuck again. During that time all the statuses look good, and no heavy CPU usage is monitored. We saw that the capture builder has the highest CPU load, where I would expect the capture reader to be the busy one.
I am able to set the first_scn, so a rebuild of the LogMiner dictionary might help a bit. But then again: why would the capture process need such a long time to process the archived logs when no relevant events are expected in them?
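For reference, rebuilding the LogMiner dictionary and moving first_scn forward usually looks like the sketch below. This is hedged: 'MY_CAPTURE' and the SCN value are placeholders, and the actual first_scn must be the value returned by your own DBMS_CAPTURE_ADM.BUILD call.

```sql
-- Build a fresh copy of the LogMiner dictionary into the redo stream;
-- the OUT parameter returns the first SCN usable by a capture process.
SET SERVEROUTPUT ON
DECLARE
  v_first_scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => v_first_scn);
  DBMS_OUTPUT.PUT_LINE('New first SCN: ' || v_first_scn);
END;
/

-- Then move the capture process forward to that SCN
-- ('MY_CAPTURE' is a placeholder capture name).
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'MY_CAPTURE',
    first_scn    => 123456789);  -- substitute the value returned by BUILD
END;
/
```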
In my case, Streams is being evaluated as a replication solution, where Quest's SharePlex is considered too expensive and unable to meet the requirements. One main reason SharePlex is considered inadequate is that it is not able to catch up after a database restart following a heavy batch. Now it seems that our capture process might suffer from the same problem. I sincerely hope I'm wrong and it proves to be capable.
Regards,
Martien
Similar Messages
-
Good way to map SCN = LCR in Capture process?
How does one determine which SCN an LCR comes from? I found a number of undocumented yet potentially relevant views (v$LOGMNR_TRANSACTION, x$lcr, etc.), but... well, being undocumented, I have to make a number of guesses. grin
We are attempting to disable the Propagation/apply process once a specific SCN has been applied. If you have another suggestion how to accomplish this, please let me know. I have been trying to make my way through the various online documents, but as you probably are well aware of, there is a lot of information to digest.
TIA
Thanks. The tip about dbms_apply_adm is a good one, and I am looking more into it. I'm having trouble with dba_apply_progress, though. Or, more accurately, having trouble finding a captured SCN that I trust. dba_capture on the downstream database (not the target) shows me a lower number for captured_scn than for last_enqueued_scn. Based on what I see in the target, I am leaning towards trusting the enqueued SCN and dba_apply_progress.APPLIED_MESSAGE_NUMBER.
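A quick way to compare the candidate SCNs side by side is a pair of queries like this (a sketch; the columns are the standard ones in DBA_CAPTURE and DBA_APPLY_PROGRESS):

```sql
-- On the capture side: the SCN checkpoints the capture process tracks.
SELECT capture_name, captured_scn, applied_scn, last_enqueued_scn
  FROM dba_capture;

-- On the apply side: the highest SCN actually applied so far.
SELECT apply_name, applied_message_number
  FROM dba_apply_progress;
```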
-
Capture process issue...archive log missing!!!!!
Hi,
Oracle Streams capture process is alternating between INITIALIZING and DICTIONARY INITIALIZATION state and not proceeding after this state to capture updates made on table.
We are accidentally missing some archive logs and have no backups of them.
Now I am going to recreate the capture process again.
How can I start the capture process from a new SCN?
And what is the better way to remove the archive log files from the central server, given that
their SCNs are still used by capture processes?
Thanks,
Faziarain
Edited by: [email protected] on Aug 12, 2009 12:27 AM
When using dbms_streams_adm to add a capture process, also perform a dbms_capture_adm.build. You will then see a 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of that archive log is the first suitable SCN for starting capture.
'rman' is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use rman to purge the archives, then you need to check the minimum required SCN in your system by script and act accordingly.
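The two checks described above can be run with queries along these lines (a sketch; required_checkpoint_scn exists in DBA_CAPTURE from 10g onwards):

```sql
-- Archive logs whose first_change# can serve as a capture starting point,
-- i.e. those that begin with a LogMiner dictionary build.
SELECT name, first_change#, next_change#
  FROM v$archived_log
 WHERE dictionary_begin = 'YES';

-- Minimum SCN still required across all capture processes;
-- archives covering SCNs at or above this value must not be purged.
SELECT MIN(required_checkpoint_scn) AS min_required_scn
  FROM dba_capture;
```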
Since 10g I recommend using rman, but nevertheless, here is the script I wrote back in the 9i days, when rman would eat the archives needed by Streams with great appetite.
#!/usr/bin/ksh
# program : watch_arc.sh
# purpose : check your archive directory and if actual percentage is > MAX_PERC
# then undertake the action coded by -a param
# Author : Bernard Polarski
# Date : 01-08-2000
# 12-09-2005 : added option -s MAX_SIZE
# 20-11-2005 : added option -f to check if an archive is applied on data guard site before deleting it
# 20-12-2005 : added option -z to check if an archive is still needed by logminer in a streams operation
# set -xv
#--------------------------- default values if not defined --------------
# put here default values if you don't want to pass them at run time
MAX_PERC=85
ARC_DIR=
ACTION=
LOG=/tmp/watch_arch.log
EXT_ARC=
PART=2
#------------------------- Function section -----------------------------
get_perc_occup()
{
cd $ARC_DIR
if [ $MAX_SIZE -gt 0 ];then
# size is given in MB, we calculate everything in KB
TOTAL_DISK=`expr $MAX_SIZE \* 1024`
USED=`du -ks . | tail -1| awk '{print $1}'` # in Kb!
else
USED=`df -k . | tail -1| awk '{print $3}'` # in Kb!
if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
elif [ `uname -s` = AIX ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
elif [ `uname -s` = ReliantUNIX-N ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
else
# works on Sun
TOTAL_DISK=`df -b . | sed '/avail/d' | awk '{print $2}'`
fi
fi
USED100=`expr $USED \* 100`
USG_PERC=`expr $USED100 / $TOTAL_DISK`
echo $USG_PERC
}
#------------------------ Main process ------------------------------------------
usage()
{
cat <<EOF
Usage : watch_arc.sh -h
        watch_arc.sh -p <MAX_PERC> -e <EXTENSION> -l -d -m <TARGET_DIR> -r <PART>
                     -t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
                     -s <MAX_SIZE (meg)> -i <SID> -g -f
Note :
-c compress file after move using either compress or gzip (if available)
   if -c is given without -m then the file is compressed in ARCHIVE_DIR
-d Delete selected files
-e Extension of files to be processed
-f Check if log has been applied; requires -i <sid> and -g if v8
-g Version 8 (use svrmgrl instead of sqlplus)
-i Oracle SID
-l List files that will be processed by -d or -m
-h help
-m move file to TARGET_DIR
-p Max percentage above which action is triggered.
   Actions are of type -l, -d or -m
-t ARCHIVE_DIR
-s Perform action if size of target dir is bigger than MAX_SIZE (meg)
-v report action performed in LOGFILE
-r Part of files that will be affected by action :
   2=half, 3=a third, 4=a quarter .... [ default=2 ]
-z Check if log is still needed by logminer (used in streams);
   requires -i <sid> and also -g for Oracle 8i
This program lists, deletes or moves half of all files whose extension is given [ default 'arc' ].
It checks the size of the archive directory, and if the percentage occupancy is above the given limit
it performs the action on the older half of the files.
How to use this prg :
run this file from the crontab, say, each hour.
example
1) Delete archives sharing a common arch disk; when you are at 85% of 2500 MB, delete half of the files
   whose extension is 'arc', using the default affected part (default is -r 2):
   0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
2) Delete archives sharing a common disk with another DB in /archive; act when at 90% of 140G, deleting
   a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba in the POLDEV db (-i) to check they are
   applied (-f is a dataguard option):
   watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
3) Delete archives of DB POLDEV when it reaches 75%, affecting a third of the files, but connect to the DB to check that
   logminer does not still need each archive (-z). This is useful in 9iR2 when using rman, as rman does not support 'delete input'
   in connection with Logminer:
   watch_arc.sh -e arc -t /archive/standby/CITSPRD -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
#------------------------- Function section -----------------------------
if [ "x-$1" = "x-" ];then
usage
exit
fi
MAX_SIZE=-1 # disable this feature if it is not specifically selected
while getopts c:e:p:m:r:s:i:t:v:dhlfgz ARG
do
case $ARG in
e ) EXT_ARC=$OPTARG ;;
f ) CHECK_APPLIED=YES ;;
g ) VERSION8=TRUE;;
i ) ORACLE_SID=$OPTARG;;
h ) usage
exit ;;
c ) COMPRESS_PRG=$OPTARG ;;
p ) MAX_PERC=$OPTARG ;;
d ) ACTION=delete ;;
l ) ACTION=list ;;
m ) ACTION=move
TARGET_DIR=$OPTARG
if [ ! -d $TARGET_DIR ] ;then
echo "Dir $TARGET_DIR does not exist"
exit
fi;;
r) PART=$OPTARG ;;
s) MAX_SIZE=$OPTARG ;;
t) ARC_DIR=$OPTARG ;;
v) VERBOSE=TRUE
LOG=$OPTARG
if [ ! -f $LOG ];then
> $LOG
fi ;;
z) LOGMINER=TRUE;;
esac
done
if [ "x-$ARC_DIR" = "x-" ];then
echo "NO ARC_DIR : aborting"
exit
fi
if [ "x-$EXT_ARC" = "x-" ];then
echo "NO EXT_ARC : aborting"
exit
fi
if [ "x-$ACTION" = "x-" ];then
echo "NO ACTION : aborting"
exit
fi
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
if [ ! "x-$ACTION" = "x-move" ];then
ACTION=compress
fi
fi
if [ "$CHECK_APPLIED" = "YES" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
if [ "$VERSION8" = "TRUE" ];then
ret=`svrmgrl <<EOF
connect internal
select max(sequence#) from v\\$log_history ;
EOF`
LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
else
ret=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off
select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
EOF`
LAST_APPLIED=`echo $ret | awk '{print $1}'`
fi
elif [ "$LOGMINER" = "TRUE" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
var=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt\\$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log\\$ l
where a.ckpt_scn between l.first_change# and l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
dbms_output.put_line(alog);
end;
EOF`
# if there are no archives that must be kept, instead of a number we just get the "PL/SQL successful" message
ret=`echo $var | awk '{print $1}'`
if [ ! "$ret" = "PL/SQL" ];then
LAST_APPLIED=$ret
else
unset LOGMINER
fi
fi
PERC_NOW=`get_perc_occup`
if [ $PERC_NOW -gt $MAX_PERC ];then
cd $ARC_DIR
cpt=`ls -tr *.$EXT_ARC | wc -w`
if [ ! "x-$cpt" = "x-" ];then
MID=`expr $cpt / $PART`
cpt=0
ls -tr *.$EXT_ARC |while read ARC
do
cpt=`expr $cpt + 1`
if [ $cpt -gt $MID ];then
break
fi
if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
if [ $VAR -gt $LAST_APPLIED ];then
continue
fi
fi
case $ACTION in
'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
fi ;;
'delete' ) rm $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
fi ;;
'list' ) ls -l $ARC_DIR/$ARC ;;
'move' ) mv $ARC_DIR/$ARC $TARGET_DIR
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
$COMPRESS_PRG $TARGET_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
fi
else
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
fi
fi ;;
esac
done
else
echo "Warning : the filesystem is full, but not because of archive logs !"
exit
fi
elif [ "x-$VERBOSE" = "x-TRUE" ];then
echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
fi
-
Capture process status waiting for Dictionary Redo: first scn....
Hi
I am facing an issue in Oracle Streams.
The message below is found in the capture state:
waiting for Dictionary Redo: first scn 777777777 (e.g.)
Archive_log_dest=USE_DB_RECOVERY_FILE_DEST
I have a space-related issue...
I restored the archive logs to another partition, e.g. /opt/arc_log.
What should I do?
1) Can the db start reading the archive logs from the above location?
or
2) How do I move some archive logs back to USE_DB_RECOVERY_FILE_DEST from /opt/arc_log so the db starts processing them?
Regards
Hi -
Bad news.
As per note 418755.1
A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
Check if the archived redo log file it is requesting is about 60 days old. You need all archived redo logs from the requested log file onwards; if any are missing then you are out of luck. It doesn't matter that they have already been mined and captured; capture still needs these files for a restart. It has always been like this and IMHO is a significant limitation of Streams.
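Per the note quoted above, shortening checkpoint retention is done with ALTER_CAPTURE; a hedged sketch ('MY_CAPTURE' is a placeholder name, and 7 days is just an illustrative value):

```sql
-- Retain only 7 days of checkpoint metadata instead of the 60-day default;
-- FIRST_SCN then moves forward sooner, and the older Streams metadata
-- in SYSAUX becomes eligible for purging.
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'MY_CAPTURE',
    checkpoint_retention_time => 7);
END;
/
```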
If you cannot recover the log files, then you will need to rebuild the capture process and ensure that any gap in captured data is resynced manually, using tags to fix the data.
Rgds
Mark Teehan
Singapore -
Oracle Streams: System Change Number (SCN) and capture process
Do we have to get the SCN before the capture process is started? If yes, from where does replication start?
Will replication start right from the time when the SCN is captured,
or
will replication start from the time when the capture process is started?
Edited by: [email protected] on Mar 26, 2009 6:04 PM
I am trying to set up Oracle Streams to enable replication for a set of tables.
One of the steps, as per the doc, is to set up/get the SCN, and it is achieved by the following piece of code.
CONNECT STRMADMIN/STRMADMINPW@<CONNECT_STRING_SOURCE>
DECLARE
V_SCN NUMBER;
BEGIN
V_SCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@DB_LINK_TARGET_DB(
SOURCE_OBJECT_NAME => '<SCOTT.EMP>',
SOURCE_DATABASE_NAME => 'SOURCE_DATABASE',
INSTANTIATION_SCN => V_SCN);
END;
/
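After running a block like the one above, the recorded instantiation SCN can be verified on the destination database (a sketch; this is the standard dictionary view for instantiated objects):

```sql
-- On the destination database: confirm the instantiation SCN was recorded
-- for each replicated object.
SELECT source_object_owner, source_object_name,
       source_database, instantiation_scn
  FROM dba_apply_instantiated_objects;
```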
STRMADMIN is a generic user account (the Streams administrator) used to manage Oracle Streams. -
On my mac, photoshop cc "save as" removes capture date from exif data. How do I prevent this?
You had to reinstall CS6 after cancelling the CC as both were from the same Adobe ID. You can always use & keep both CC & CS6 together.
Please refer to the blog:
Can I install both CS6 and CC apps on my computer? « Caveat Lector
Other references are :
Creative Cloud Help | Install, update, or uninstall apps
What is the difference CS6 & CC Versions?
Regards
Rajshree -
CAPTURE process error - missing Archive log
Hi -
I am getting a "cannot open archived log 'xxxx.arc'" message when I try to start a newly created capture process. The archive files have been moved by the DBAs.
Is there a way to set the capture process to start from a new archive ?
I tried
exec DBMS_CAPTURE_ADM.ALTER_CAPTURE ( capture_name => 'STRMADMIN_SCH_CAPTURE', start_scn =>9668840362577);
I got the new scn from DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();.
But I still get the same error.
Any ideas ?
Thanks,
Sadeepa
If you are on 9i, I know that trying to reset the SCN that way won't work. You have to drop and recreate the capture process. You can leave all the rules and rule sets in place, but I think you have to prepare all of the tables again.
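The drop/re-prepare sequence sketched in this reply would look roughly like the following (placeholders throughout: 'MY_CAPTURE' and SCOTT.EMP are illustrative names, and the step must be repeated per replicated table):

```sql
-- Drop only the capture process, keeping its rules and rule sets
-- so they can be reattached to the recreated process.
BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(capture_name => 'MY_CAPTURE');
END;
/

-- Re-prepare each replicated table for instantiation
-- (repeat per table; SCOTT.EMP is a placeholder).
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'SCOTT.EMP');
END;
/
```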
-
The (stopped) Capture process & RMAN
Hi,
We have a working 1-table bi-directional replication with Oracle 10.2.0.4 on SPARC/Solaris.
Every night, RMAN backs up the database and collects/removes the archive logs (delete all inputs).
My understanding from (Oracle Streams Concepts and Administration) is that RMAN will not remove an archived log needed by a capture process (I think for the LogMiner session).
Fine.
But now, if I stop the capture process for a long time (more than a day), whatever the reason,
it's not clear what the behaviour is...
I'm afraid that:
- RMAN will collect the archived logs (since there is no more logminer session because of the stopped capture process)
- When I'll restart the capture process, it will try to start from the last known SCN and the (new) logminer session will not find the redo logs.
If that's correct, is it possible to restart the Capture process with an updated SCN so that I do not run into this problem ?
How to find this SCN ?
(In the case of a long interruption, we have a specific script which synchronize the table. It would be run first before restarting the capture process)
Thanks for your answers.
JD
RMAN backup in 10g is Streams-aware. It will not delete any logs that contain the required_checkpoint_scn and above. This is true only if the capture process is running in the same database (local capture) where the RMAN backup is running.
If you are using downstream capture, then RMAN is not aware of what logs that streams needs and may delete those logs. One additional reason why logs may be deleted is due to space pressure in flash recovery area.
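Before letting RMAN (or anything else) purge archives in a downstream-capture setup, a check of this shape helps (a sketch; run it on the downstream database, and substitute the SCN from the first query into the second):

```sql
-- Lowest SCN the capture process still needs for a restart.
SELECT capture_name, required_checkpoint_scn
  FROM dba_capture;

-- Archive logs that still cover that SCN and therefore must be kept
-- (:required_scn is a bind variable for the value returned above).
SELECT name, first_change#, next_change#
  FROM v$archived_log
 WHERE :required_scn BETWEEN first_change# AND next_change#;
```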
Please take a look at the following documentation:
Oracle® Streams Concepts and Administration
10g Release 2 (10.2)
Part Number B14229-04
CHAPTER 2 - Streams Capture Process
Section - RMAN and Archived Redo Log Files Required by a Capture Process -
Instantiation and start_scn of capture process
Hi,
We are working on stream replication, and I have one doubt abt the behavior of the stream.
During set up, we have to instantiate the database objects whose data will be transferrd during the process. This instantiation process, will create the object at the destination db and set scn value beyond which changes from the source db will be accepted. Now, during creation of capture process, capture process will be assigned a specific start_scn value. Capture process will start capturing the changes beyond this value and will put in capture queue. If in between capture process get aborted, and we have no alternative other than re-creation of capture process, what will happen with the data which will get created during that dropping / recreation procedure of capture process. Do I need to physically get the data and import at the destination db. When at destination db, we have instantiated objects, why not we have some kind of mechanism by which new capture process will start capturing the changes from the least instantiated scn among all instantiated tables ? Is there any other work around than exp/imp when both db (schema) are not sync at source / destination b'coz of failure of capture process. We did face this problem, and could find only one work around of exp/imp of data.
thanx,
Thanks Mr SK. The following query gives some kind of confirmation:
source DB
SELECT SID, SERIAL#, CAPTURE#,CAPTURE_MESSAGE_NUMBER, ENQUEUE_MESSAGE_NUMBER, APPLY_NAME, APPLY_MESSAGES_SENT FROM V$STREAMS_CAPTURE
target DB
SELECT SID, SERIAL#, APPLY#, STATE,DEQUEUED_MESSAGE_NUMBER, OLDEST_SCN_NUM FROM V$STREAMS_APPLY_READER
One more question :
Is there any maximum limit on the number of DBs involved in Oracle Streams?
Ths
SM.Kumar -
Can stream "capture process" skip an archivelog?
DB: 10.2.0.5, on Windows 2003 SP2 32-bits
A Streams capture component in our database is stuck reading one of the archive log files, and its status in the v$streams_capture view is 'CREATING LCR'. It is not moving at all.
I think the archive log is corrupted, and I guess that skipping the log might help?
Any idea?
Find the transaction identifier in the trace file; for example, in this trace the transaction is '0x000a.008.00019347'.
Convert it from hex to decimal; in this example '0x000a.008.00019347' will be '10.8.103239'.
Example of trace file:
++++++++++++ Dumping Current LogMiner Lcr: +++++++++++++++
++ LCR Dump Begin: 0x000007FF3F75D8A0 - cannot_support
op: 255, Original op: 255, baseobjn: 74480, objn: 74480, objv: 1
DF: 0x00000003, DF2: 0x00000010, MF: 0x08240000, MF2: 0x00000000
PF: 0x00000000, PF2: 0x00000000
MergeFlag: 0x03, FilterFlag: 0x01
Id: 1, iotPrimaryKeyCount: 0, numChgRec: 0
NumCrSpilled: 0
RedoThread#: 1, rba: 0x000604.00014fd2.014c
scn: 0x0000.36a4b03c, (scn: 0x0000.36a4b03c, scn_sqn: 1, lcr_sqn: 0)xid: *0x000a.008.00019347*, parentxid: 0x000a.008.00019347, proxyxid: 0x0000.000.00000000, unsupportedReasonCode: 0,
ncol: 5 newcount: 0, oldcount: 0
LUBA: 0x3.c004eb.8.8.122f2
Filter Flag: UNDECIDED
++ KRVXOA Dump Begin:
Object Number: 74480 BaseObjNum: 74480 BaseObjVersion: 1
Then stop the capture process and execute the following procedure:
exec dbms_capture_adm.set_parameter('your_capture_process_name','_ignore_transaction','your_transaction_id_in_decimal_notation');
Now you can restart the capture process and it will ignore the tx. -
Error running Archived-Log Downstream Capture Process
I have created a Archived-Log Downstream Capture Process with ref. to following link
http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
After executing the capture process get following error in trace
============================================================================
Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: localhost.localdomain
Release: 2.6.18-194.el5
Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
Machine: x86_64
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 37
Unix process pid: 13572, image: [email protected] (CP01)
*** 2011-08-20 14:21:38.899
*** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
*** CLIENT ID:() 2011-08-20 14:21:38.899
*** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
*** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
*** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
knlcCopyPartialCapCtx(), setting default poll freq to 0
knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
knlcObtainRuleSetNullLock: rule set name
knlcmaInitCapPrc+
knlcmaGetSubsInfo+
knlqgetsubinfo
subscriber name EMP_DEQ
subscriber dblinke name
subscriber name APPLY_EMP
subscriber dblinke name
knlcmaTerm+
knlcmaTermSrvs+
knlcmaTermSrvs-
knlcmaTerm-
knlcCCAInit()+, err = 26802
knlcnShouldAbort: examining error stack
ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
knlcnShouldAbort: examing error 26802
knlcnShouldAbort: returning FALSE
knlcCCAInit: no combined capture and apply optimization err = 26802
knlzglr_GetLogonRoles: usr = 91,
knlqqicbk - AQ access privilege checks:
userid=91, username=STRMADMIN
agent=STRM05_CAPTURE
knlqeqi()
knlcRecInit:
Combined Capture and Apply Optimization is OFF
Apply-state checkpoint mode is OFF
last_enqueued, last_acked
0x0000.00000000 [0] 0x0000.00000000 [0]
captured_scn, applied_scn, logminer_start, enqueue_filter
0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
flags=0
Starting persistent Logminer Session : 13
krvxats retval : 0
CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
krvxssp retval : 0
krvxsda retval : 0
krvxcfi retval : 0
#1: krvxcfi retval : 0
#2: krvxcfi retval : 0
About to call krvxpsr : startscn: 0x0000.0004688c
state before krvxpsr: 0
dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
*** 2011-08-20 14:21:41.810
Begin knlcDumpCapCtx:*******************************************
Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
Capture Name: STRM05_CAPTURE : Instantiation#: 65
*** 2011-08-20 14:21:41.810
++++ Begin KNST dump for Sid: 146 Serial#: 2274
Init Time: 08/20/2011 14:21:38
++++Begin KNSTCAP dump for : STRM05_CAPTURE
Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
Capture_Message_Number: 0x0000.00000000 [0]
Capture_Message_Create_Time: 01/01/1988 00:00:00
Enqueue_Message_Number: 0x0000.00000000 [0]
Enqueue_Message_Create_Time: 01/01/1988 00:00:00
Total_Messages_Captured: 0
Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
Total_Full_Evaluations: 0
Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
Apply_Name :
Apply_DBLink :
Apply_Messages_Sent: 0
++++End KNSTCAP dump
++++ End KNST DUMP
+++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
Capture_Type: DOWNSTREAM
Version:
Source_Database: ORCL2.LOCALDOMAIN
Use_Database_Link: NO
Logminer_Id: 13 Logfile_Assignment: EXPLICIT
Status: ENABLED
First_Scn: 0x0000.0004688c [288908]
Start_Scn: 0x0000.0004688c [288908]
Captured_Scn: 0x0000.0004688c [288908]
Applied_Scn: 0x0000.0004688c [288908]
Last_Enqueued_Scn: 0x0000.00000000 [0]
Capture_User: STRMADMIN
Queue: STRMADMIN.STREAMS_QUEUE
Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
Checkpoint_Retention_Time: 60
+++ End DBA_CAPTURE dump
+++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
PARALLELISM = 1 Set_by_User: NO
STARTUP_SECONDS = 0 Set_by_User: NO
TRACE_LEVEL = 7 Set_by_User: YES
TIME_LIMIT = -1 Set_by_User: NO
MESSAGE_LIMIT = -1 Set_by_User: NO
MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
WRITE_ALERT_LOG = TRUE Set_by_User: NO
DISABLE_ON_LIMIT = FALSE Set_by_User: NO
DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
SPLIT_THRESHOLD = 1800 Set_by_User: NO
MERGE_THRESHOLD = 60 Set_by_User: NO
+++ End DBA_CAPTURE_PARAMETERS dump
+++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
+++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
++ LogMiner Session Dump Begin::
SessionId: 13 SessionName: STRM05_CAPTURE
Start SCN: 0x0000.00000000 [0]
End SCN: 0x0000.00046c2d [289837]
Processed SCN: 0x0000.0004689e [288926]
Prepared SCN: 0x0000.000468d4 [288980]
Read SCN: 0x0000.000468e2 [288994]
Spill SCN: 0x0000.00000000 [0]
Resume SCN: 0x0000.00000000 [0]
Branch SCN: 0x0000.00000000 [0]
Branch Time: 01/01/1988 00:00:00
ResetLog SCN: 0x0000.00000001 [1]
ResetLog Time: 08/18/2011 16:46:59
DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
krvxvtm: Enabled threads: 1
Current Thread Id: 1, Thread State 0x01
Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
Current Session State: 0x20005, Current LM Compat: 0xb200000
Flags: 0x3f2802d8, Real Time Apply is Off
+++ Additional Capture Information:
Capture Flags: 4425
Logminer Start SCN: 0x0000.0004688c [288908]
Enqueue Filter SCN: 0x0000.0004688c [288908]
Low SCN: 0x0000.00000000 [0]
Capture From Date: 01/01/1988 00:00:00
Capture To Date: 01/01/1988 00:00:00
Restart Capture Flag: NO
Ping Pending: NO
Buffered Txn Count: 0
-- Xid Hash entry --
-- LOB Hash entry --
-- No TRIM LCR --
Unsupported Reason: Unknown
--- LCR Dump not possible ---
End knlcDumpCapCtx:*********************************************
*** 2011-08-20 14:21:41.810
knluSetStatus()+{
*** 2011-08-20 14:21:44.917
knlcapUpdate()+{
Updated streams$_capture_process
finished knlcapUpdate()+ }
finished knluSetStatus()+ }
knluGetObjNum()+
knlsmRaiseAlert: keltpost retval is 0
kadso = 0 0
KSV 1304 error in slave process
*** 2011-08-20 14:21:44.923
ORA-01304: subordinate process error. Check alert and trace logs
knlz_UsrrolDes()
knstdso: state object 0xb644b568, action 2
knstdso: releasing so 0xb644b568 for session 146, type 0
knldso: state object 0xa6d0dea0, action 2 memory 0x0
kadso = 0 0
knldso: releasing so 0xa6d0dea0
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01304: subordinate process error. Check alert and trace logs
Any suggestions???
Output of the above query:
==============================
CAPTURE_NAME STATUS ERROR_MESSAGE
STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
Alert log.xml
=======================
<msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
pid='30921'>
<txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
ORA-01304: subordinate process error. Check alert and trace logs
</txt>
</msg>
The orcl_cp01_30921.trc has the same thing posted in the first message. -
Error while invoking ESB process from a BPEL process
Hi all
We have a requirement to call an ESB process from a BPEL process. We are using an adapter with the ESB's WSDL URL. After deploying the BPEL process, the registered ESB gets called successfully for most values, while some values suddenly return the following error:
*"exception on JaxRpc invoke: HTTP transport error: javax.xml.soap.SOAPException: java.security.PrivilegedActionException: javax.xml.soap.SOAPException: Message send failed: Connection reset"*
The catch here is that if we re-run the process with the same values, the ESB is called successfully! How is it possible for a process to error out and then run normally for the same inputs?
What could be the possible fix for this? I am thankful for any inputs on this.
Vijay
Hi Vijay,
This is a bug and you can refer the metalink note:
"Applying Patch 7445876 Results in Error "java.lang.NullPointerException". [ID 942575.1]" for reference.
Also you can refer the following link:
"http://puchaanirudh.blogspot.com/2008/12/exception-on-jaxrpc-invoke-http.html" also.
Thanks,
Vishwanath. -
Excise invoice capture process
Hi,
I want to know about the excise invoice capture process for a depot plant: which t-code is used for a depot plant, how to do Part 1 and Part 2, and also the reversal process for the same.
Also, what is the difference between the excise invoice capture process for a depot and a non-depot plant?
regards,
zafar
Hi Zafar,
There are no part 1 and part 2 in RG23D for depot scenario. You can update RG23D at the time of MIGO or J1IG "Capture excise invoice for depot".
For cancelling you can use the same transaction. And to send the goods out from Depot plant use T-code J1IJ for updating RG23D.
Rest process remains the same Extraction J2I5 and print through J2I6.
BR -
How to reset a failed asynchronous ABAP process in a process chain?
Hi specialists,
I've got a process chain which triggers (as one of its steps) an ABAP program.
The program should run asynchronously, so the process chain waits until the program reports its ending.
When the ABAP program fails/crashes/dumps, the ending is never reported and the process chain gets stuck.
If I try to re-run that process chain, it fails again with the message "Variant XYZ still executing from previous run".
What I already found so far is the following note:
The process chain management will not know if your program terminates!
Thus if you re-start the chain, this will also terminate, because the old run would not have finished yet. To avoid this, you need to manually set the process in the chain log view in the process monitor to terminate before you start again.
My problem: I cannot find any possibility to set the process to "terminate" in the chain's log view.
Could you please give me a hint as to where I can find that option?
Best regards,
Marco
Hi,
You can remove the process chain from scheduling before starting the process chain again.
You can also edit the entries in the RSPROCESSLOG table before running the process chain.
Alternatively, you can create a custom ABAP process type which handles both the success and the failure case.
Refer : [How To- Trigger subsequent processes in a Process Chain based on the Success or Failure of ABAP program|/people/balaji.venugopal/blog/2009/04/07/how-to-trigger-subsequent-processes-in-a-process-chain-based-on-the-success-or-failure-of-abap-program]
Hope this helps,
Regards,
anil -
LABVIEW.LIB was not called from a LabVIEW process
Hi All,
I've inherited LV code that calls a CIN node to access a motor controller. I'd like to compile this code to a .NET DLL, but receive the following error when calling it from an external source:
I've read the knowledgebase article explaining the problem from here, as well as the following support questions:
http://forums.ni.com/t5/LabVIEW/Labview-lib-was-not-called-from-a-labview-process/m-p/232548
http://forums.ni.com/t5/LabVIEW/Problem-with-lsb-LABVIEW-LIB-was-not-called-from-a-LabVIEW/m-p/48809...
http://forums.ni.com/t5/LabVIEW/Labview-lib-was-not-calld-from-a-labview-process/m-p/718427
http://forums.ni.com/t5/LabVIEW/Building-a-LabVIEW-DLL-with-VIs-that-use-CINs/m-p/632817
The conclusion seems to be that recompiling is the answer. I've tried recompiling the original CIN VI within LabVIEW with no success. Do they mean recompiling the original C code against the newer labview.lib (sorry, I'm not all that familiar with how CIN nodes work)? Any suggestions would be awesome. Thanks.
-Joe
You can't make use of LabVIEW manager functions in non-LabVIEW processes. Basically, unless the C code is for a CIN or DLL that is to be called by LabVIEW (inside the development system or a LabVIEW-built application), any function pulled in from labview.lib is not available. labview.lib is an import library that does not implement any functions itself but simply imports them from the LabVIEW kernel, either the LabVIEW development system or the LabVIEW runtime DLL. And no, you can't just link the LabVIEW runtime DLL into your .NET application: that DLL needs to be started up and initialized in very specific ways that only LabVIEW itself knows about when building an application.
Basically, if you want to recompile the code (yes, in C/C++) for use in a non-LabVIEW application, you also have to remove all the link libraries from the LabVIEW cintools directory and replace any use of functions that are now unavailable (link error: unresolved external reference) with similar functionality from your C runtime library, or implement those functions yourself using C runtime library calls.
Another possibility would be to create a LabVIEW executable that exports the functionality as an ActiveX server. Or, in LabVIEW 2010, you could also choose to create a .NET Interop Assembly from inside the LabVIEW project.
Rolf Kalbermatter
CIT Engineering Netherlands
a division of Test & Measurement Solutions