Scripting the ILA capture process in Vivado 2015.1 using Tcl
Hello,
I'm wondering how I can use a Tcl script to automate the capture process.
The way it works so far is that I have to change the trigger condition manually through the GUI, start triggering, wait for it to upload the waveform (I would really like to bypass this step!), and then download the captured data to a .csv file. I tried the attached script to automate two captures, but it didn't work: I got the same results in test0.csv and test1.csv.
Any ideas?
set_property TRIGGER_COMPARE_VALUE eq5'u1 [get_hw_probes state_reg__0 -of_objects [get_hw_ilas hw_ila_1]]
wait_on_hw_ila hw_ila_1
run_hw_ila hw_ila_1
wait_on_hw_ila hw_ila_1
write_hw_ila_data -csv_file d:/pss/test0.csv [current_hw_ila_data]
set_property TRIGGER_COMPARE_VALUE eq5'u10 [get_hw_probes state_reg__0 -of_objects [get_hw_ilas hw_ila_1]]
wait_on_hw_ila hw_ila_1
run_hw_ila hw_ila_1
wait_on_hw_ila -timeout 0 hw_ila_1
write_hw_ila_data -csv_file d:/pss/test1.csv [current_hw_ila_data]
Hello Pratham,
I have tried that before. It works perfectly for saving one shot of the ILA. However, if you just copy and paste it, it will fail for the second capture (it will save the same data).
What I'm particularly looking for is saving more than one capture; say I would like to automate 1000 captures.
The problem I've encountered is this:
When you try to capture more than one sample, the trigger doesn't stop, and when it does, it saves the same data. To stop it, the tool needs to upload the waveform (which is really time consuming, and I don't need the tool to upload it just to capture the .csv data!), so it slows down the capture process.
In ChipScope this was very easy: there was an option called repetitive trigger, and it captured repeatedly to log1, log2, and so forth. That's what I'm looking for!
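For what it's worth, here is a sketch of such a repetitive-capture loop in Tcl. It is untested on hardware and assumes the hw_ila_1 instance from the script above; the loop count and file path are placeholders:

```tcl
# Sketch of a repetitive capture loop; assumes hw_ila_1 from the script above.
set ila [get_hw_ilas hw_ila_1]
for {set i 0} {$i < 1000} {incr i} {
    run_hw_ila $ila                 ;# arm the trigger
    wait_on_hw_ila $ila             ;# block until the ILA has triggered and filled
    upload_hw_ila_data $ila         ;# pull the new capture into current_hw_ila_data
    write_hw_ila_data -csv_file d:/pss/test${i}.csv [current_hw_ila_data]
}
```

If I'm reading the Tcl reference correctly, the missing upload_hw_ila_data call may be why both runs produced identical CSV files: without it, current_hw_ila_data can keep returning the previous capture.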
Similar Messages
-
Vivado 2015.1 PS7-GMII EMIO broken. Solution inside!
Hi,
I have found a huge bug in Vivado 2015.1 when using PS7 GMII on EMIO in a BD design.
It is impossible to use the PS7 ENET while routing the GMII through EMIO. The problem is that ENET0_GMII_TXD, ENET0_GMII_TX_EN and ENET0_GMII_TX_ER are permanently tied to ground.
Solution:
The following file defines the ps7 wrapper when the ps7 instance is created in the BD
$(XILINX_INSTALL_DIR)Vivado/2015.1/data/ip/xilinx/processing_system7_v5_5/ttcl/processing_system7.ttcl
In this file the ENET GMII TX signals are *REMOVED*!!! (They are commented out.)
So I activated the signals and ethernet is working again.
I have created a patch (see below) which shows the problem.
So Xilinx, is there any reasonable explanation for this? I guess a lot of mainboards require an EMIO Ethernet configuration.
--- processing_system7.ttcl 2015-05-20 13:42:34.978734005 +0200
+++ processing_system7.ttcl.org 2015-04-22 07:30:05.000000000 +0200
@@ -1070,8 +1070,8 @@
wire [11:0] M_AXI_GP1_RID_FULL;
-wire ENET0_GMII_TX_EN_i;
-wire ENET0_GMII_TX_ER_i;
+//wire ENET0_GMII_TX_EN_i;
+//wire ENET0_GMII_TX_ER_i;
reg ENET0_GMII_COL_i;
reg ENET0_GMII_CRS_i;
@@ -1655,8 +1655,8 @@
always @(posedge ENET0_GMII_TX_CLK)
begin
ENET0_GMII_TXD <= ENET0_GMII_TXD_i;
- ENET0_GMII_TX_EN <= ENET0_GMII_TX_EN_i;
- ENET0_GMII_TX_ER <= ENET0_GMII_TX_ER_i;
+ ENET0_GMII_TX_EN <= 1'b0; //ENET0_GMII_TX_EN_i;
+ ENET0_GMII_TX_ER <= 1'b0;//ENET0_GMII_TX_ER_i;
ENET0_GMII_COL_i <= ENET0_GMII_COL;
ENET0_GMII_CRS_i <= ENET0_GMII_CRS;
end
@@ -3134,9 +3134,9 @@
.DMA3RSTN (DMA3_RSTN ),
.EMIOCAN0PHYTX (CAN0_PHY_TX ),
.EMIOCAN1PHYTX (CAN1_PHY_TX ),
- .EMIOENET0GMIITXD (ENET0_GMII_TXD_i ),
- .EMIOENET0GMIITXEN (ENET0_GMII_TX_EN_i),
- .EMIOENET0GMIITXER (ENET0_GMII_TX_ER_i),
+ .EMIOENET0GMIITXD (), // (ENET0_GMII_TXD_i ),
+ .EMIOENET0GMIITXEN (), // (ENET0_GMII_TX_EN_i),
+ .EMIOENET0GMIITXER (), // (ENET0_GMII_TX_ER_i),
.EMIOENET0MDIOMDC (ENET0_MDIO_MDC),
.EMIOENET0MDIOO (ENET0_MDIO_O ),
.EMIOENET0MDIOTN (ENET0_MDIO_T_n ),
Does not work for me.
I re-ran my 15.1 project in 15.2 and am still getting the ENET1_TX pins tied to GND.
Tried to regenerate the block design and unset/set ENET1 - no effect.
Tried to apply the patch for 15.1 (added the directory in D:\Xilinx\Vivado\2015.1\patches\AR64531_Vivado_2015_1_preliminary_rev1), then ran in 15.1 - no effect.
I noticed that in processing_system7_v5_5_processing_system7 the parameter C_EN_EMIO_ENET1 = 0, which blocks the TX port connections.
Help needed.
-
A script that captures the coordinates of the mouse clicks and saves them into a file
Hello,
I'm trying to create a cartoon using a movie (I've chosen Blade Runner) as the base. I've got the real movie and I've exported all the pictures using VirtualDub. Now I have a lot of images to modify. I would like to replace the actors' faces with faces generated by FaceGen Modeller. I'm thinking about how to make the whole process automatic, because I have a lot of images to manage. I've chosen to use Automate BPA, because it seems the best tool for this. I'm a newbie, so this is my first attempt using Adobe Photoshop and Automate BPA.
I wrote a little script. It takes a face generated with FaceGen Modeller and tries to put it above the original actor's face. But it doesn't work very well and I'm not really satisfied, because the process is not fully automated. To save some time I need to write a script that captures the coordinates of the mouse when I click over the faces and saves them into a file, so that Automate BPA can read the coordinates from that file and put the face generated with FaceGen Modeller above the original face. I think that Automate BPA is not good for this. I think that two coordinates are enough, X and Y. They can be the coordinates of the nose, because it is always in the middle of every face. It is also relevant to know how big the layer of the new face should be. This is the Automate BPA code that I wrote:
<AMVARIABLE NAME="nome_foto" TYPE="TEXT"></AMVARIABLE>
<AMVARIABLE NAME="estensione_foto" TYPE="TEXT"></AMVARIABLE>
<AMSET VARIABLENAME="nome_foto">br</AMSET>
<AMSET VARIABLENAME="estensione_foto">.jpeg</AMSET>
<AMVARIABLE NAME="numero_foto" TYPE="NUMBER"></AMVARIABLE>
<AMVARIABLE NAME="coord_x" TYPE="NUMBER"></AMVARIABLE>
<AMVARIABLE NAME="coord_y" TYPE="NUMBER"></AMVARIABLE>
<AMWINDOWMINIMIZE WINDOWTITLE="Aggiungere_layer - AutoMate BPA Agent Task Builder" />
<AMWINDOWMINIMIZE WINDOWTITLE="AutoMate BPA Server Management Console - localhost (Administrator)" AM_ONERROR="CONTINUE" />
<AMENDPROCESS PROCESS="E:\Programmi_\Adobe Photoshop CS5\Photoshop.exe" AM_ONERROR="CONTINUE" />
<AMRUN FILE="%"E:\Programmi_\Adobe Photoshop CS5\Photoshop.exe"%" />
<AMPAUSE ACTION="waitfor" SCALAR="15" />
<AMSENDKEY>{CTRL}o</AMSENDKEY>
<AMPAUSE ACTION="waitfor" SCALAR="1" />
<AMINPUTBOX RESULTVARIABLE="numero_foto">Inserire numero FOTO di partenza -1</AMINPUTBOX>
<AMINCREMENTVARIABLE RESULTVARIABLE="numero_foto" />
<AMPAUSE ACTION="waitfor" SCALAR="1" />
<AMMOUSEMOVEOBJECT WINDOWTITLE="Apri" OBJECTNAME="%nome_foto & numero_foto & estensione_foto%" OBJECTCLASS="SysListView32" OBJECTTYPE="ListItem" CHECKOBJECTNAME="YES" CHECKOBJECTCLASS="YES" CHECKOBJECTTYPE="YES" />
<AMMOUSECLICK CLICK="double" />
<AMPAUSE ACTION="waitfor" SCALAR="10" />
<AMSENDKEY>{CTRL}+</AMSENDKEY>
<AMPAUSE ACTION="waitfor" SCALAR="20" />
<AMSENDKEY>l</AMSENDKEY>
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSEMOVE MOVEX="429" MOVEY="281" RELATIVETO="screen" />
<AMMOUSECLICK />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSEMOVE MOVEX="659" MOVEY="281" RELATIVETO="screen" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSECLICK />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSEMOVE MOVEX="659" MOVEY="546" RELATIVETO="screen" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSECLICK />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSEMOVE MOVEX="429" MOVEY="546" RELATIVETO="screen" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSECLICK />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSEMOVE MOVEX="429" MOVEY="281" RELATIVETO="screen" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSECLICK />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMSENDKEY>v</AMSENDKEY>
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSECLICK CLICK="hold_down" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSEMOVE MOVEX="131" MOVEY="99" RELATIVETO="screen" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSEMOVE MOVEX="99" MOVEY="162" RELATIVETO="screen" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMMOUSECLICK CLICK="release" />
<AMPAUSE ACTION="waitfor" SCALAR="2" />
<AMINPUTBOX RESULTVARIABLE="coord_x">Inserire coordinata X</AMINPUTBOX>
<AMINPUTBOX RESULTVARIABLE="coord_y">Inserire coordinata Y</AMINPUTBOX>
<AMMOUSEMOVE MOVEX="200" MOVEY="200" RELATIVETO="screen" />
<AMMOUSECLICK CLICK="hold_down" />
<AMMOUSEMOVE MOVEX="%coord_x%" MOVEY="%coord_y%" RELATIVETO="position" />
<AMMOUSECLICK />
and this is a short video to explain better what I want to do :
http://www.flickr.com/photos/26687972@N03/5331705934/
In the last scene of the video you will see the script asking me to input the X and Y coordinates of the nose. This request is time consuming. For this reason I want to write a script that automatically captures the coordinates of the mouse clicks. The only thing to do should be to click over the nose, and the script should do the rest. As "c.pfaffenbichler" suggested here: http://forums.adobe.com/thread/775219, I could explore 3 ways:
1) use the Color Sampler Tool’s input with a conventional Photoshop Script.
2) use After Effects, which would provide a better solution.
3) Photoshop’s Animation Panel might also offer some easier way as it might be possible to load two movies (or one movie and one image) and animate the one with the rendered head in relation to the other.
Since I'm a total newbie in graphics and animation, could you help me explore these ways? Thanks for your cooperation.
These are the coordinates of the contours of the face that you see in the picture. Can you explain to me how they are calculated? The coordinates of the first column are intuitive, but I'm not able to understand how the coordinates of the second one are calculated.
Thanks.
COL 1 COL 2 (how are these values calculated?)
307.5000 182.0000 m
312.5000 192.0000 l
321.5000 194.0000 l
330.5000 193.0000 l
335.0000 187.0000 l
337.0000 180.5000 l
340.0000 174.0000 l
338.5000 165.5000 l
336.0000 159.0000 l
331.5000 153.0000 l
324.5000 150.0000 l
317.0000 154.0000 l
312.5000 161.0000 l
309.0000 173.0000 l
307.5000 182.0000 l
-
The (stopped) Capture process & RMAN
Hi,
We have a working 1-table bi-directional replication with Oracle 10.2.0.4 on SPARC/Solaris.
Every night, RMAN backs up the database and collects/removes the archive logs (delete all inputs).
My understanding from the Oracle Streams Concepts & Administration guide is that RMAN will not remove an archived log needed by a capture process (I think for the logminer session).
Fine.
But now, suppose I stop the capture process for a long time (more than a day), whatever the reason.
It's not clear what the behaviour is...
I'm afraid that:
- RMAN will collect the archived logs (since there is no longer a logminer session, because of the stopped capture process)
- When I restart the capture process, it will try to start from the last known SCN and the (new) logminer session will not find the redo logs.
If that's correct, is it possible to restart the capture process with an updated SCN so that I do not run into this problem?
How do I find this SCN?
(In the case of a long interruption, we have a specific script which synchronizes the table. It would be run first, before restarting the capture process.)
Thanks for your answers.
JD

RMAN backup in 10g is Streams-aware. It will not delete any logs that contain the required_checkpoint_scn and above. This is true only if the capture process is running in the same database (local capture) where the RMAN backup is running.
If you are using downstream capture, then RMAN is not aware of which logs Streams needs and may delete those logs. One additional reason why logs may be deleted is space pressure in the flash recovery area.
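As a rough sketch (column names as I recall them from the 10.2 dictionary; please verify against your release), you can see which SCN RMAN honours and which logs Streams still needs:

```sql
-- Which SCN must still be covered by archived logs (local capture)
SELECT capture_name, status, required_checkpoint_scn
  FROM dba_capture;

-- Archived logs at or above that SCN are the ones RMAN will not delete
SELECT name, first_change#, next_change#
  FROM v$archived_log
 WHERE next_change# > (SELECT MIN(required_checkpoint_scn) FROM dba_capture);
```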
Please take a look at the following documentation:
Oracle® Streams Concepts and Administration
10g Release 2 (10.2)
Part Number B14229-04
CHAPTER 2 - Streams Capture Process
Section - RMAN and Archived Redo Log Files Required by a Capture Process -
(Urgent) How to run a sqlldr script in an OWB process flow?
Dear all:
In my Oracle warehouse I have to load many *.dat files into the database with sqlldr in an OWB process flow. In the OWB process flow, I use the External Process activity to run the sqlldr file with the following configuration:
1:======external process==========
command : /app/ftpfile/sqlldr2.sh
parameter list:
success_threshold:0
script:
================================
2: create a file location in the FILE LOCATION node:
=============
ODS_LOCAL_LOC
=============
3: in the runtime repository I register the location
============
user name: oracle (since sqlldr should run as the oracle user)
password : oracle
host name: localhost
root path: /app/ftpfile/
============
4:configure the process flow
============
path settings
working locations:ods_local_loc
============
After deploying them successfully to the runtime repository,
I run it, and it shows me the following error:
==========
SQL*Loader-704: Internal error: ulconnect: OCIServerAttach [0]
ORA-12545: Connect failed because target host or object does not exist
===========
Please help me!
With best regards!

Hello,
our developers were getting this error code just the other day. They are using the "sqlplus_exec_template" script to initiate these things. In our case, I had to do two things:
1) Modify their "initiator" script (the one that connects as the runtime access user and then calls the "template") - it has to use TNS connectivity: "user/passwd@service_name"
2) Create a TNS entry (server side) for the "service_name" above.
Now these SQL*LOADER mappings run successfully.
Alex. -
RMAN-08137: can't delete archivelog because the capture process needs it
When I use the RMAN utility to delete the old archivelogs on the server, it shows: RMAN-08137: can't delete archivelog because the capture process needs it. How do I resolve the problem?
It is likely that the "extract" process still requires those archive logs, as it is monitoring transactions that have not yet been "captured" and written out to a GoldenGate trail.
Consider the case of doing the following: ggsci> add extract foo, tranlog, begin now
After pressing "return" on that "add extract" command, any new transactions will be monitored by GoldenGate. Even if you never start extract foo, the GoldenGate + RMAN integration will keep those logs around. Note that this GG + RMAN integration is a relatively new feature, as of GG 11.1.1.1; if "add extract foo" prints "extract is registered", then you have this functionality.
Another common "problem" is deleting "extract foo", but forgetting to "unregister" it. For example, to properly "delete" a registered "extract", one has to run "dblogin" first:
ggsci> dblogin userid <userid> password <password>
ggsci> delete extract foo
However, if you just do the following, the extract is deleted, but not unregistered. Only a warning is printed.
ggsci> delete extract foo
<warning: to unregister, run the command "unregister...">
So then one just has to follow the instructions in the warning:
ggsci> dblogin ...
ggsci> unregister extract foo logretention
But what if you didn't know the name of the old extracts, or were not even aware if there were any existing registered extracts? You can run the following to find out if any exist:
sqlplus> select count(*) from dba_capture;
The actual extract name is not exactly available, but it can be inferred:
sqlplus> select capture_name, capture_user from dba_capture;
<blockquote>
CAPTURE_NAME CAPTURE_USER
================ ==================
OGG$_EORADF4026B1 GGS
</blockquote>
In the above case, my actual "capture" process was called "eora". All OGG processes will be prefixed by OGG in the "capture_name" field.
Btw, you can disable this "logretention" feature by adding in a tranlog option in the param file,
TRANLOGOPTIONS LOGRETENTION DISABLED
Or just manually "unregister" the extract. (Not doing a "dblogin" before "add extract" should also work in theory... but it doesn't. The extract is still registered after startup. Not sure if that's a bug or a feature.)
Cheers,
-Michael -
How to use the Logic Analyser layout in Vivado 2015.1
Hi,
I found this quite annoying after upgrading to Vivado 2015.1. Whenever I use debug signals, it brings me to the new logic analyser layout shown below after the bitstream is downloaded. The tiny waveform window, however, is barely usable. So I have to maximize the waveform window every time after downloading, which disables the trigger window. But then when I want to add trigger probes, I can't find an easy way to get to the trigger setup window again, so I have to reset the dashboard, then set the trigger, then maximize the waveform window again... Can anyone give me some pointers on how to use the new layout efficiently? Also, the scroll bar for the signal names in the waveform window is not available any more. This is also annoying, as my signal names are quite long and I have to make sure the panel is wide enough to show the name, otherwise it will be shown like /topmodule/submodule/....._V.
Jimmy
Hi Lior,
The scroll bar is replaced with a feature (Elide Settings) that shortens the name of the probes to fit into the column size you select.
If there’s enough space in the column, obviously its setting has no effect, and you see the entire probe name.
If there’s not enough space, then based on this elide setting, the probe name will fit in the column either from the beginning, middle, or end of the probe name (see the attached image).
This way it's easier for the user to see the portion of the probe names that they need without having to scroll left or right.
This setting is inside the waveform options on the left side of the waveform viewer, as you can see in the images.
Hope this helps. -
Limit the Capture process to just INSERTS
Hi,
Source: 10.2.0.3
Downstream Capture DB: 10.2.0.3
Destination DB: 11.1.0.7
Is it possible to limit the Streams Capture process to only include INSERTS? We are only interested in INSERTS into the table and are not concerned with capturing any updates or deletes that are performed against the table.
When configuring the capture and apply I've set:
include_dml => true,
Is it possible to have the capture and apply processes run at a finer granularity and just capture and apply the INSERTs that have been performed against the source database tables?
Thanks in advance.

Go to Morgan's Library at www.psoug.org and look up DBMS_STREAMS_ADM.
Scroll down to where the demo shows "and_condition => ':lcr.get_command_type() != ''DELETE''');"
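To make that concrete, here is a hedged sketch of an INSERT-only table rule; the table, capture, and queue names are placeholders, and the and_condition syntax follows the DBMS_STREAMS_ADM documentation:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name    => 'scott.emp',               -- placeholder table
    streams_type  => 'capture',
    streams_name  => 'capture_stream',          -- placeholder capture name
    queue_name    => 'strmadmin.streams_queue', -- placeholder queue
    include_dml   => TRUE,
    include_ddl   => FALSE,
    -- keep only INSERT LCRs; UPDATEs and DELETEs never enter the queue
    and_condition => ':lcr.get_command_type() = ''INSERT''');
END;
/
```

A matching condition would go on the apply-side rules as well if you create them separately.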
That should point you in the right direction. -
Is it possible to move some of the capture processes to another rac node?
Hi All,
Is it possible to move some of the ODI (Oracle Data Integrator) capture processes running on node1 to node2? Once moved, will they work as usual? If it's possible, please provide me with the steps.
Appreciate your response
Best Regards
SK.

Hi Cezar,
Thanks for your post. I have a related question regarding this,
Is it really necessary to have multiple capture and multiple apply processes, one for each schema in ODI? With automatic configuration, ODI seems to create a capture and a related apply process for each schema, which I guess leads to the specific performance problem (high CPU etc.) I mentioned in my other post: Re: Is it possible to move some of the capture processes to another rac node?
Is there a way to use just one capture and one apply process for all of the schemas in ODI?
Thanks a million.
Edited by: oyigit on Nov 6, 2009 5:31 AM -
Internal Error when creating Capture Process
Hi,
I get the following when trying to create my capture process:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
2 3 queue_table => 'capture_queue_table',
queue_name => 'capture_queue');
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'apply_queue_table',
queue_name => 'apply_queue');
END;
4 5 6 7 8 9 10 11
BEGIN
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcbgtcr_4], [32492], [0], [1], [],
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 408
ORA-06512: at line 2
Any ideas?
Cheers,
Warren

Make sure that you have upgraded to the 9.2.0.2 patchset and, as part of the migration to 9.2.0.2, that you have run the catpatch.sql script.
-
Opening xapp1082 with Vivado 2015.2 gives errors (pl_eth)
After sourcing pl_eth.tcl I hit the first problem: I had to manually update (in pl_eth_bd.tcl) the AXI Ethernet version from 6.2 to 7.0.
Having fixed this problem, I get the following:
INFO: [Device 21-403] Loading part xc7z045ffg900-2 Wrote : </home/parallels/vivado/xapp1082_2014_4/hardware/vivado/runs_pl_eth/pl_eth_sfp.srcs/sources_1/bd/design_pl_eth/ip/design_pl_eth_axi_ethernet_0/bd_0/bd_0.bd>
Wrote : </home/parallels/vivado/xapp1082_2014_4/hardware/vivado/runs_pl_eth/pl_eth_sfp.srcs/sources_1/bd/design_pl_eth/ip/design_pl_eth_axi_ethernet_0/bd_0/bd_0.bd>
WARNING: [PS7-6] The applied preset does not match with board preset. You may not get expected settings for board. The ZC706 preset is designed for ZC706 board.
ERROR: [BD 41-80] Exec TCL: Specified object '' does not exist. Please use an existing object name
ERROR: [BD 5-14] Error: running create_bd_segment.
ERROR: [Common 17-39] 'create_bd_addr_seg' failed due to earlier errors. while executing "create_bd_addr_seg -range 0x40000 -offset 0x43C00000 [get_bd_addr_spaces processing_system7/Data] [get_bd_addr_segs axi_ethernet/s_axi/Reg] SEG_axi_et..." (procedure "create_root_design" line 103) invoked from within "create_root_design """ (file "pl_eth_bd.tcl" line 300) while executing "source pl_eth_bd.tcl" (file "pl_eth.tcl" line 21)
set_property source_mgmt_mode DisplayOnly [current_project]

I went through the annoying process of opening the xapp1082 design using Vivado 2014.4 (I sourced pl_eth.tcl), then I opened the resulting project using Vivado 2015.2, with no luck. The "upgrade" process got stuck while upgrading one of the IPs.
Result is that the bitstream cannot be generated.
Can anyone point me in the right direction?
Furthermore, why is Vivado not provided as a web application? Like Google Docs, I mean. This would avoid pushing gigabytes of software to customers. I would pay for such a service.
Regards,
Antonio. -
Capture process issue... archive log missing!
Hi,
The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and does not proceed beyond this state to capture updates made to the table.
We have accidentally lost archivelogs and have no backups of them.
Now I am going to recreate the capture process.
How can I start the capture process from a new SCN?
And what is the better way to remove the archive log files from the central server, given that their SCNs are used by capture processes?
Thanks,
Faziarain
Edited by: [email protected] on Aug 12, 2009 12:27 AM

When using DBMS_STREAMS_ADM to add a capture, also perform a DBMS_CAPTURE_ADM.BUILD. You will see a 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of that archivelog is the first SCN suitable for starting capture.
RMAN is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use RMAN to purge the archives, then you need to check the minimum required SCN in your system by script and act accordingly.
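A sketch of that recreation step (untested here; the BUILD signature is per the DBMS_CAPTURE_ADM documentation):

```sql
-- Write a fresh data dictionary build to the redo and obtain a new first SCN
SET SERVEROUTPUT ON
DECLARE
  new_first_scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => new_first_scn);
  DBMS_OUTPUT.PUT_LINE('first_scn = ' || new_first_scn);
END;
/

-- Confirm which archivelog carries that dictionary build
SELECT name, first_change#
  FROM v$archived_log
 WHERE dictionary_begin = 'YES';
```

The new capture process can then be created with first_scn set to that value.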
Since 10g I recommend using RMAN, but nevertheless, here is the script I wrote in 9i, in the old days when RMAN would eat the archives needed by Streams with appetite.
#!/usr/bin/ksh
# program : watch_arc.sh
# purpose : check your archive directory and if actual percentage is > MAX_PERC
# then undertake the action coded by -a param
# Author : Bernard Polarski
# Date : 01-08-2000
# 12-09-2005 : added option -s MAX_SIZE
# 20-11-2005 : added option -f to check if an archive is applied on data guard site before deleting it
# 20-12-2005 : added option -z to check if an archive is still needed by logminer in a streams operation
# set -xv
#--------------------------- default values if not defined --------------
# put here default values if you don't want to code then at run time
MAX_PERC=85
ARC_DIR=
ACTION=
LOG=/tmp/watch_arch.log
EXT_ARC=
PART=2
#------------------------- Function section -----------------------------
get_perc_occup()
{
cd $ARC_DIR
if [ $MAX_SIZE -gt 0 ];then
# size is given in mb, we calculate all in K
TOTAL_DISK=`expr $MAX_SIZE \* 1024`
USED=`du -ks . | tail -1| awk '{print $1}'` # in Kb!
else
USED=`df -k . | tail -1| awk '{print $3}'` # in Kb!
if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
elif [ `uname -s` = AIX ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
elif [ `uname -s` = ReliantUNIX-N ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
else
# works on Sun
TOTAL_DISK=`df -b . | sed '/avail/d' | awk '{print $2}'`
fi
fi
USED100=`expr $USED \* 100`
USG_PERC=`expr $USED100 / $TOTAL_DISK`
echo $USG_PERC
}
#------------------------ Main process ------------------------------------------
usage()
{
cat <<EOF
Usage : watch_arc.sh -h
watch_arc.sh -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
-t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
-s <MAX_SIZE (meg)> -i <SID> -g -f
Note :
-c compress file after move using either compress or gzip (if available)
if -c is given without -m then file will be compressed in ARCHIVE DIR
-d Delete selected files
-e Extention of files to be processed
-f Check if log has been applied, required -i <sid> and -g if v8
-g Version 8 (use svrmgrl instead of sqlplus /
-i Oracle SID
-l List file that will be processing using -d or -m
-h help
-m move file to TARGET_DIR
-p Max percentage above which the action is triggered.
Actions are of type -l, -d or -m
-t ARCHIVE_DIR
-s Perform action if size of target dir is bigger than MAX_SIZE (meg)
-v report action performed in LOGFILE
-r Part of files that will be affected by action :
2=half, 3=a third, 4=a quarter .... [ default=2 ]
-z Check if log is still needed by logminer (used in streams),
it requires -i <sid> and also -g for Oracle 8i
This program lists, deletes or moves half of all files whose extension is the given one [default 'arc'].
It checks the size of the archive directory, and if the percentage occupancy is above the given limit
it performs the action on the older half of the files.
How to use this program:
run this file from the crontab, say, each hour.
example
1) Delete archives on a shared arch disk; when you are at 85% of 2500 MB, delete half of the files
whose extension is 'arc', using the default affected fraction (default is -r 2)
0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
2) Delete archives sharing a disk with other DBs in /archive; act at 90% of 140G, deleting
a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba in the POLDEV db (-i) to check that they are
applied (-f is a dataguard option)
watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
3) Delete archives of DB POLDEV when it reaches 75%, affecting a third of the files, but connect to the DB to check that
logminer does not need the archive (-z). This is useful in 9iR2 when using RMAN, as RMAN does not support "delete input"
in connection with logminer.
watch_arc.sh -e arc -t /archive/standby/CITSPRD -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
#------------------------- Function section -----------------------------
if [ "x-$1" = "x-" ];then
usage
exit
fi
MAX_SIZE=-1 # disable this feature if it is not specifically selected
while getopts c:e:p:m:r:s:i:t:v:dhlfgz ARG
do
case $ARG in
e ) EXT_ARC=$OPTARG ;;
f ) CHECK_APPLIED=YES ;;
g ) VERSION8=TRUE;;
i ) ORACLE_SID=$OPTARG;;
h ) usage
exit ;;
c ) COMPRESS_PRG=$OPTARG ;;
p ) MAX_PERC=$OPTARG ;;
d ) ACTION=delete ;;
l ) ACTION=list ;;
m ) ACTION=move
TARGET_DIR=$OPTARG
if [ ! -d $TARGET_DIR ] ;then
echo "Dir $TARGET_DIR does not exits"
exit
fi;;
r) PART=$OPTARG ;;
s) MAX_SIZE=$OPTARG ;;
t) ARC_DIR=$OPTARG ;;
v) VERBOSE=TRUE
LOG=$OPTARG
if [ ! -f $LOG ];then
> $LOG
fi ;;
z) LOGMINER=TRUE;;
esac
done
if [ "x-$ARC_DIR" = "x-" ];then
echo "NO ARC_DIR : aborting"
exit
fi
if [ "x-$EXT_ARC" = "x-" ];then
echo "NO EXT_ARC : aborting"
exit
fi
if [ "x-$ACTION" = "x-" ];then
echo "NO ACTION : aborting"
exit
fi
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
if [ ! "x-$ACTION" = "x-move" ];then
ACTION=compress
fi
fi
if [ "$CHECK_APPLIED" = "YES" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
if [ "$VERSION8" = "TRUE" ];then
ret=`svrmgrl <<EOF
connect internal
select max(sequence#) from v\\$log_history ;
EOF`
LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
else
ret=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off
select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
EOF`
LAST_APPLIED=`echo $ret | awk '{print $1}'`
fi
elif [ "$LOGMINER" = "TRUE" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
var=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt\\$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log\\$ l
where a.ckpt_scn between l.first_change# and l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
dbms_output.put_line(alog);
end;
EOF`
# if there are no mandatory keep archives, instead of a number we just get "PL/SQL successful"
ret=`echo $var | awk '{print $1}'`
if [ ! "$ret" = "PL/SQL" ];then
LAST_APPLIED=$ret
else
unset LOGMINER
fi
fi
PERC_NOW=`get_perc_occup`
if [ $PERC_NOW -gt $MAX_PERC ];then
cd $ARC_DIR
cpt=`ls -tr *.$EXT_ARC | wc -w`
if [ ! "x-$cpt" = "x-" ];then
MID=`expr $cpt / $PART`
cpt=0
ls -tr *.$EXT_ARC |while read ARC
do
cpt=`expr $cpt + 1`
if [ $cpt -gt $MID ];then
break
fi
if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
if [ $VAR -gt $LAST_APPLIED ];then
continue
fi
fi
case $ACTION in
'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
fi ;;
'delete' ) rm $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
fi ;;
'list' ) ls -l $ARC_DIR/$ARC ;;
'move' ) mv $ARC_DIR/$ARC $TARGET_DIR
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
$COMPRESS_PRG $TARGET_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
fi
else
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
fi
fi ;;
esac
done
else
echo "Warning : The filesystem is not full due to archive logs !"
exit
fi
elif [ "x-$VERBOSE" = "x-TRUE" ];then
echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
fi -
Dtrace script to capture time spent executing shell commands
Hi all,
My first DTrace script, to capture the time spent by Oracle executing various commands. Is this correct? It seems to work...
#!/usr/bin/sh
/usr/sbin/dtrace -n '
#pragma D option quiet
#pragma D option switchrate=10

syscall::exec:entry, syscall::exece:entry
/uid == 900/
{
        self->t = timestamp;
}

syscall::exec:return, syscall::exece:return
/uid == 900 && self->t/
{
        printf("%-20d %s\n", (timestamp - self->t), curpsinfo->pr_psargs);
        self->t = 0;
}
'
thanks for any feedback.
Regards
Stuart
Hi Stuart -
Just to be clear, you wanted to know the time Oracle takes to exec() a command, or to actually run a short-lived process from start to finish? The script accomplishes the former, but I'm curious why you'd want that particular value.
Michael -
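If the goal is actually the full wall-clock runtime of each short-lived command (Michael's second interpretation) rather than the cost of the exec() itself, a coarse shell-only approximation is to timestamp around the command. A minimal sketch, assuming second-level resolution is enough; `CMD` is just a placeholder:

```shell
#!/bin/sh
# Coarse wall-clock timing of a command from plain shell (second resolution).
# This measures start-to-finish runtime, not just the exec() cost that a
# syscall::exec probe would report.
CMD="sleep 1"

start=`date +%s`
sh -c "$CMD"
end=`date +%s`

elapsed=`expr $end - $start`
echo "$CMD took ${elapsed}s"
```

For sub-second precision or per-process attribution across a whole system you still need DTrace (or at least `ptime`/`truss`); this is only useful when you control the point where the command is launched.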
Excise invoice capture process
Hi,
I want to know about the excise invoice capture process for a depot plant: which t-code is used for a depot plant, how to do Part 1 and Part 2, and also the reversal process for the same.
Also, what is the difference between the excise invoice capture process for a depot and a non-depot plant?
regards,
zafar
Hi Zafar,
There are no Part 1 and Part 2 entries in RG23D for the depot scenario. You can update RG23D at the time of MIGO, or with J1IG "Capture excise invoice for depot".
For cancelling you can use the same transaction, and to send the goods out from the depot plant use T-code J1IJ to update RG23D.
The rest of the process remains the same: extraction with J2I5 and printing through J2I6.
BR -
Instantiation and start_scn of capture process
Hi,
We are working on Streams replication, and I have one doubt about the behavior of the capture process.
During setup we have to instantiate the database objects whose data will be transferred. Instantiation creates the objects at the destination DB and sets the SCN beyond which changes from the source DB will be accepted. When the capture process is created, it is assigned a specific start_scn value; it captures changes beyond that value and puts them in the capture queue.
If the capture process aborts in between, and we have no alternative other than re-creating it, what happens to the data created during that drop/re-create window? Do I need to physically extract that data and import it at the destination DB? Given that the objects are already instantiated at the destination, why is there no mechanism for the new capture process to start capturing from the lowest instantiation SCN among all instantiated tables? Is there any workaround other than exp/imp when the source and destination schemas are out of sync because of a capture process failure? We did face this problem, and could find only one workaround: exp/imp of the data.
thanx,
Thanks Mr SK.
The following queries give some kind of confirmation:
Source DB:
SELECT SID, SERIAL#, CAPTURE#,CAPTURE_MESSAGE_NUMBER, ENQUEUE_MESSAGE_NUMBER, APPLY_NAME, APPLY_MESSAGES_SENT FROM V$STREAMS_CAPTURE
Target DB:
SELECT SID, SERIAL#, APPLY#, STATE,DEQUEUED_MESSAGE_NUMBER, OLDEST_SCN_NUM FROM V$STREAMS_APPLY_READER
One more question :
Is there any maximum limit on the number of DBs involved in Oracle Streams?
Ths
SM.Kumar