Oracle Streams - Downstream new table setup
Hi,
I want to add a new table to my existing Oracle Streams setup. Below are the steps. Is this OK?
1) stop apply/capture
2) Add a new rule to the existing capture process, which I believe internally calls DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION as well.
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'NIG.BUILD_VIEWS',
streams_type => 'CAPTURE',
streams_name => 'NIG_CAPTURE',
queue_name => 'STRMADMIN.NIG_Q',
include_dml => true,
include_ddl => true,
source_database => 'PNID.LOUDCLOUD.COM');
END;
3) Import the dump, which will instantiate the table:
impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part2_srm_expdp_%U.dmp exclude=grant,statistics,ref_constraint
4) start apply/capture
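As a side note: if the instantiation SCN ever has to be set by hand (for example, if the import does not record it), here is a minimal sketch using the documented API, with names taken from the steps above and assuming the PNID.LOUDCLOUD.COM database link exists on the downstream site:
DECLARE
iscn NUMBER;
BEGIN
-- read the current SCN at the source over the database link
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@PNID.LOUDCLOUD.COM;
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'NIG.BUILD_VIEWS',
source_database_name => 'PNID.LOUDCLOUD.COM',
instantiation_scn => iscn);
END;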
Have you done it this way? What was the result?
regards
Similar Messages
-
I am now getting the error below on the downstream database. This constraint doesn't exist on the downstream side because I am excluding constraints when importing.
ORA-00001: unique constraint (PCAT_NT.PK01_DCS_CAT_CATINFO) violated
Is it mandatory to include constraints in an Oracle Streams setup?
My steps are as below:
1) Create the downstream capture process
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE (
queue_name => 'STRMPCAT_QUEUE',
capture_name => 'DOWNSTRMPCAT_CAPTURE',
rule_set_name => null,
start_scn => null,
source_database => 'PCAT',
use_database_link => true,
first_scn => null,
logfile_assignment => 'IMPLICIT');
END;
2) Add schema rule on Downstream database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'PCAT_NT',
streams_type => 'CAPTURE',
streams_name => 'DOWNSTRMPCAT_CAPTURE',
queue_name => 'STRMPCAT.STRMPCAT_QUEUE',
include_dml => true,
include_ddl => false,
source_database => 'PCAT');
END;
3) Initializing SCN on Downstream database
--on source
expdp system SCHEMAS=PCAT_NT DIRECTORY=DATA_PUMP_DIR DUMPFILE=PCAT_expdp.dmp exclude=user,grant,statistics,synonyms,procedure,constraint FLASHBACK_SCN=7620529799615
--on target
impdp system SCHEMAS=PCAT_NT DIRECTORY=DATA_PUMP_DIR DUMPFILE=PCAT_expdp.dmp
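As a cross-check, the instantiation SCN recorded on the destination should match the FLASHBACK_SCN used for the export; here is a minimal sketch of setting it explicitly (only needed when the import does not record it), reusing the SCN from the expdp above:
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'PCAT_NT',
source_database_name => 'PCAT',
instantiation_scn => 7620529799615);
END;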
Regards

Are you stating that the constraint PCAT_NT.PK01_DCS_CAT_CATINFO does not exist on the destination database and that you are getting a constraint violation error on that same database? That strikes me as exceptionally unlikely.
How do you know that the constraint was not created on the destination database? I would tend to suspect that you created the constraint, potentially unintentionally.
I would also be interested in why you're getting duplicate rows. If there are no duplicate rows in the source database, duplicates in the destination would imply that you're doing something wrong in the setup (e.g. the SCN in your export is incorrect) or that you have something in the Streams config which is causing duplicates (e.g. a custom DML handler).
Justin -
hi,
I'm trying to configure one-way Oracle Streams (table level).
My source and destination databases are 10.2.0.4; the destination is RAC (three nodes) and the source is a single node.
Please help if there is some configuration required in RAC.

Hello,
Please find below the Oracle RAC-specific configuration for implementing an Oracle bidirectional Streams setup.
#Propagation
Create propagations with the queue_to_queue parameter set to TRUE, so the propagation follows the owning instance of the queue.
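A minimal sketch, with schema, queue, and database names as placeholders (none of them are from this thread):
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'APPUSER',
streams_name => 'SRC_TO_DST_PROP',
source_queue_name => 'strmadmin.capture_src_q',
destination_queue_name => 'strmadmin.apply_dst_q@dst_db',
include_dml => true,
include_ddl => false,
source_database => 'SRC_DB',
queue_to_queue => true);
END;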
-- Assign Primary / Secondary Instance IDs
BEGIN
dbms_aqadm.alter_queue_table(queue_table => 'capture_srctab',
primary_instance => 1,
secondary_instance => 2);
dbms_aqadm.alter_queue_table(queue_table => 'apply_srctab',
primary_instance => 1,
secondary_instance => 2);
END;
All Streams processing is done at the owning instance of the queue used by
the Streams client. To determine the owning instance of each ANYDATA queue
in a database, run the following query:
SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, t.OWNER_INSTANCE
FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
q.QUEUE_TABLE = t.QUEUE_TABLE AND
q.OWNER = t.OWNER;
#tnsnames.ora
service_name = global_name = db_name
Please find the metalink document
10gR2 Streams Recommended Configuration [ID 418755.1]
Regards
Hitgon -
Hi,
I am in the process of setting up Oracle Streams schema-level replication on version 10.2.0.3. I was able to set up replication for one table properly. Now I want to add 10 more tables to the schema-level replication. A few questions regarding this:
1. If I create new tables in the source, do I have to create the tables in the target database manually, or do I have to do an export with STREAMS_INSTANTIATION=Y?
2. Can you tell me the Metalink note ID to read more on this topic?
thanks & regards
parag

The same capture and apply processes can be used to replicate other tables. The following steps should meet your need:
Say table NEW is the new table to be added with owner SANTU
downstr_cap is the capture process which is already running
downstr_apply is the apply process which is already there
1. Now stop the apply process
2. Stop the capture process
3. Add the new table to the capture process using a positive (inclusion) rule
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SANTU.NEW',
streams_type => 'capture',
streams_name => 'downstr_cap',
queue_name => 'strmadmin.DOWNSTREAM_Q',
include_dml => true,
include_ddl => true,
source_database => ' Name of the source database ',
inclusion_rule => true);
END;
4. Take an export of the new table with the "OBJECT_CONSISTENT=Y" option
5. Import the table at the destination with the "STREAMS_INSTANTIATION=Y" option (see the command sketch after this list)
6. Start the apply process
7. Start the capture process
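A sketch of the export/import commands for steps 4-5 with classic exp/imp (the connection and file names are illustrative only, not from the thread):
exp strmadmin/<password> tables=SANTU.NEW file=new_tab.dmp object_consistent=y
imp strmadmin/<password> fromuser=SANTU touser=SANTU tables=NEW file=new_tab.dmp streams_instantiation=y
-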
Setup between Oracle Streams and MQ
Hi All,
I am trying to create a setup between Oracle Streams and Message Queue (MQ). I have already installed the MQ client on my machine and the Messaging Gateway is also working fine. Following the setup docs, I created a user having all the grants they provide in the docs, and created the queue table, queue, DML handler, and table rules, and started the queue; after that we configured the capture and apply processes for that user. In other words, the entire setup is in place, but our data is not propagating the message to MQ.
So if anyone has specific setup code then please provide it, or give any suggestions you have.
thanks & regards
Sanjeev

Hi Sanjeev,
There is a special forum dedicated to Oracle Streams: Streams
You may want to try there.
Extra tip: post the code you used. That way it is easier for people to see what you might have done wrong.
Regards,
Rob. -
Oracle Streams - Table instantiation
Hi,
I have included DDL rules in the capture process. Everything goes fine except when I am creating a new table in the source. It gives the error below.
ORA-26687: no instantiation SCN provided for "PCAT_NT"."" in source database "PCAT"
My DDL rules are as below:
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'PCAT_NT',
streams_type => 'CAPTURE',
streams_name => 'PCAT_NT_CAPTURE',
queue_name => 'STRMADMIN.STRMPCAT_QUEUE',
include_dml => false,
include_ddl => true,
source_database => 'PCAT',
inclusion_rule => TRUE,
and_condition => '(:ddl.get_command_type() = ''TRUNCATE TABLE'')');
END;
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'PCAT_NT',
streams_type => 'CAPTURE',
streams_name => 'PCAT_NT_CAPTURE',
queue_name => 'STRMADMIN.STRMPCAT_QUEUE',
include_dml => false,
include_ddl => true,
source_database => 'PCAT',
inclusion_rule => TRUE,
and_condition => '(:ddl.get_command_type() = ''ALTER TABLE'')');
END;
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'PCAT_NT',
streams_type => 'CAPTURE',
streams_name => 'PCAT_NT_CAPTURE',
queue_name => 'STRMADMIN.STRMPCAT_QUEUE',
include_dml => false,
include_ddl => true,
source_database => 'PCAT',
inclusion_rule => TRUE,
and_condition => '(:ddl.get_command_type() = ''CREATE INDEX'')');
END;
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'PCAT_NT',
streams_type => 'CAPTURE',
streams_name => 'PCAT_NT_CAPTURE',
queue_name => 'STRMADMIN.STRMPCAT_QUEUE',
include_dml => false,
include_ddl => true,
source_database => 'PCAT',
inclusion_rule => TRUE,
and_condition => '(:ddl.get_command_type() = ''CREATE TABLE'')');
END;
Regards

I followed document ID 733691.1 for the downstream setup and couldn't see the DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS procedure there.
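For ORA-26687 on a newly created table, the usual remedy is to record a schema-wide instantiation SCN on the destination so that DDL-created tables inherit one; a minimal sketch, assuming a PCAT database link is available to the Streams administrator:
DECLARE
iscn NUMBER;
BEGIN
-- read the current SCN at the source over the database link
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@PCAT;
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'PCAT_NT',
source_database_name => 'PCAT',
instantiation_scn => iscn);
END;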
Regards -
Hi,
I'm having two datastores, A and B. The whole datastore is replicated in a two-way replication scheme.
My question is: what are the steps to be taken if I'm adding a new table to this setup?
After adding the new table in both datastores, replication is not happening for the new table.
Do I need to do a duplicate operation on one of the datastores after creating the new table in the other?
Regards
Pratheej

Hi Pratheej,
Please confirm my understanding that you are using legacy replication (CREATE REPLICATION as opposed to CREATE ACTIVE STANDBY PAIR). For legacy datastore-level replication, when you create a new table in a replicated datastore it is created as 'EXCLUDED' (i.e. as if it had existed at the time you created the replication scheme but you had explicitly EXCLUDED it). Hence to get that table into replication you need to 'ALTER REPLICATION ... INCLUDE ...'.
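A rough sketch of that ALTER for a datastore-level scheme (the scheme, element, and table names are placeholders, and the exact clause depends on how the scheme was defined, so verify it against the ALTER REPLICATION reference for your TimesTen release):
ALTER REPLICATION r1
ALTER ELEMENT e1 DATASTORE
INCLUDE TABLE ttuser.newtab;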
If you are adding multiple tables you can do them all in one 'hit', so in steps 5 and 7 you would do all the tables at the same time; hence only one duplicate is needed regardless of the number of tables being added.
If you were to use ACTIVE STANDBY PAIR replication then you could do this much more easily using DDL Replication.
Chris -
New tables in 12.1.3 not in 12.1.1 of Oracle Inventory module
Hi All,
I was analyzing 12.1.3 and found the following new tables in 12.1.3 compared to 12.1.1:
MTL_CLIENT_PARAMETERS
MTL_BILLING_RULE_HEADERS_B
MTL_BILLING_RULE_HEADERS_TL
MTL_BILLING_RULE_LINES
MTL_TXNS_HISTORY
Can anyone tell whether these tables contribute new functionality in Oracle Inventory? If not, then what is the purpose of these tables in 12.1.3?
Appreciate Help.
Thanks,
SK

Oracle introduced new functionality called Billing Rules in 12.1.3.
See Oracle Warehouse Management Release Notes, Release 12.1.3 [ID 1101620.1] for details of this new functionality.
Hope this helps,
Sandeep Gandhi -
Oracle 10.2.0.4:
I am new to Oracle Streams and just reading docs at this point. I read in the http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf doc that BLOBs are not supported by Streams. I am just looking for a basic Streams configuration with some rule processing which will send LCRs from a source queue to a destination queue. And as I understand it, I can do that by using an ANYDATA payload. We have some tables with BLOB data.

It's all a balancing act. If you absolutely need both data centers processing transactions simultaneously, you'll need Streams.
Let's start with the simplest possible case of this: two data centers A and B, with databases 1 and 2. Database 1 is in data center A, database 2 is in data center B. If database 1 fails, would you be able to shift traffic to database 2 relatively easily? Assuming that you're building in functionality to shift load between databases, which is normally the case when you're building this sort of distributed application, it may be easier to do this sort of shift regardless of the reason that database 1 fails.
If you have a standby database in each data center (1A as the standby for database 1, 2A as the standby for database 2), when 1 fails, you have to figure out whether whatever caused 1 to fail will also cause 1A to fail. If data center A is having connectivity or power issues, for example, you would have to shift traffic to 2 rather than failing 1 over to 1A. On the other hand, if it was an isolated server failure, you could either shift traffic to 2 or fail over to 1A. There is some risk that having a more complex failure scenario makes it more likely that someone makes a mistake-- there will be a number of failover steps that you'd do only if you're failing from 1 to 1A and other steps that you'd do if you were shifting traffic from 1 to 2 and some steps that are common-- and makes it more difficult to fully test all the scenarios. On the other hand, there may well be benefits to having more options to respond to different sorts of failures. And politics/ reporting structure as well as geography plays a role here-- if the data centers are on different continents, shifting traffic is probably much less desirable than if you have two US data centers.
If, rather than having standbys 1A and 2A, database 1 and 2 were really multi-node RAC clusters, both database 1 and database 2 would be able to survive most sorts of localized hardware failure (i.e. one node can fail on database 1 without affecting whether database 1 is up and processing transactions). If there was a data center wide failure, you'd still have to shift traffic. But one server dying in a pile wouldn't be an issue. Of course, there would be a handful of events that could take down the entire RAC cluster without affecting the data center where a standby could potentially be used (i.e. the SAN for the cluster fails but the standby is using a different SAN). Those may not be particularly likely, however, so it may make sense not to bother with contingency planning for them and just assume that anything that knocks out all the nodes of the cluster forces traffic to be shifted to 2 and that it wouldn't be worth trying to maintain a standby for those scenarios.
There are lots of trade-offs here. You have simplicity of setup, you have simplicity of failover, you have robustness, etc. And there are going to be cases where you realistically need to take a stab at predicting how likely various events are, which gets pretty deeply into hardware, setup, and politics (e.g. how likely a server is to fail depends on whether you've bought a high-end server with doubly-redundant-everything or a commodity Linux box; how likely a data center is to fail depends on the data center's redundancy measures and your level of confidence in those measures, etc.).
Justin -
Oracle Streams not working as LogMiner is down
Hi,
The Oracle Streams capture process is not capturing any updates made on the table for which the capture & apply processes are configured.
The capture and apply processes are running fine, showing ENABLED as their status, with no errors. But no new records are captured in 'streams_queue_table' when I update a record in the table which is configured for capturing changes.
This setup was working until I got 'ORA-01341: LogMiner out-of-memory' in the alert.log file. I guess LogMiner is no longer capturing the updates from the redo log.
The current alert log shows the following lines for the LogMiner init process:
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 10M
But the same log looked like this before:
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 10M
LOGMINER: session# = 1, reader process P002 started with pid=18 OS id=5812
LOGMINER: session# = 1, builder process P003 started with pid=36 OS id=3304
LOGMINER: session# = 1, preparer process P004 started with pid=37 OS id=1496

We can clearly see the reader, builder & preparer processes are not starting after I got the out-of-memory exception in LogMiner.
To allocate more space to LogMiner, I tried to assign a tablespace to LogMiner, but I got two exceptions which contradict each other.
SQL> exec DBMS_LOGMNR.END_LOGMNR();
BEGIN DBMS_LOGMNR.END_LOGMNR(); END;
*ERROR at line 1:
ORA-01307: no LogMiner session is currently active
ORA-06512: at "SYS.DBMS_LOGMNR", line 76
ORA-06512: at line 1
SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts');
BEGIN DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts'); END;
*ERROR at line 1:
ORA-01356: active logminer sessions found
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 232
ORA-06512: at line 1
When I tried stopping LogMiner the exception was 'no LogMiner session is currently active', but when I tried to set up the tablespace the exception was 'active LogMiner sessions found'. I am not sure how to resolve this issue.
Please let me know how to resolve this issue.
Thanks
siva

The LogMiner session associated with a capture process is a special kind of session which is called a "persistent session". You will not be able to stop it using DBMS_LOGMNR. This package controls only non-persistent sessions.
To stop the persistent LogMiner session you must stop the capture process.
However, I think your problem is more related to a lack of RAM than to tablespace (i.e. disk) space. Try to increase the size of the SGA allocated to LogMiner by setting the capture parameter _SGA_SIZE. I can see you are using the default of 10M, which may not be enough for your case. Of course, you will have to increase the values of the init parameters streams_pool_size and sga_target/sga_max_size accordingly, to avoid other memory problems.
To set the _SGA_SIZE parameter, use the PL/SQL procedure DBMS_CAPTURE_ADM.SET_PARAMETER. The example below would set it to 100 MB:
begin
DBMS_CAPTURE_ADM.set_parameter('<name of capture process>', '_SGA_SIZE', '100');
end;
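And the matching init-parameter bump mentioned above (the size is illustrative only; assumes an spfile):
-- make sure the Streams pool can back the larger LogMiner SGA
alter system set streams_pool_size = 256M scope = both;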
I hope this helps.
Ilidio. -
Oracle Streams - first_scn and start_scn
Hi,
My first_scn is 7669917207423 and start_scn is 7669991182403 in DBA_CAPTURE view.
Once I start the capture, from which SCN will it start capturing from the archive logs?
Regards,

I am using Oracle Streams on version 10.2.0.4. It's an Oracle downstream setup; the capture as well as the apply runs on the target database.
Regards,
Below is the setup doc.
1.1 Create the Streams Queue
conn STRMADMIN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'NIG_Q_TABLE',
queue_name => 'NIG_Q',
queue_user => 'STRMADMIN');
END;
1.2 Create apply process for the Schema
BEGIN
DBMS_APPLY_ADM.CREATE_APPLY(
queue_name => 'NIG_Q',
apply_name => 'NIG_APPLY',
apply_captured => TRUE);
END;
1.3 Setting up parameters for Apply
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'disable_on_error','n');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'parallelism','6');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_dynamic_stmts','Y');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_hash_table_size','1000000');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_TXN_BUFFER_SIZE',10);
/********** STEP 2.- Downstream capture process *****************/
2.1 Create the downstream capture process
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE (
queue_name => 'NIG_Q',
capture_name => 'NIG_CAPTURE',
rule_set_name => null,
start_scn => null,
source_database => 'PNID.LOUDCLOUD.COM',
use_database_link => true,
first_scn => null,
logfile_assignment => 'IMPLICIT');
END;
2.2 Setting up parameters for Capture
exec DBMS_CAPTURE_ADM.ALTER_CAPTURE (capture_name=>'NIG_CAPTURE',checkpoint_retention_time=> 2);
exec DBMS_CAPTURE_ADM.SET_PARAMETER ('NIG_CAPTURE','_SGA_SIZE','250');
2.3 Add the table level rule for capture
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'NIG.BUILD_VIEWS',
streams_type => 'CAPTURE',
streams_name => 'NIG_CAPTURE',
queue_name => 'STRMADMIN.NIG_Q',
include_dml => true,
include_ddl => true,
source_database => 'PNID.LOUDCLOUD.COM');
END;
/**** Step 3: Initializing SCN on Downstream database - start from here *************/
import
=================
impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part1_srm_expdp_%U.dmp table_exists_action=replace exclude=grant,statistics,ref_constraint logfile=NIG1.log status=300
/********** STEP 4.- Start the Apply process ********************/
sqlplus STRMADMIN
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY');
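The doc above ends by starting only the apply; presumably the matching final step is to start the downstream capture as well, e.g.:
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'NIG_CAPTURE');
-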
Having trouble setting up Oracle streams. Help!
I'm running Oracle 10g Enterprise server and am trying to set up a streams connection between two databases on the server.
I'm using the Enterprise Manager grid control console to do this. I log into database_1 and I go to the streams section and pick the "Streams Global, Schema, Table and Subset Replication Wizard" link in the setup options.
I create my Streams admin account. Question: does the Streams administrator have to be an account with the DBA role? I just set my Streams admin up the same as my 'sys' account.
I go on to the Configure Streams: Destination Database step.
I put in the host name, port, SID, and Streams admin user and password for database_1.
When I click Next I get the following error:
"There is a problem with the destination connection information. It could be due to invalid host credentials and / or invalid Streams Administrator credentials."
I'm pretty sure my connection information is correct. I can connect to both databases using SQL Developer with the same connection info. And I think I properly set up the Streams admin for my database_2. So I'm not sure why I'm getting the error.
Can anyone help?

I had a similar problem. Another issue that came along with this was that I was unable to set the credentials for one of the monitored servers in my Streams configuration. It turns out that the agent I was using wasn't compatible with my GRID installation. I upgraded my agent, verified that I could set the preferred credentials, then successfully set up Streams.
Hope that this helps. -
Help on Oracle Streams 11g configuration
Hi Streams experts
Can you please validate the following creation process steps?
What I need Streams to do is one-way replication of the AR schema from one database to another. Both DML and DDL shall be replicated.
I would also need your help on the maintenance steps, controls, and procedures.
2 databases
1 src as source database
1 dst as destination database
replication type: one-way replication of the entire schema FaeterBR
Step 1. Set all databases in archivelog mode.
Step 2. Change initialization parameters for Streams. The Streams pool
size and NLS_DATE_FORMAT require a restart of the instance.
SQL> alter system set global_names=true scope=both;
SQL> alter system set undo_retention=3600 scope=both;
SQL> alter system set job_queue_processes=4 scope=both;
SQL> alter system set streams_pool_size= 20m scope=spfile;
SQL> alter system set NLS_DATE_FORMAT=
'YYYY-MM-DD HH24:MI:SS' scope=spfile;
SQL> shutdown immediate;
SQL> startup
Step 3. Create Streams administrators on the src and dst databases,
and grant required roles and privileges. Create default tablespaces so
that they are not using SYSTEM.
---at the src
SQL> create tablespace streamsdm datafile
'/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
---at the replica:
SQL> create tablespace streamsdm datafile
'/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
---at both sites:
SQL> create user streams_adm
identified by streams_adm
default tablespace strepadm01
temporary tablespace temp;
SQL> grant connect, resource, dba, aq_administrator_role to
streams_adm;
SQL> BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
grantee => 'streams_adm',
grant_privileges => true);
END;
Step 4. Configure the tnsnames.ora at each site so that a connection
can be made to the other database.
Step 5. With the tnsnames.ora squared away, create a database link for
the streams_adm user at both SRC and DST. With the init parameter
global_name set to True, the db_link name must be the same as the
global_name of the database you are connecting to. Use a SELECT from
the table global_name at each site to determine the global name.
SQL> select * from global_name;
SQL> connect streams_adm/streams_adm@SRC
SQL> create database link DST
connect to streams_adm identified by streams_adm
using 'DST';
SQL> select sysdate from dual@DST;
SQL> connect streams_adm/streams_adm@DST
SQL> create database link SRC
connect to streams_adm identified by streams_adm
using 'SRC';
SQL> select sysdate from dual@SRC;
Step 6. Control what schema shall be replicated
FaeterBR is the schema to be replicated
Step 7. Add supplemental logging to the FaeterBR schema on all the
tables?
SQL> Alter table FaeterBR.tb1 add supplemental log data
(ALL) columns;
SQL> alter table FaeterBR.tb2 add supplemental log data
(ALL) columns;
etc...
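Rather than typing one ALTER per table, here is a small sketch that generates the statements for the whole schema (assumes the owner is stored in uppercase as FAETERBR):
BEGIN
-- loop over every table in the schema and add supplemental logging
FOR t IN (SELECT table_name FROM dba_tables WHERE owner = 'FAETERBR') LOOP
EXECUTE IMMEDIATE 'alter table "FAETERBR"."' || t.table_name ||
'" add supplemental log data (all) columns';
END LOOP;
END;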
Step 8. Create Streams queues at the primary and replica database.
---at SRC (primary):
SQL> connect streams_adm/streams_adm@ORCL
SQL> BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'streams_adm.FaeterBR_src_queue_table',
queue_name => 'streams_adm.FaeterBR_src_queue');
END;
---At DST (replica):
SQL> connect streams_adm/streams_adm@STR10
SQL> BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'streams_adm.FaeterBR_dst_queue_table',
queue_name => 'streams_adm.FaeterBR_dst_queue');
END;
Step 9. Create the capture process on the source database (SRC).
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name =>'FaeterBR',
streams_type =>'capture',
streams_name =>'FaeterBR_src_capture',
queue_name =>'FaeterBR_src_queue',
include_dml =>true,
include_ddl =>true,
include_tagged_lcr =>false,
source_database => NULL,
inclusion_rule => true);
END;
Step 10. Instantiate the FaeterBR schema at DST by doing export/import. Can I use Data Pump now to do that? (See the Data Pump sketch after this step.)
---AT SRC:
exp system/superman file=FaeterBR.dmp log=FaeterBR.log
object_consistent=y owner=FaeterBR
---AT DST:
---Create FaeterBR tablespaces and user:
create tablespace FaeterBR datafile
'/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
create tablespace ws_app_idx datafile
'/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
create user FaeterBR identified by FaeterBR_
default tablespace FaeterBR
temporary tablespace temp;
grant connect, resource to FaeterBR;
imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
touser=FaeterBR streams_instantiation=y
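On the Data Pump question in step 10: yes; here is a hedged sketch of the equivalent commands (the directory object and the SCN placeholder are assumptions; with a consistent export, Data Pump import records the instantiation SCN from the dump's metadata):
---AT SRC:
expdp system/superman SCHEMAS=FaeterBR DIRECTORY=DATA_PUMP_DIR DUMPFILE=FaeterBR.dmp FLASHBACK_SCN=<current_scn_at_export>
---AT DST:
impdp system/123db SCHEMAS=FaeterBR DIRECTORY=DATA_PUMP_DIR DUMPFILE=FaeterBR.dmp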
Step 11. Create a propagation job at the source database (SRC).
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name =>'FaeterBR',
streams_name =>'FaeterBR_src_propagation',
source_queue_name =>'streams_adm.FaeterBR_src_queue',
destination_queue_name=>'streams_adm.FaeterBR_dst_queue@dst',
include_dml =>true,
include_ddl =>true,
include_tagged_lcr =>false,
source_database =>'SRC',
inclusion_rule =>true);
END;
Step 12. Create an apply process at the destination database (DST).
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name =>'FaeterBR',
streams_type =>'apply',
streams_name =>'FaeterBR_Dst_apply',
queue_name =>'FaeterBR_dst_queue',
include_dml =>true,
include_ddl =>true,
include_tagged_lcr =>false,
source_database =>'SRC',
inclusion_rule =>true);
END;
Step 13. Create substitution key columns for all the tables of the FaeterBR schema on DST that don't have a primary key.
The column combination must provide a unique value for Streams.
SQL> BEGIN
DBMS_APPLY_ADM.SET_KEY_COLUMNS(
object_name =>'FaeterBR.tb2',
column_list =>'id1,names,toys,vendor');
END;
Step 14. Configure conflict resolution at the replication db (DST).
Is there any easier method applicable to the whole schema?
DECLARE
cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
cols(1) := 'id';
cols(2) := 'names';
cols(3) := 'toys';
cols(4) := 'vendor';
DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
object_name =>'FaeterBR.tb2',
method_name =>'OVERWRITE',
resolution_column=>'FaeterBR',
column_list =>cols);
END;
Step 15. Enable the capture process on the source database (SRC).
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'FaeterBR_src_capture');
END;
Step 16. Enable the apply process on the replication database (DST).
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'FaeterBR_DST_apply');
END;
Step 17. Test Streams propagation of rows from the source (SRC) to the replica (DST).
AT ORCL:
insert into FaeterBR.tb2 values (
31000, 'BAMSE', 'DR', 'DR Lejetoej');
AT STR10:
connect FaeterBR/FaeterBR
select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
Any other tests that can be made?

Check the Metalink doc 301431.1 and validate:
How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
Cheers. -
Oracle Financials AP 11i tables Vs R12 tables
Hello,
I have a package that works in an Oracle Apps 11i instance. This package uses 11i tables in its cursors. We are moving this package to R12, and thus it has to run in R12 using R12 tables. I have heard that a lot of tables have changed from 11i to R12 in Oracle Financials. Here is the code.
Some of the AP tables used in 11i were AP_BANK_ACCOUNTS_ALL, AP_BANK_BRANCHES, AP_CHECKS_ALL,
AP_INV_SELECTION_CRITERIA_ALL,
and fnd_lookup_values_vl. I need to know their R12 counterparts or the R12 tables that need to be used here.
SELECT DISTINCT abb.attribute1 attribute1
FROM APPS.ap_bank_accounts_all ABA,
APPS.ap_bank_branches ABB,
XXAP.XXAP_CD_PAYBATCH_APPR_OUT XCOUT,
APPS.AP_INV_SELECTION_CRITERIA_ALL AISC
WHERE aisc.checkrun_id = xcout.batch_id
AND aisc.bank_account_id = aba.bank_account_id
AND aba.bank_branch_id = abb.bank_branch_id;
SELECT ABB.ATTRIBUTE1 MESSAGE_ID
FROM APPS.AP_BANK_ACCOUNTS_ALL ABA,
APPS.AP_BANK_BRANCHES ABB,
APPS.AP_CHECKS_ALL CHK,
APPS.AP_INV_SELECTION_CRITERIA_ALL AISC,
APPS.fnd_lookup_values_vl FLV,
XXAP.XXAP_APP_OUT XCOUT
WHERE ABA.bank_branch_id = ABB.bank_branch_id
AND AISC.bank_account_id = ABA.bank_account_id
AND XCOUT.batch_id = AISC.checkrun_id
AND CHK.checkrun_id = AISC.checkrun_id
AND CHK.PAYMENT_METHOD_LOOKUP_CODE = AISC.PAYMENT_METHOD_LOOKUP_CODE
AND ABA.ATTRIBUTE4 = FLV.LOOKUP_CODE

933951 wrote:
Hello Srini,
Thank you for the reply. Appreciate it.
I have taken a look at the zip file from that Steven Chen blog, but I guess that does not really help me at the moment. It tells us changes have been made, but what needs to be replaced with which tables hasn't been mentioned.
I have taken a look at the document "Bank Setups in R12 Question [ID 434195.1]" and it provided me the information below. Is it right if I replace the tables on the left with the tables on the right?
AP_BANK_ACCOUNTS_ALL --> CE_BANK_ACCOUNTS
AP_BANK_ACCOUNTS_USES_ALL --> CE_BANK_ACCT_USES_ALL
AP_BANK_BRANCHES --> CE_BANK_BRANCHES_V

Correct - please see this MOS doc:
R12 Cash Management 'How To' documents [ID 580516.1]
AP_CHECKS_ALL -- ??
AP_INV_SELECTION_CRITERIA_ALL -- ??

These two tables are still valid for R12. Please see:
What are the new tables in R12 for Payables and to what R12 objects are obsoleted 11i tables mapped [ID 1290116.1]
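Putting that mapping together, the first query above might be rewritten along these lines in R12 (a sketch only: the join of CE_BANK_ACCOUNTS.BANK_BRANCH_ID to CE_BANK_BRANCHES_V.BRANCH_PARTY_ID and the attribute column are assumptions to verify against the view definitions):
SELECT DISTINCT cbb.attribute1 attribute1
FROM APPS.ce_bank_accounts CBA,
APPS.ce_bank_branches_v CBB,
XXAP.XXAP_CD_PAYBATCH_APPR_OUT XCOUT,
APPS.AP_INV_SELECTION_CRITERIA_ALL AISC
WHERE aisc.checkrun_id = xcout.batch_id
AND aisc.bank_account_id = cba.bank_account_id
AND cba.bank_branch_id = cbb.branch_party_id;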
Is the above derived information right? Can you please help me with this? Thank you.
Also, I have seen a lot of threads with the same requirement, so I guess this should help a lot of other people.
Thank you

HTH
Srini -
Oracle Streams & Oracle Real Application Clusters
Hello... I'm developing a new replication system for my company using Oracle Streams. I have already achieved data replication to a downstream database, but now I would like to do it in a RAC environment. So, I will appreciate any help you can give me. Best regards, walny
I've been researching, but now I have another doubt. I have a cluster of five instances, two of them downstream, one primary and one secondary. I don't know if the standby redo log files configured for the primary instance will be the same for the secondary instance. What I want to achieve is HA of the replication environment, so if I configure standby redo log files on both instances when just one group would solve the problem, I'll be wasting resources.
I hope you can help me.
regards