Implementing Oracle Streams Step by Step in Oracle 10g

How do I implement Oracle Streams in Oracle 10g? Please provide a step-by-step document on Oracle Streams implementation.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14229/toc.htm
Chapters 27 and 28.
I found this with a few mouse clicks!!!
Sybrand Bakker
Senior Oracle DBA

Similar Messages

  • Oracle Streams implementation

    Hi all,
    I am a newbie. We have a production DB on an HP-UX 11.23 server and a new staging DB on a RedHat Linux server.
    We now have to synchronize the data to the staging server for reporting purposes.
    What would be the better option for data synchronization?
    1. Oracle Streams
    2. Oracle GoldenGate
    or
    3. Shell scripting
    For these implementations, do we have to bounce the production DB or not?

    You did not describe much about the staging server. The answer depends on the SLA, the licensing, and the nature of the staging server.
    In all cases, forget Korn shell; RMAN is better in this case, and this advice comes from somebody who has been doing Korn shell daily for 20 years.
    There is no reason to pay for GoldenGate: Streams is free and built into Enterprise Edition, and a master/slave Streams setup is easy.
    If your staging server is a replica of production, then if your DB is small, the SLA on the target (reporting DB) allows a shutdown, and you don't need the latest data (say, one day back), script an automated refresh using the RMAN DUPLICATE command.
    If the staging area accumulates data, then Streams is easy, and you can manage old data by dropping partitions (got the licence?). Streams is not complicated in a master/slave setup, which is your case.
    You will probably go for schema-to-schema replication with one capture process, one propagation, and one apply process (see the sketch below).
    Having said that, appetite comes with eating, so expect people to request transformations on the data from source to target; this I promise you.
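    A minimal sketch of that one-capture/one-propagation/one-apply layout, assuming a hypothetical schema SCOTT, a Streams administrator strmadmin, and a database link named DST to the destination (all of these names are placeholders, not from the original post):
    -- At the source, as the Streams administrator.
    BEGIN
      -- One ANYDATA queue to stage captured changes
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.src_queue_table',
        queue_name  => 'strmadmin.src_queue');
      -- One capture process covering the whole schema
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name  => 'SCOTT',
        streams_type => 'capture',
        streams_name => 'src_capture',
        queue_name   => 'strmadmin.src_queue',
        include_dml  => true,
        include_ddl  => true);
      -- One propagation from the source queue to the destination queue
      DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
        schema_name            => 'SCOTT',
        streams_name           => 'src_propagation',
        source_queue_name      => 'strmadmin.src_queue',
        destination_queue_name => 'strmadmin.dst_queue@DST',
        include_dml            => true,
        include_ddl            => true);
    END;
    /
    -- At the destination, a matching queue plus ADD_SCHEMA_RULES with
    -- streams_type => 'apply' creates the apply process.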

  • OBIA 7.9.5 implementation steps A to Z

    Hi All,
    I am planning to implement Sales Analytics; for this I am installing OBI Apps 7.9.5 along with OBIEE 10.1.3.1.4 and Informatica PowerCenter 8.1.1 SP4.
    I have 3 available machines (Windows) with sufficient resources, and below is my environment/topology setup.
    Machine A)
    OBIEE complete + DAC clients + Informatica clients.
    Machine B)
    DAC and Informatica servers.
    Machine C)
    All databases, OLTP and OLAP.
    I will keep you all posted on this and let you know the next steps I take and the issues I face. I will also complete the setup and report back.
    Please comment on any issues you had with this type of setup.
    Thanks,
    Dev

    Thanks for this thread, man. I tried to implement it, but I tried to implement the whole thing on a single system. Why don't we use a single system for all of the above 3?
    My implementation steps:
    Windows Server 2003.
    Installed Oracle EBS 11i.
    Installed OBIEE.
    Installed Oracle BI Applications 7.9.6.1 and also installed 7.9.5.1 (the installation failed; I can get DAC but not the Analytics repository).
    Installed Informatica 8.6.1 and 7.1.1, but I was unable to connect to the Oracle 9i database that is installed with Oracle EBS; I got a problem when creating the repository service. I guess the main reason is that the Oracle database is not allowing Informatica to connect as the user I created for Informatica. Got stuck here; let us see how your implementation works. If yours works fine, I can try it.
    Thanks,
    Pratap.
    Edited by: N.V.S.Pratap on Nov 6, 2009 6:05 AM

  • Advice on implementing oracle streams on RAC 11.2 data warehouse database

    Hi,
    I would like a high-level overview of implementing one-way schema-level replication within the same database using Oracle Streams on a RAC 11.2 data warehouse database.
    Are there any points that should be kept in mind before drafting the implementation plan?
    Please share your thoughts and experiences.
    Thanks in advance
    srh


  • Creating Oracle Inventory Installation Step Does Not Finish and Hangs at 99% while installing Hyperion 11.1.1.4

    Hi Guys,
    The "Creating Oracle Inventory" installation step does not finish and hangs at 99% while installing Hyperion 11.1.1.4. I'm running the installer from a local drive, and I also noticed that the uninstaller files are not created. The installer process has been running for 4+ hours.
    Any suggestions/tips?
    Thanks
    Manoj

    Hi John,
    We left the installer running overnight, but the install process has still not completed. Also, it has not created any uninstaller files under the following directory:
    E:\APPS\Hyperion\uninstall
    What could be the reason for that?
    Thanks.
    Manoj

  • Off-Cycle Payroll implementation steps

    Hi Gurus,
    Does anybody have Off-Cycle Payroll implementation steps?
    Thanks in Advance
    Anish

    Hi
    Have a look at this link:
    off cycle payroll
    Regards,
    Sreeram

  • Implement Oracle Streams without having Primary Keys

    In our environment we can't create primary keys, so does anyone know how to implement Oracle Streams without having primary keys on the tables?
    Thanks,

    You can JOIN tables on any compatible columns or functions (like LEFT(PONumber, 10)):
    http://technet.microsoft.com/en-us/library/ms191517(v=sql.105).aspx
    BOL: "When the (ON) condition specifies columns, the columns do not have to have the same name or same data type; however, if the data types are not the same, they must be either compatible or types that SQL Server can implicitly convert. If the data types cannot be implicitly converted, the condition must explicitly convert the data type by using the CONVERT function."
    LINK:
    http://technet.microsoft.com/en-us/library/ms177634(v=sql.105).aspx
    Typically, tables are JOINed on FKs and PKs.
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
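    For Oracle Streams specifically, the apply process can be given substitute key columns for tables that lack a primary key, as the schema-replication walkthrough further down also shows. A minimal sketch, assuming a hypothetical table APP.ORDERS whose rows are uniquely identified by the two columns below:
    -- Run at the destination as the Streams administrator.
    -- Table and column names are hypothetical; the listed columns must
    -- uniquely identify a row and need supplemental logging at the source.
    BEGIN
      DBMS_APPLY_ADM.SET_KEY_COLUMNS(
        object_name => 'APP.ORDERS',
        column_list => 'order_no,line_no');
    END;
    /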

  • Oracle Streams Update conflict handler not working

    Hello,
    I've been working on Oracle Streams, and this time we have to come up with an update conflict handler.
    We are using Oracle 11g in a Solaris 10 environment.
    So far we have implemented bi-directional Oracle Streams replication, and it is working fine.
    Now, when I try to implement the update conflict handler, it executes successfully but does not fulfill the desired functionality.
    Here are the steps i performed:
    Step 1:
    create table jas23.test73 (first_name varchar2(20), last_name varchar2(20), salary number(7));
    ALTER TABLE jas23.test73 ADD (time TIMESTAMP WITH TIME ZONE);
    insert into jas23.test73 values ('gugg','qwer',2000,SYSTIMESTAMP);
    insert into jas23.test73 values ('papa','sdds',2050,SYSTIMESTAMP);
    insert into jas23.test73 values ('jaja','xzxc',2075,SYSTIMESTAMP);
    insert into jas23.test73 values ('kaka','cvdxx',2095,SYSTIMESTAMP);
    insert into jas23.test73 values ('mama','rfgy',1900,SYSTIMESTAMP);
    insert into jas23.test73 values ('tata','jaja',1950,SYSTIMESTAMP);
    commit;
    Step 2:
    Connect as strmadmin/strmadmin to server1:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    Step 3:
    SQL>
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'first_name';
      cols(2) := 'last_name';
      cols(3) := 'salary';
      cols(4) := 'time';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'jas23.test73',
        method_name       => 'MAXIMUM',
        resolution_column => 'time',
        column_list       => cols);
    END;
    /
    Step 4:
    Connect as strmadmin/strmadmin to server2:
    SQL>
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'first_name';
      cols(2) := 'last_name';
      cols(3) := 'salary';
      cols(4) := 'time';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'jas23.test73',
        method_name       => 'MAXIMUM',
        resolution_column => 'time',
        column_list       => cols);
    END;
    /
    Step 5:
    Now, if I try to update the value of salary, it is not handled by the update conflict handler.
    update jas23.test73 set salary = 1500, time = SYSTIMESTAMP where first_name = 'papa'; --server1
    update jas23.test73 set salary = 2500, time = SYSTIMESTAMP where first_name = 'papa'; --server2
    commit; --server1
    commit; --server2
    Note: the two servers are in different time zones (I hope that won't be a problem).
    Now, after performing all these steps, the data is not the same at both sites.
    Error (DBA_APPLY_ERROR):
    ORA-26787: The row with key ("FIRST_NAME", "LAST_NAME", "SALARY", "TIME") = (papa, sdds, 2000, 23-DEC-10 05.46.18.994233000 PM +00:00) does not exist in table JAS23.TEST73
    ORA-01403: no data found
    Please help.
    Thanks.
    Edited by: gags on Dec 23, 2010 12:30 PM
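    While debugging this, the failed transactions can be inspected and retried once the cause is fixed. A hedged sketch (run at the site showing the apply errors):
    -- List the queued apply errors
    SELECT apply_name, source_transaction_id, error_message
      FROM dba_apply_error;
    -- After resolving the cause (for example, rows that no longer match),
    -- retry the error transactions for all apply processes:
    BEGIN
      DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS;
    END;
    /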

    Hi,
    When I tried to do it on server 2:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    it throws an error:
    ERROR at line 1:
    ORA-32588: supplemental logging attribute all column exists
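    ORA-32588 simply means an ALL-column supplemental log group is already defined on that table, so that step can be skipped on server 2. To confirm what is already in place (a small sketch against the standard dictionary view):
    -- List supplemental log groups defined on the table
    SELECT log_group_name, log_group_type
      FROM dba_log_groups
     WHERE owner = 'JAS23'
       AND table_name = 'TEST73';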

  • Oracle Streams in 10g

    Hi All,
    I am new to Oracle Streams in 10g. Could anyone explain what Oracle Streams is and why we would use it, and point me to related documents?
    Please don't hesitate to reply.
    Thanks,
    Redro.

    You can get an overview from the link below:
    http://www.oracle.com/technology/products/dataint/htdocs/streams_fo.html
    Step by Step guide to implement streams:
    http://it.toolbox.com/blogs/oracle-guide/oracle-streams-step-by-step-17095

  • Help on Oracle streams 11g configuration

    Hi Streams experts,
    Can you please validate the following creation process steps?
    What I need Streams to do is one-way replication of the FaeterBR
    schema from one database to another. The replication shall cover both
    DML and DDL. I would also need your help on the maintenance steps,
    controls, and procedures.
    2 databases:
    1 src as the source database
    1 dst as the destination database
    Replication type: one-way, of the entire FaeterBR schema.
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size=20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src:
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    /
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_names set to true, the db link name must be the same as the
    global name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Decide which schema shall be replicated: FaeterBR is the
    schema to be replicated.
    Step 7. Add supplemental logging to the FaeterBR schema on all the
    tables?
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
    Step 8. Create Streams queues at the primary and replica databases.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    /
    ---at DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    /
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'FaeterBR',
    streams_type => 'capture',
    streams_name => 'FaeterBR_src_capture',
    queue_name => 'streams_adm.FaeterBR_src_queue',
    include_dml => true,
    include_ddl => true,
    include_tagged_lcr => false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    /
    Step 10. Instantiate the FaeterBR schema at DST by doing export/
    import. Can I use Data Pump to do that? (See the sketch after the
    reply below.)
    ---at SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---at DST:
    ---Create the FaeterBR tablespaces and user:
    create tablespace FaeterBR_ datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR_
    default tablespace FaeterBR_
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'FaeterBR',
    streams_name => 'FaeterBR_src_propagation',
    source_queue_name => 'streams_adm.FaeterBR_src_queue',
    destination_queue_name => 'streams_adm.FaeterBR_dst_queue@DST',
    include_dml => true,
    include_ddl => true,
    include_tagged_lcr => false,
    source_database => 'SRC',
    inclusion_rule => true);
    END;
    /
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'FaeterBR',
    streams_type => 'apply',
    streams_name => 'FaeterBR_dst_apply',
    queue_name => 'streams_adm.FaeterBR_dst_queue',
    include_dml => true,
    include_ddl => true,
    include_tagged_lcr => false,
    source_database => 'SRC',
    inclusion_rule => true);
    END;
    /
    Step 13. Create substitution key columns on DST for all the tables of
    the FaeterBR schema that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'FaeterBR.tb2',
    column_list => 'id1,names,toys,vendor');
    END;
    /
    Step 14. Configure conflict resolution at the replica database (DST).
    Is there an easier method that applies to the whole schema?
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id1';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name => 'FaeterBR.tb2',
    method_name => 'OVERWRITE',
    resolution_column => 'id1',
    column_list => cols);
    END;
    /
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    /
    Step 16. Enable the apply process on the replica database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_dst_apply');
    END;
    /
    Step 17. Test Streams propagation of rows from the source (SRC) to the
    replica (DST).
    At ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    commit;
    At STR10:
    connect FaeterBR/FaeterBR_
    select * from FaeterBR.tb2 where vendor = 'DR Lejetoej';
    Are there any other tests that can be made?

    Check Metalink doc 301431.1 and validate against it:
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.
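    On the Data Pump question in Step 10: yes, Data Pump can replace exp/imp for instantiation. A hedged sketch, not a definitive procedure (the directory object and file names are placeholders; cross-check against Note 301431.1 for your version):
    -- At DST, as the Streams administrator: fetch the current SCN from
    -- SRC over the Step 5 database link and record it as the
    -- instantiation SCN for the schema.
    DECLARE
      iscn NUMBER;
    BEGIN
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SRC;
      DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
        source_schema_name   => 'FaeterBR',
        source_database_name => 'SRC',
        instantiation_scn    => iscn,
        recursive            => true);
    END;
    /
    -- Then export as of that SCN at SRC and import at DST:
    -- expdp system schemas=FaeterBR flashback_scn=<iscn from above>
    --   directory=DATA_PUMP_DIR dumpfile=FaeterBR.dmp logfile=FaeterBR_exp.log
    -- impdp system schemas=FaeterBR
    --   directory=DATA_PUMP_DIR dumpfile=FaeterBR.dmp logfile=FaeterBR_imp.log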

  • Oracle Streams 'ORA-25215: user_data type and queue type do not match'

    I am trying replication between two databases (10.2.0.3) using Oracle Streams.
    I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
    The main steps are:
    1. Set up ARCHIVELOG mode.
    2. Set up the Streams administrator.
    3. Set initialization parameters.
    4. Create a database link.
    5. Set up source and destination queues.
    6. Set up supplemental logging at the source database.
    7. Configure the capture process at the source database.
    8. Configure the propagation process.
    9. Create the destination table.
    10. Grant object privileges.
    11. Set the instantiation system change number (SCN).
    12. Configure the apply process at the destination database.
    13. Start the capture and apply processes.
    For step 5, I have used SET_UP_QUEUE in the DBMS_STREAMS_ADM package. This procedure creates a queue table and an associated queue.
    The problem is that, in the propagation process, I get this error:
    'ORA-25215: user_data type and queue type do not match'
    I have checked it, and the queue table and its associated queue are created as shown:
    sys.dbms_aqadm.create_queue_table (
    queue_table => 'CAPTURE_SFQTAB'
    , queue_payload_type => 'SYS.ANYDATA'
    , sort_list => ''
    , COMMENT => ''
    , multiple_consumers => TRUE
    , message_grouping => DBMS_AQADM.TRANSACTIONAL
    , storage_clause => 'TABLESPACE STREAMSTS LOGGING'
    , compatible => '8.1'
    , primary_instance => '0'
    , secondary_instance => '0');
    sys.dbms_aqadm.create_queue(
    queue_name => 'CAPTURE_SFQ'
    , queue_table => 'CAPTURE_SFQTAB'
    , queue_type => sys.dbms_aqadm.NORMAL_QUEUE
    , max_retries => '5'
    , retry_delay => '0'
    , retention_time => '0'
    , COMMENT => '');
    The capture process is capturing changes, but it seems that these changes cannot be enqueued into the capture queue because the data type is not correct.
    As far as I know, a 'SYS.ANYDATA' payload type and the 'NORMAL_QUEUE' queue type are the right parameters for a successful configuration.
    I would be really grateful for any idea!

    Hi,
    You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it:
    DECLARE
      rc BINARY_INTEGER;
    BEGIN
      DBMS_AQADM.VERIFY_QUEUE_TYPES(
        src_queue_name  => 'np_out_onlinex',
        dest_queue_name => 'np_out_onlinex',
        destination     => 'scnp.pfa.dk',
        transformation  => 'TransformDim2JMS_001x',
        rc              => rc);
      DBMS_OUTPUT.PUT_LINE('Compatible: ' || rc);
    END;
    /
    If you don't have transformations and/or a remote destination, then delete those parameters.
    Check the table SYS.AQ$_MESSAGE_TYPES; there you can see what has been verified or not.
    regards
    Mette
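    It can also help to confirm what payload type each queue table was actually created with. A small sketch against the standard dictionary view (adjust the owner and name to your setup):
    -- Show the payload type of the capture queue table
    SELECT owner, queue_table, object_type
      FROM dba_queue_tables
     WHERE queue_table = 'CAPTURE_SFQTAB';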

  • BLOB in Oracle Streams

    Oracle 10.2.0.4:
    I am new to Oracle Streams and just reading the docs at this point. I read in http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf that BLOBs are not supported by Streams. I am just looking for a basic Streams configuration with some rule processing that will send LCRs from a source queue to a destination queue. As I understand it, I can do that by using an ANYDATA payload.
    We have some tables with BLOB data.

    It's all a balancing act. If you absolutely need both data centers processing transactions simultaneously, you'll need Streams.
    Lets start with the simplest possible case of this, two data centers A and B, with databases 1 and 2. Database 1 is in data center A, database 2 is in data center B. If database 1 fails, would you be able to shift traffic to database 2 relatively easily? Assuming that you're building in functionality to shift load between databases, which is normally the case when you're building this sort of distributed application, it may be easier to do this sort of shift regardless of the reason that database 1 fails.
    If you have a standby database in each data center (1A as the standby for database 1, 2A as the standby for database 2), when 1 fails, you have to figure out whether whatever caused 1 to fail will also cause 1A to fail. If data center A is having connectivity or power issues, for example, you would have to shift traffic to 2 rather than failing 1 over to 1A. On the other hand, if it was an isolated server failure, you could either shift traffic to 2 or fail over to 1A. There is some risk that having a more complex failure scenario makes it more likely that someone makes a mistake-- there will be a number of failover steps that you'd do only if you're failing from 1 to 1A and other steps that you'd do if you were shifting traffic from 1 to 2 and some steps that are common-- and makes it more difficult to fully test all the scenarios. On the other hand, there may well be benefits to having more options to respond to different sorts of failures. And politics/ reporting structure as well as geography plays a role here-- if the data centers are on different continents, shifting traffic is probably much less desirable than if you have two US data centers.
    If, rather than having standbys 1A and 2A, database 1 and 2 were really multi-node RAC clusters, both database 1 and database 2 would be able to survive most sorts of localized hardware failure (i.e. one node can fail on database 1 without affecting whether database 1 is up and processing transactions). If there was a data center wide failure, you'd still have to shift traffic. But one server dying in a pile wouldn't be an issue. Of course, there would be a handful of events that could take down the entire RAC cluster without affecting the data center where a standby could potentially be used (i.e. the SAN for the cluster fails but the standby is using a different SAN). Those may not be particularly likely, however, so it may make sense not to bother with contingency planning for them and just assume that anything that knocks out all the nodes of the cluster forces traffic to be shifted to 2 and that it wouldn't be worth trying to maintain a standby for those scenarios.
    There are lots of trade-offs here. You have simplicity of setup, you have simplicity of failover, you have robustness, etc. And there are going to be cases where you realistically need to take a stab at predicting how likely various events are, which gets pretty deeply into hardware, setup, and politics (i.e. how likely a server is to fail depends on whether you've bought a high-end server with doubly-redundant-everything or a commodity Linux box, how likely a data center is to fail depends on the data center's redundancy measures and your level of confidence in those measures, etc.)
    Justin

  • Oracle stream - first_scn and start_scn

    Hi,
    My first_scn is 7669917207423 and my start_scn is 7669991182403 in the DBA_CAPTURE view.
    Once I start the capture, from which SCN will it start capturing from the archived logs?
    Regards,

    I am using Oracle Streams on version 10.2.0.4. It's an Oracle downstream capture setup; the capture as well as the apply runs on the target database.
    Regards,
    Below is the setup doc.
    1.1 Create the Streams queue
    conn STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'NIG_Q_TABLE',
    queue_name => 'NIG_Q',
    queue_user => 'STRMADMIN');
    END;
    /
    1.2 Create the apply process for the schema
    BEGIN
    DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name => 'NIG_Q',
    apply_name => 'NIG_APPLY',
    apply_captured => TRUE);
    END;
    /
    1.3 Setting up parameters for Apply
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'disable_on_error','n');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'parallelism','6');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_dynamic_stmts','Y');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_hash_table_size','1000000');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_TXN_BUFFER_SIZE',10);
    /********** STEP 2.- Downstream capture process *****************/
    2.1 Create the downstream capture process
    BEGIN
    DBMS_CAPTURE_ADM.CREATE_CAPTURE (
    queue_name => 'NIG_Q',
    capture_name => 'NIG_CAPTURE',
    rule_set_name => null,
    start_scn => null,
    source_database => 'PNID.LOUDCLOUD.COM',
    use_database_link => true,
    first_scn => null,
    logfile_assignment => 'IMPLICIT');
    END;
    /
    2.2 Setting up parameters for Capture
    exec DBMS_CAPTURE_ADM.ALTER_CAPTURE (capture_name=>'NIG_CAPTURE',checkpoint_retention_time=> 2);
    exec DBMS_CAPTURE_ADM.SET_PARAMETER ('NIG_CAPTURE','_SGA_SIZE','250');
    2.3 Add the table-level rule for capture
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'NIG.BUILD_VIEWS',
    streams_type => 'CAPTURE',
    streams_name => 'NIG_CAPTURE',
    queue_name => 'STRMADMIN.NIG_Q',
    include_dml => true,
    include_ddl => true,
    source_database => 'PNID.LOUDCLOUD.COM');
    END;
    /
    /**** Step 3 : Initializing SCN on Downstream database—start from here *************/
    import
    =================
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part1_srm_expdp_%U.dmp table_exists_action=replace exclude=grant,statistics,ref_constraint logfile=NIG1.log status=300
    /********** STEP 4.- Start the Apply process ********************/
    sqlplus STRMADMIN
    exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY');
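    On the original question: as I read the Streams docs, first_scn is the lowest SCN from which the capture process can capture changes (archived logs back to it must stay available for the LogMiner dictionary), while start_scn is the SCN from which a newly started capture actually begins capturing. So with the values above, capture should begin at start_scn 7669991182403, though it may still read redo from first_scn onward. The relevant columns can be watched with:
    SELECT capture_name, first_scn, start_scn,
           required_checkpoint_scn, captured_scn, applied_scn
      FROM dba_capture;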

  • Oracle Streams vs. Updatable Materialized View

    Does anyone have an idea in which cases Oracle Streams is better than updatable MVs, or vice versa?

    Are you really talking about Updatable Materialized Views? Or multi-master replication? Personally, I'm rather hard-pressed to come up with a situation where updatable materialized views would be useful unless you're taking the next step and doing multi-master replication.
    In general, Streams is going to put less load on the source system than materialized views and is going to replicate data more quickly. The downside tends to be that it's a relatively new technology, so it's not appropriate for environments that have older versions of Oracle. Going along with that, you'll find a lot more people/ organizations/ setups using materialized views than Streams, which can be a good thing if you need to hire new staff/ get support from a local user group/ etc. Streams also tends to be more flexible, which can be a good thing, but also tends to make things a bit more complicated.
    If you can outline the particular problem you're trying to solve, we can probably be a lot more specific...
    Justin

  • Oracle Streams - First Load

    Hi,
    I have an Oracle Streams environment working well; I replicate and transform the data.
    My problem is:
    Initially I have a source database with 3 million records and a destination database with no records.
    I have to equalize the source and destination databases before starting to synchronize them.
    Do you know how I can replicate (and transform) this data for my first load?
    It's not only copying all the data; it's copying and transforming it.
    Is it possible to use the same transformation process for this first load?
    If I didn't have to transform the data, I would use the Data Pump tool (for example). But I have to transform the data for my destination database.
    Thanks

    I am in DAC and trying to run the Informatica ETL for one of the prebuilt execution plans (HR - Oracle R12). I built the project plan and ran it. I got a Failed status for all of the ETL steps (starting from the first one, 'Load Row into Run Table'). I have attached the error log for Load Row into Run Table below.
    I took a closer look at all the steps, and it seems they all have this common "fail parent if this task fails" error message.
    Error log for
    pmcmd startworkflow -u Administrator -p **** -s SOBI:4006 -f SILOS -lpf C:\Informatica\PowerCenter8.1.1\server\infa_shared\SrcFiles\SILOS.SIL_InsertRowInRunTable.txt SIL_InsertRowInRunTable
    Status Desc : Failed
    WorkFlowMessage :
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.1.1 SP5], build [186.0822], Windows 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Thu May 07 14:46:04 2009
    Connected to Integration Service at [SOBI:4006]
    Folder: [SILOS]
    Workflow: [SIL_InsertRowInRunTable] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SIL_InsertRowInRunTable] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SIL_InsertRowInRunTable] will be failed.]
    Start time: [Thu May 07 14:45:43 2009]
    End time: [Thu May 07 14:45:47 2009]
    Workflow log file: [C:\Informatica\PowerCenter8.1.1\server\infa_shared\WorkflowLogs\SIL_InsertRowInRunTable.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Integration Service: [Oracle_BI_DW_Base_Integration_Service]
    Disconnecting from Integration Service
    Completed at Thu May 07 14:46:04 2009
    =====================================
    ERROR OUTPUT
    =====================================
    Error Message : Unknown reason for error code 36331
    ErrorCode : 36331
    If you have any input on how to fix this issue, please let me know.
