Complex Oracle Streams issue - Update conflicts

This is for Oracle Streams replication on 11g R2.
I am facing update conflicts in a table. The conflicts arise from both technical and business-logic issues. The business-logic conflicts would pass through the replication/apply process successfully, but for our requirements we want to intercept and resolve them before they are applied. These are typically somewhat complex cases, so we are exploring the possibility of having both DML handlers and error handlers: the DML handler would take care of the business-logic conflicts, and the error handler would take care of the technical issues before Streams pushes them to the error queue. Based on our understanding and verification, we found a limitation: a procedure DML handler and an error handler cannot be configured together for the same table operation.
Statement DML handlers cannot be used for our conflict scenarios.
Following are my questions:
1. Has anyone implemented or faced such a scenario in a real production application? If yes, can you please share some insights or inputs?
2. Is there a custom way to handle this complex problem of configuring both a DML handler and an error handler? (See the sketch below.)
3. Is there any alternative way to resolve this situation in an Oracle Streams environment with other handlers?
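For illustration, a minimal sketch of the kind of combined handler we are considering (the procedure, schema, and table names here are made up, not from our system): a single procedure DML handler that applies the business-logic checks itself and traps technical errors around lcr.execute(), so one procedure covers both roles. Discarding an LCR then simply means returning without calling EXECUTE.

CREATE OR REPLACE PROCEDURE strmadmin.combined_dml_handler(in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);                -- unpack the row LCR
  -- business-logic conflicts: inspect/adjust the LCR here, e.g.
  -- lcr.SET_VALUE('NEW', 'SALARY', ANYDATA.CONVERTNUMBER(1000));
  -- to discard the change entirely, RETURN without calling EXECUTE
  lcr.EXECUTE(TRUE);                          -- TRUE => configured conflict handlers still run
EXCEPTION
  WHEN OTHERS THEN
    NULL;  -- technical issue: log it to a custom table here rather than
           -- letting Streams move the transaction to the error queue
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'app.some_table',       -- hypothetical table
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,
    user_procedure => 'strmadmin.combined_dml_handler');
END;
/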

Dear All
I too have a similar requirement. Could anyone help with this one?
We can handle the erroring transactions via error handler procedures.
But we cannot configure a DML handler procedure for transactions that are successfully replicated; Streams does not allow us to configure a handler for this. Is there any other handler / procedure / hook in Streams where we can implement the desired functionality? This includes changing the values in the LCR before invoking lcr.execute(), and we should also be able to discard the LCR if required.
Regards
Velmurugan

Similar Messages

  • Oracle Streams Update conflict handler not working

    Hello,
    I've been working with Oracle Streams, and this time we have to come up with an update conflict handler.
    We are using Oracle 11g on a Solaris 10 environment.
    So far we have implemented bi-directional Oracle Streams replication, and it is working fine.
    Now, when I try to implement the update conflict handler, it executes successfully but does not fulfill the desired functionality.
    Here are the steps I performed:
    Step 1:
    create table jas23.test73 (first_name varchar2(20), last_name varchar2(20), salary number(7));
    ALTER TABLE jas23.test73 ADD (time TIMESTAMP WITH TIME ZONE);
    insert into jas23.test73 values ('gugg','qwer',2000,SYSTIMESTAMP);
    insert into jas23.test73 values ('papa','sdds',2050,SYSTIMESTAMP);
    insert into jas23.test73 values ('jaja','xzxc',2075,SYSTIMESTAMP);
    insert into jas23.test73 values ('kaka','cvdxx',2095,SYSTIMESTAMP);
    insert into jas23.test73 values ('mama','rfgy',1900,SYSTIMESTAMP);
    insert into jas23.test73 values ('tata','jaja',1950,SYSTIMESTAMP);
    commit;
    Step 2:
    Connect as strmadmin/strmadmin to server1:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    Step 3:
    SQL>
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'first_name';
      cols(2) := 'last_name';
      cols(3) := 'salary';
      cols(4) := 'time';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'jas23.test73',
        method_name       => 'MAXIMUM',
        resolution_column => 'time',
        column_list       => cols);
    END;
    /
    Step 4:
    Connect as strmadmin/strmadmin to server2:
    SQL>
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'first_name';
      cols(2) := 'last_name';
      cols(3) := 'salary';
      cols(4) := 'time';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'jas23.test73',
        method_name       => 'MAXIMUM',
        resolution_column => 'time',
        column_list       => cols);
    END;
    /
    Step 5:
    Now, if I try to update the value of salary, it is not handled by the update conflict handler.
    update jas23.test73 set salary = 1500,time=SYSTIMESTAMP where first_name='papa'; --server1
    update jas23.test73 set salary = 2500,time=SYSTIMESTAMP where first_name='papa'; --server2
    commit; --server1
    commit; --server2
    Note: the two servers are in different time zones (I hope that won't be a problem).
    Now, after performing all these steps, the data is not the same at both sites.
    Error (DBA_APPLY_ERROR):
    ORA-26787: The row with key ("FIRST_NAME", "LAST_NAME", "SALARY", "TIME") = (papa, sdds, 2000, 23-DEC-10 05.46.18.994233000 PM +00:00) does not exist in table JAS23.TEST73
    ORA-01403: no data found
    Please help.
    Thanks.

    Hi,
    When I tried to do it on server2:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    it throws an error:
    Error -
    ERROR at line 1:
    ORA-32588: supplemental logging attribute all column exists
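    A tentative diagnosis: ORA-32588 just means supplemental logging was already enabled on server2, and is harmless. The ORA-26787 message is more telling: the key lists every column ("FIRST_NAME", "LAST_NAME", "SALARY", "TIME"), which is what Streams falls back to when a table has no primary key, so once salary has changed on one side the apply process can no longer locate the row, and the MAXIMUM handler never gets a chance to fire. If that is the cause, declaring a primary key or substitute key columns should help; a hedged sketch (assuming first_name/last_name identify a row uniquely, which they may not in your data):

    BEGIN
      DBMS_APPLY_ADM.SET_KEY_COLUMNS(
        object_name => 'jas23.test73',
        column_list => 'first_name,last_name');
    END;
    /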

  • Oracle Streams vs. Updatable Materialized View

    Does anyone have an idea in which cases Oracle Streams is better than an updatable MV, or vice versa?

    Are you really talking about updatable materialized views? Or multi-master replication? Personally, I'm rather hard-pressed to come up with a situation where updatable materialized views would be useful unless you're taking the next step and doing multi-master replication.
    In general, Streams is going to put less load on the source system than materialized views and is going to replicate data more quickly. The downside tends to be that it's a relatively new technology, so it's not appropriate for environments that have older versions of Oracle. Going along with that, you'll find a lot more people/organizations/setups using materialized views than Streams, which can be a good thing if you need to hire new staff, get support from a local user group, etc. Streams also tends to be more flexible, which can be a good thing, but that also tends to make things a bit more complicated.
    If you can outline the particular problem you're trying to solve, we can probably be a lot more specific...
    Justin

  • Oracle Replication update conflict handler creating notifications

    Hi All,
    I want to use an update conflict handler that overwrites the records in the destination database while notifying the DBA about the conflict by writing an entry to a log (the alert log or a custom log).
    I know that I can use the prebuilt OVERWRITE update conflict handler to overwrite the conflicting record. As far as I know, there is no way to create a log entry using the prebuilt conflict handlers.
    Is there a way to create a log entry via the prebuilt conflict handlers?
    Otherwise, I could use PL/SQL to create the log entry, and a PL/SQL procedure can be used as a conflict handler.
    But how can I create a PL/SQL overwrite conflict handler?
    Thank you,
    Sasika.

    You must create your own update conflict handler.
    A conflict handler is a PL/SQL procedure (usually in a package). For a given conflict type, the parameters this procedure must have are predefined.
    And in the procedure you simply put the logic you need:
    1/ write a log entry
    2/ update (overwrite) the data in the table
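    A rough sketch of that shape, with every name hypothetical (a log table strmadmin.conflict_log, a table app.t that already has the prebuilt OVERWRITE handler configured): an error handler that writes the log entry, then re-executes the LCR with conflict resolution enabled so the OVERWRITE handler does the actual overwrite.

    CREATE OR REPLACE PACKAGE strmadmin.conflict_pkg AS
      TYPE emsg_array IS TABLE OF VARCHAR2(2000) INDEX BY BINARY_INTEGER;
      PROCEDURE log_and_overwrite(
        message           IN ANYDATA,
        error_stack_depth IN NUMBER,
        error_numbers     IN DBMS_UTILITY.NUMBER_ARRAY,
        error_messages    IN emsg_array);
    END conflict_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY strmadmin.conflict_pkg AS
      PROCEDURE log_and_overwrite(
        message           IN ANYDATA,
        error_stack_depth IN NUMBER,
        error_numbers     IN DBMS_UTILITY.NUMBER_ARRAY,
        error_messages    IN emsg_array) IS
        lcr SYS.LCR$_ROW_RECORD;
        rc  PLS_INTEGER;
      BEGIN
        rc := message.GETOBJECT(lcr);
        -- notify the DBA: record the conflict in a custom log table
        INSERT INTO strmadmin.conflict_log
        VALUES (SYSTIMESTAMP, lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(),
                error_numbers(1), error_messages(1));
        -- re-apply with conflict resolution enabled; the prebuilt OVERWRITE
        -- handler configured for the table then overwrites the row
        lcr.EXECUTE(TRUE);
      END log_and_overwrite;
    END conflict_pkg;
    /

    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'app.t',              -- hypothetical table
        object_type    => 'TABLE',
        operation_name => 'UPDATE',
        error_handler  => TRUE,                 -- register as an error handler
        user_procedure => 'strmadmin.conflict_pkg.log_and_overwrite');
    END;
    /

    If the log entry must survive even when the re-execution fails, make the insert an autonomous transaction.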

  • BLOB in Oracle Streams

    Oracle 10.2.0.4:
    I am new to Oracle Streams and just reading the docs at this point. I read in http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf that BLOBs are not supported by Streams. I am just looking for a basic Streams configuration with some rule processing that will send LCRs from a source queue to a destination queue. As I understand it, I can do that by using an ANYDATA payload.
    We have some tables with BLOB data.

    It's all a balancing act. If you absolutely need both data centers processing transactions simultaneously, you'll need Streams.
    Let's start with the simplest possible case of this: two data centers A and B, with databases 1 and 2. Database 1 is in data center A, database 2 is in data center B. If database 1 fails, would you be able to shift traffic to database 2 relatively easily? Assuming that you're building in functionality to shift load between databases, which is normally the case when you're building this sort of distributed application, it may be easier to do this sort of shift regardless of the reason that database 1 fails.
    If you have a standby database in each data center (1A as the standby for database 1, 2A as the standby for database 2), when 1 fails, you have to figure out whether whatever caused 1 to fail will also cause 1A to fail. If data center A is having connectivity or power issues, for example, you would have to shift traffic to 2 rather than failing 1 over to 1A. On the other hand, if it was an isolated server failure, you could either shift traffic to 2 or fail over to 1A. There is some risk that having a more complex failure scenario makes it more likely that someone makes a mistake-- there will be a number of failover steps that you'd do only if you're failing from 1 to 1A and other steps that you'd do if you were shifting traffic from 1 to 2 and some steps that are common-- and makes it more difficult to fully test all the scenarios. On the other hand, there may well be benefits to having more options to respond to different sorts of failures. And politics/ reporting structure as well as geography plays a role here-- if the data centers are on different continents, shifting traffic is probably much less desirable than if you have two US data centers.
    If, rather than having standbys 1A and 2A, database 1 and 2 were really multi-node RAC clusters, both database 1 and database 2 would be able to survive most sorts of localized hardware failure (i.e. one node can fail on database 1 without affecting whether database 1 is up and processing transactions). If there was a data center wide failure, you'd still have to shift traffic. But one server dying in a pile wouldn't be an issue. Of course, there would be a handful of events that could take down the entire RAC cluster without affecting the data center where a standby could potentially be used (i.e. the SAN for the cluster fails but the standby is using a different SAN). Those may not be particularly likely, however, so it may make sense not to bother with contingency planning for them and just assume that anything that knocks out all the nodes of the cluster forces traffic to be shifted to 2 and that it wouldn't be worth trying to maintain a standby for those scenarios.
    There are lots of trade-offs here. You have simplicity of setup, you have simplicity of failover, you have robustness, etc. And there are going to be cases where you realistically need to take a stab at predicting how likely various events are, which gets pretty deeply into hardware, setup, and politics (i.e. how likely a server is to fail depends on whether you've bought a high-end server with doubly-redundant-everything or a commodity Linux box, how likely a data center is to fail depends on the data center's redundancy measures and your level of confidence in those measures, etc.)
    Justin

  • 10g streams and resolving conflicts

    Hello
    I configured Oracle Streams on 2 databases using the wizards in Enterprise Manager (10gR2); the aim is to set up multi-master replication quickly and easily.
    If I do this, how do I resolve conflicts, or are they resolved automatically? I read that you can call
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
      object_name       => 'hr.locations',
      method_name       => 'MAXIMUM',
      resolution_column => 'time',
      column_list       => cols);
    but this is not inserted into the scripts generated by Enterprise Manager. Is there a default conflict policy, or can I configure this globally?
    Also, I noticed that from time to time my servers stop replicating, and the flow gets paused with an enqueue error. Is this just a resource issue?
    I am currently limited, as I am trying this out with VM images.
    I am willing to upgrade to 11g if the Streams functionality is better.
    Thanks

    You are seriously confusing totally different technologies.
    The name of this forum is Advanced Queueing.
    Your question appears to be about Streams.
    Then you ask about multi-master replication, which sounds like Advanced Replication.
    What are you actually doing, and with what technology are you doing it?
    I have no experience doing this with EM, only hand-coded scripts via SQL*Plus, so I cannot comment on what EM is or is not doing.
    But without a clear understanding of what conflicts are possible in your design, it is impossible to advise you further. A well-designed system should make conflicts, if not impossible, extremely rare and narrowly focused.

  • Oracle stream not working as Logminer is down

    Hi,
    The Oracle Streams capture process is not capturing any updates made on the table for which the capture and apply processes are configured.
    The capture process and apply process are running fine, showing ENABLED as their status and no errors. But no new records are captured in 'streams_queue_table' when I update a record in the table that is configured for capturing changes.
    This setup was working until I got 'ORA-01341: LogMiner out-of-memory' in the alert.log file. I guess LogMiner is no longer capturing the updates from the redo log.
    Current Alert log is showing following lines for logminer init process
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
    LOGMINER: Memory Size = 10M, Checkpoint interval = 10M
    But same log was like this before
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
    LOGMINER: Memory Size = 10M, Checkpoint interval = 10M
    LOGMINER: session# = 1, reader process P002 started with pid=18 OS id=5812
    LOGMINER: session# = 1, builder process P003 started with pid=36 OS id=3304
    LOGMINER: session# = 1, preparer process P004 started with pid=37 OS id=1496
    We can clearly see that the reader, builder and preparer processes are not starting after I got the out-of-memory exception in LogMiner.
    To give LogMiner more space, I tried to set a tablespace for it, but I got two exceptions that contradict each other.
    SQL> exec DBMS_LOGMNR.END_LOGMNR();
    BEGIN DBMS_LOGMNR.END_LOGMNR(); END;
    ERROR at line 1:
    ORA-01307: no LogMiner session is currently active
    ORA-06512: at "SYS.DBMS_LOGMNR", line 76
    ORA-06512: at line 1
    SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts');
    BEGIN DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts'); END;
    ERROR at line 1:
    ORA-01356: active logminer sessions found
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 232
    ORA-06512: at line 1
    When I tried stopping LogMiner, the exception was 'no LogMiner session is currently active'; but when I tried to set the tablespace, the exception was 'active logminer sessions found'. I am not sure how to resolve this.
    Please let me know how to resolve this issue.
    Thanks
    siva

    The Logminer session associated with a capture process is a special kind of session which is called a "persistent session". You will not be able to stop it using DBMS_LOGMNR. This package controls only non-persistent sessions.
    To stop the persistent LogMiner session you must stop the capture process.
    However, I think your problem is related to a lack of RAM rather than tablespace (i.e., disk) space. Try to increase the size of the SGA allocated to LogMiner by setting the capture parameter _SGA_SIZE. I can see you are using the default of 10M, which may not be enough in your case. Of course, you will have to increase the values of the init parameters streams_pool_size and sga_target/sga_max_size accordingly, to avoid other memory problems.
    To set the _SGA_SIZE parameter, use the PL/SQL procedure DBMS_CAPTURE_ADM.SET_PARAMETER. The example below would set it to 100 MB:
    begin
      DBMS_CAPTURE_ADM.SET_PARAMETER(
        capture_name => '<name of capture process>',
        parameter    => '_SGA_SIZE',
        value        => '100');
    end;
    /
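    Since the persistent LogMiner session only goes away with the capture process, bounce the capture around the change; a sketch (capture name illustrative):

    BEGIN
      DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => '<name of capture process>');
    END;
    /
    -- run the SET_PARAMETER call above, then:
    BEGIN
      DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => '<name of capture process>');
    END;
    /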
    I hope this helps.
    Ilidio.

  • Oracle Streams and Oracle Apps 11i

    Hi,
    I am looking for an Oracle solution to build a reporting instance off an E-Business Suite 11i instance and offload Discoverer reporting to it. I have tried a Data Guard logical standby, but there were too many issues and I cannot use it. I am wondering if anyone has tried using Oracle Streams to build a reporting instance from an Oracle Apps instance?
    Thanks

    Hi,
    Is Streams supported in an E-Business Suite database? We once had a scenario where some parameters for Streams conflicted with parameters required for E-Business Suite performance improvement. I could not find any documents from Oracle verifying whether Streams on an E-Business Suite database is a supported configuration or not.
    So if you have any such document, could you send the link to me?
    Also please provide the link to your white paper/presentation.
    Regards,
    Sujoy

  • Unable to create web app with PowerShell - An update conflict has occurred

    I am trying to create a new Web Application using PowerShell. I need to use PowerShell because I need to create the Web App using classic authentication and not claims authentication. The first time I created the Web App, everything went great. Ever since that first time, I get an error and the web app creation does not complete. Just as an FYI, I can create a new Web App using Central Admin, but that creates it with claims authentication, which I cannot use.
    Here is my PowerShell script
    New-SPWebApplication -Name "SharePoint2013 (82)" -ApplicationPool "SharePoint2013-82" -AuthenticationMethod "NTLM" -ApplicationPoolAccount (Get-SPManagedAccount "GLOBAL\SP_AppPool") -Port 82 -URL "http://sharepoint2013"
    Here is the error I get in PowerShell
    New-SPWebApplication : An update conflict has occurred, and you must re-try this action. The object SPWebApplication
    Name=SharePoint2013 (82) was updated by DOMAIN\SP_Administrator, in the powershell (13020) process, on machine SPWEB01.
     View the tracing log for more information about the conflict.
    At line:1 char:1
    + New-SPWebApplication -Name "SharePoint2013 (82)" -ApplicationPool "SharePoint201 ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : InvalidData: (Microsoft.Share...PWebApplication:SPCmdletNewSPWebApplication) [New-SPWebApplication], SPUpdatedConcurrencyException
        + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPWebApplication
    Unknown SQL Exception 547 occurred. Additional error information from SQL Server is included below.
    The DELETE statement conflicted with the REFERENCE constraint "FK_Dependencies1_Objects". The conflict occurred in database "SP_Config", table "dbo.Dependencies", column 'ObjectId'.
    The statement has been terminated.
    I also get  this error from the trace
    Unknown SQL Exception 547 occurred. Additional error information from SQL Server is included below.
    The DELETE statement conflicted with the REFERENCE constraint "FK_Dependencies1_Objects". The conflict occurred in database "SP_Config", table "dbo.Dependencies", column 'ObjectId'.
    The statement has been terminated.
    I have tried to resolve the issue by performing the following:
    • Deleting the partially created web app using Central Admin or PowerShell.
    • Clearing the file system cache on all servers in the farm (stopping the WSS timer service; saving a copy of the cache.ini file from the C:\Documents and Settings\All Users\Application Data\Microsoft\SharePoint\Config folder; clearing all the .xml files from that location except cache.ini, but editing cache.ini to contain the value 1; restarting the timer service).
    • Running the Products Configuration Wizard (no detach, just 'upgrade'). People have said to select the option to disconnect the server from the server farm and then revert at the next step (so you do not actually disconnect), but there is no such next step that I have encountered: you select 'disconnect', hit next, and the server disconnects.
    • Restarting all the servers in the farm.
    • Running the PowerShell script again... and still getting the same error.
    I have tried it on multiple servers in the farm too.

    Hi Michael,
    I found a similar thread for you reference:
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/9ccef7bf-87a9-4849-b086-4db2d898f1d7/cannot-create-new-web-application-wss-30?forum=sharepointadminlegacy 
    If that doesn't work: sometimes the issue is caused by keeping PowerShell open for a long time, which can leave stale cache files behind. So please try re-opening PowerShell and test the issue again.
    http://blogs.msdn.com/b/ronalg/archive/2011/06/24/mount-spcontentdatabase-errors-on-2nd-and-subsequent-attempts-of-same-database.aspx?Redirected=true 
    http://www.oakwoodinsights.com/sharepoint-powershell-surprise/
    In addition, here is an article which explains the possible reason of the SQL error:
    http://technet.microsoft.com/en-us/library/ee513056(v=office.14).aspx
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Help on Oracle streams 11g configuration

    Hi Streams experts,
    Can you please validate the following creation process steps?
    What I need Streams to do is one-way replication of the FaeterBR schema from one database to another. The replication shall cover both DML and DDL.
    I would also need your help on the maintenance steps, controls and procedures.
    2 databases:
    1 src as the source database
    1 dst as the destination database
    Replication type: one-way, of the entire FaeterBR schema.
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src:
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to streams_adm;
    SQL> BEGIN
      DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
        grantee          => 'streams_adm',
        grant_privileges => true);
    END;
    /
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for the streams_adm user at both SRC and DST. With the init parameter global_names set to TRUE, the db link name must be the same as the global_name of the database you are connecting to. Use a SELECT from the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control which schema shall be replicated: FaeterBR is the schema to be replicated.
    Step 7. Add supplemental logging to all the tables of the FaeterBR schema:
    SQL> alter table FaeterBR.tb1 add supplemental log data (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data (ALL) columns;
    etc...
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'streams_adm.FaeterBR_src_queue_table',
        queue_name  => 'streams_adm.FaeterBR_src_queue');
    END;
    /
    ---at DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'streams_adm.FaeterBR_dst_queue_table',
        queue_name  => 'streams_adm.FaeterBR_dst_queue');
    END;
    /
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name        => 'FaeterBR',
        streams_type       => 'capture',
        streams_name       => 'FaeterBR_src_capture',
        queue_name         => 'FaeterBR_src_queue',
        include_dml        => true,
        include_ddl        => true,
        include_tagged_lcr => false,
        source_database    => NULL,
        inclusion_rule     => true);
    END;
    /
    Step 10. Instantiate the FaeterBR schema at DST by doing export/import. Can I use Data Pump for this instead? (See the sketch after this step.)
    ---at SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log object_consistent=y owner=FaeterBR
    ---at DST:
    ---create the FaeterBR tablespaces and user:
    create tablespace FaeterBR_data datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR
    default tablespace FaeterBR_data
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
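    A hedged Data Pump sketch of the same instantiation (directory object dp_dir and credentials are illustrative): export at a known SCN so the instantiation point is consistent, then check DBA_APPLY_INSTANTIATED_OBJECTS after the import.

    ---at SRC:
    SQL> select DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER from dual;
    expdp system/*** SCHEMAS=FaeterBR FLASHBACK_SCN=<scn from above> DIRECTORY=dp_dir DUMPFILE=FaeterBR.dmp LOGFILE=FaeterBR_exp.log
    ---at DST:
    impdp system/*** SCHEMAS=FaeterBR DIRECTORY=dp_dir DUMPFILE=FaeterBR.dmp LOGFILE=FaeterBR_imp.log
    SQL> select source_object_owner, source_object_name, instantiation_scn
         from DBA_APPLY_INSTANTIATED_OBJECTS;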
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
        schema_name            => 'FaeterBR',
        streams_name           => 'FaeterBR_src_propagation',
        source_queue_name      => 'streams_adm.FaeterBR_src_queue',
        destination_queue_name => 'streams_adm.FaeterBR_dst_queue@DST',
        include_dml            => true,
        include_ddl            => true,
        include_tagged_lcr     => false,
        source_database        => 'SRC',
        inclusion_rule         => true);
    END;
    /
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name        => 'FaeterBR',
        streams_type       => 'apply',
        streams_name       => 'FaeterBR_dst_apply',
        queue_name         => 'FaeterBR_dst_queue',
        include_dml        => true,
        include_ddl        => true,
        include_tagged_lcr => false,
        source_database    => 'SRC',
        inclusion_rule     => true);
    END;
    /
    Step 13. Create substitution key columns on DST for all the tables of the FaeterBR schema that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
      DBMS_APPLY_ADM.SET_KEY_COLUMNS(
        object_name => 'FaeterBR.tb2',
        column_list => 'id1,names,toys,vendor');
    END;
    /
    Step 14. Configure conflict resolution at the replica database (DST).
    Is there an easier method applicable to the whole schema?
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'id';
      cols(2) := 'names';
      cols(3) := 'toys';
      cols(4) := 'vendor';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'FaeterBR.tb2',
        method_name       => 'OVERWRITE',
        resolution_column => 'id',
        column_list       => cols);
    END;
    /
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
      DBMS_CAPTURE_ADM.START_CAPTURE(
        capture_name => 'FaeterBR_src_capture');
    END;
    /
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
      DBMS_APPLY_ADM.START_APPLY(
        apply_name => 'FaeterBR_dst_apply');
    END;
    /
    Step 17. Test Streams propagation of rows from the source (SRC) to the replica (DST).
    At ORCL:
    insert into FaeterBR.tb2 values (31000, 'BAMSE', 'DR', 'DR Lejetoej');
    commit;
    At STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor = 'DR Lejetoej';
    Any other test that can be made?
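    For verification, a few standard monitoring queries could round out this step (process names as configured above):

    SQL> select capture_name, state, total_messages_captured from V$STREAMS_CAPTURE;
    SQL> select propagation_name, status from DBA_PROPAGATION;
    SQL> select apply_name, status from DBA_APPLY;
    SQL> select apply_name, local_transaction_id, error_message from DBA_APPLY_ERROR;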

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.
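    For comparison, DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS can generate most of steps 8-16 (queues, capture, propagation, apply and instantiation) in a single call; a hedged sketch, where the directory objects SRC_DIR and DST_DIR are assumptions:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
        schema_names                 => 'FaeterBR',
        source_directory_object      => 'SRC_DIR',
        destination_directory_object => 'DST_DIR',
        source_database              => 'SRC',
        destination_database         => 'DST',
        perform_actions              => TRUE,
        bi_directional               => FALSE,
        include_ddl                  => TRUE,
        instantiation                => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA);
    END;
    /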

  • Update conflict resoltion ORA-01403: no data found

    I have set up a multi-master replication environment with two databases, and I have implemented update conflict resolution on a table using the DISCARD method (Oracle-provided), as below.
    Somehow it is not able to resolve the conflict, and I am getting the error ORA-01403: no data found.
    On the MDS (M1) I run the following SQL:
    update menu_code
    set ipp_uid = 20
    where id = 4;
    commit;
    On the master (M2) database, the row with id = 4 doesn't exist in table menu_code.
    When I apply the deferred transaction generated by the above SQL,
    I get ORA-01403: no data found, and of course the transaction doesn't apply to the DB and goes to deferror. Since I am using the DISCARD method,
    it should be resolved and not generate an error message.
    Here is detail info.
    Table name: menu_code
    --creating the column group
    BEGIN
      DBMS_REPCAT.MAKE_COLUMN_GROUP(
        sname                => 'SYNAPSE',
        oname                => 'MENU_CODE',
        column_group         => 'MENU_CODE_CG1',
        list_of_column_names => 'id,ipp_uid');
    END;
    /
    --adding update conflict resolution
    BEGIN
      DBMS_REPCAT.ADD_UPDATE_RESOLUTION(
        sname                 => 'SYNAPSE',
        oname                 => 'MENU_CODE',
        column_group          => 'MENU_CODE_CG1',
        sequence_no           => 1,
        method                => 'DISCARD',
        parameter_column_name => 'id,ipp_uid');
    END;
    /
    --regenerating replication support for the table
    BEGIN
      DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
        sname             => 'SYNAPSE',
        oname             => 'MENU_CODE',
        type              => 'TABLE',
        min_communication => TRUE);
    END;
    /
    Thanks.
    Pravin

    You are absolutely right. We have now decided not to write any conflict resolution routine at all.
    The non-MDS database is read-only until failover. After failover, the non-MDS (other master) will allow insert/update operations.
    To fail back to the MDS we will write our own procedure/function, outside of Oracle conflict resolution (both databases will be in read-only mode during synchronization).
    We will delete all deferred transactions/errors from the queue, and once the data transfer is complete, the MDS again becomes the primary database in use.
    Oracle conflict resolutions are too complicated and have lots of overhead and maintenance.
    Thanks.
    Pravin

  • Oracle Critical Patch Update Advisory - January 2013

    Hi all,
    11.2.0.3
    Oracle Critical Patch Update - January 2013
    I am confused about CPU, PSU, SPU, USP. I am reading the docs, and they seem to be the same thing described with different names.
    Which of them is the most important to apply?
    Thanks,
    PetraK

    Thanks gotti and fahd.
    I already asked Support, and they gave me docs to follow. But I am even more confused by all the added process.
    Can you validate whether the process is correct? My goal is just to satisfy our IT security auditor by applying the latest patch. The auditor does not even know the name of the patch, as long as it is the current update. So I think he is referring to the PSU, SPU or CPU.
    This is the process I got from support:
    1 Apply Oracle Database Patch Set Update (11.2.0.3.8)
        1.1 Download and install latest RDA and run and send the output.
        1.2 Download and install latest OPatch and list the current inventory.
        1.3 Check java version
        1.4 Check and Resolve PSU conflict
        1.5 Backup the current OracleHome as "root"
        1.6 Stop Listener, EM, and all DB Instances
        1.7 Apply PSU
        1.8 Resolve errors if any.
    End.
    So it seems I only need to apply the PSU, which is 11.2.0.3.8?
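    For what it's worth, a hedged outline of what steps 1.2-1.7 usually boil down to for an 11.2 PSU (paths illustrative; the patch README is the authority):

    $ opatch lsinventory      # 1.2: list the current inventory
    $ opatch prereq CheckConflictAgainstOHWithDetail -ph .   # 1.4: conflict check, run in the unzipped patch directory
    # 1.6: stop the listener, EM, and all DB instances, then:
    $ opatch apply            # 1.7
    # after restarting the database, run the PSU post-install SQL:
    SQL> @?/rdbms/admin/catbundle.sql psu apply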

  • Execution of Row level trigger in Oracle Streams.

    Hi All,
    Oracle Database version: 9.2.0.4 in a Windows NT/2000 environment.
    We managed to install and configure the Oracle Streams technologies.
    Oracle Streams seems to be working fine for replication of DML and DDL changes from the source database to the target database.
    Following is detail at source end.
    Source Sid = acc
    Source Schema = stream
    Source Table = dept
    structure of dept table.
    Name Null? Type
    DEPTNO NOT NULL NUMBER(5)
    DNAME NOT NULL VARCHAR2(10)
    LOC NOT NULL VARCHAR2(10)
    Streamadmin user = strmadmin
    Following is detail at target end.
    Target Sid = fin
    Target Schema = stream
    Target Table = dept
    structure of dept table.
    Name Null? Type
    DEPTNO NOT NULL NUMBER(5)
    DNAME NOT NULL VARCHAR2(10)
    LOC NOT NULL VARCHAR2(10)
    TRAN_DATE                    NULL DATE DEFAULT SYSDATE
    I checked on insert/update/delete of rows into dept table at source database, changes are correctly replicated to target table dept.
    I wrote a simple trigger which is as follows on dept table at target database.
    create or replace trigger dept_upd_del
    before delete or update of dname, loc on stream.dept
    for each row
    begin
      dbms_output.put_line('Inside Trigger');
      if updating then
        dbms_output.put_line('Update');
        insert into stream.dept_change values (:old.deptno, 'U', sysdate);
      end if;
      if deleting then
        dbms_output.put_line('Delete');
        insert into stream.dept_change values (:old.deptno, 'D', sysdate);
      end if;
    end;
    /
    I expect this trigger to be executed whenever DML changes propagated from the source table are applied to the dept table at the target database. However, I found that the above trigger is not executed at all.
    I was further surprised because, if I update/delete rows in the target table dept directly, the trigger executes correctly.
    Can someone please enlighten me about this?
    I believed Streams technology used INSERT/UPDATE/DELETE statements when changes are applied at the target table, but this doesn't seem to be the case.
    Thanks in Advance.
    Regards,
    Vidyanand

    The trigger at the destination will not fire because it already has at the source site. Read about that in the streams documentation on page 4-25. To change the "fire once" property of the trigger, use the procedure SET_TRIGGER_FIRING_PROPERTY in the DBMS_DDL package.
    Hope this helps.
    Claudine
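    For reference, a short sketch of that call, using the owner/trigger names from the post above:

    BEGIN
      DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
        trig_owner => 'STREAM',
        trig_name  => 'DEPT_UPD_DEL',
        fire_once  => FALSE);  -- FALSE => also fire for changes made by the apply process
    END;
    /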

  • Register Oracle Streams with OID

    I'm trying to set up an Oracle Streams environment that registers queues with OID. I have the DB registered, and global_topic_enabled=true is set. The DB is in archivelog mode. When I try to set up a queue with:
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'STREAMS_QUEUE_TABLE',
        queue_name  => 'STREAMS_QUEUE',
        queue_user  => 'STRMADMIN');
    END;
    /
    I get
    Error report:
    ORA-00600: internal error code, arguments: [kcbgtcr_5], [52583], [4], [0], [], [], [], []
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 739
    ORA-06512: at line 2
    00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
    *Cause:    This is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered an exceptional condition.
    *Action:   Report as a bug - the first argument is the internal error number
    Has anyone run into this? I searched metalink, but couldn't find anything. I'm running 10.2.0.1 on Windows 2K.
    Thanks in advance.

    I have never used Streams with OID, but check Bug 4996133 - OERI[kcbgtcr_5] updating an IOT in RAC environment.
    I would consider upgrading the DB to 10.2.0.3 - it is the first really stable release of 10gR2.
    Regards,
    Serge

  • Oracle Streams - First Load

    Hi,
    I have an Oracle Streams environment working well. I replicate and transform the data.
    My problem is:
    Initially I have a source database with 3 million records and my destination database with no records.
    I have to equalize the source and destination databases before starting to synchronize them.
    Do you know how I can replicate (and transform) this data for my first load?
    It's not only copying all the data; it's copying and transforming it.
    Is it possible to use the same transformation process for this first load?
    If I didn't have to transform the data, I would use the Data Pump tool (e.g.). But I have to transform the data for my destination database.
    Thanks

    I am in DAC and trying to run the Informatica ETL for one of the prebuilt execution plans (HR - Oracle R12). I built the project plan and ran it. I got a Failed status for all of the ETL steps (starting from the first one, 'Load Row into Run table'). I have attached the error log for Load Row into Run table below.
    I took a closer look at all the steps, and they all seem to have this common "fail parent if this task fails" error message.
    Error log for
    pmcmd startworkflow -u Administrator -p **** -s SOBI:4006 -f SILOS -lpf C:\Informatica\PowerCenter8.1.1\server\infa_shared\SrcFiles\SILOS.SIL_InsertRowInRunTable.txt SIL_InsertRowInRunTable
    Status Desc : Failed
    WorkFlowMessage :
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.1.1 SP5], build [186.0822], Windows 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Thu May 07 14:46:04 2009
    Connected to Integration Service at [SOBI:4006]
    Folder: [SILOS]
    Workflow: [SIL_InsertRowInRunTable] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SIL_InsertRowInRunTable] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SIL_InsertRowInRunTable] will be failed.]
    Start time: [Thu May 07 14:45:43 2009]
    End time: [Thu May 07 14:45:47 2009]
    Workflow log file: [C:\Informatica\PowerCenter8.1.1\server\infa_shared\WorkflowLogs\SIL_InsertRowInRunTable.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Integration Service: [Oracle_BI_DW_Base_Integration_Service]
    Disconnecting from Integration Service
    Completed at Thu May 07 14:46:04 2009
    =====================================
    ERROR OUTPUT
    =====================================
    Error Message : Unknown reason for error code 36331
    ErrorCode : 36331
    If you have any input on how to fix this issue, please let me know.
