Data Guard selective replication

Hi,
Using Data Guard, is it possible to replicate only a particular set of tables/schemas and not the whole database?
Regards!

Hello again;
I might not use a Standby for this. But if you want to, besides a logical Standby there's also Active Data Guard.
Depending upon the size of your reporting database, materialized views might be an option.
If you want to use the Primary as a source you could clone it using RMAN DUPLICATE and then use a database link for updates.
If cost isn't too much of an issue I'd probably use Active Data Guard as a reader database.
It's a tough question without knowing what extra data you need besides the Primary data.
If you use materialized views you can cut down the size by not refreshing columns not used by reporting. You can also leave out whole tables not used by reporting, and you can add extras without Data Guard conflicts. If the Primary isn't too large this works very well.
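For illustration, here is a minimal sketch of the materialized view route, assuming a database link back to the Primary and made-up object names (a fast refresh would additionally need a materialized view log on the master table, so REFRESH FORCE is used here):

CREATE DATABASE LINK prod_link
  CONNECT TO report_reader IDENTIFIED BY "secret"
  USING 'PRODDB';

-- Only the columns the reporting side actually needs
CREATE MATERIALIZED VIEW orders_rpt
  BUILD IMMEDIATE
  REFRESH FORCE ON DEMAND
  AS SELECT order_id, customer_id, order_date, status
     FROM   orders@prod_link;

-- Refresh manually or from a scheduler job
EXEC DBMS_MVIEW.REFRESH('ORDERS_RPT');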
Best Regards
mseberg

Similar Messages

  • Selective Replication

    Hi,
    Is there a way to replicate a selected datasource? One that has not been replicated before?
    Thanks in advance.
    Rgds
    Anand

    Hi
    Please refer to the help link below:
    http://help.sap.com/saphelp_nw70/helpdata/EN/43/5fc0680a876b7de10000000a422035/frameset.htm
    Regards
    Gaurav

  • SQL Server 2008 R2 - Adding Replication Components to an Existing Installation

    Hi All
    I have a SQL Server 2008 R2 instance that I need to install Replication components on. 
    I've been going through the appropriate wizard and selecting to upgrade an existing instance. The wizard recognizes the SQL Server that we have installed, so that one is selected. On the next screen, I'd expect to see the feature selector with everything we currently have installed selected and greyed out; however, nothing is selected (except for BOL).
    So in order to select Replication Components, I'd have to also select Database Engine, suggesting that a new SQL Instance would be installed. Not wanting to do this, I'm having to abort.
    I've tried a number of different ISOs now (one I found lying around on the server), but I haven't had any luck. My only option now seems to be to recreate the instance entirely, but that being a last resort, I was wondering if anyone may have had some experience
    with this problem?
    I am running the installer with the highest level of Windows and SQL access, if that helps.
    Thank you

    Just to confirm: when you try to install, you select the option "Add features to an existing instance of SQL Server", below that you see the instance name, and you *don't* see Replication.
    Now, when you click Next on the Feature Selection page, you see Replication greyed out, is that correct?
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com
    To confirm, the steps I'm taking are as follows:
    Once all the Setup Support files have been installed, and the Setup Support rules have been checked (with no errors)
    - At the Installation Type screen I select Add features to an existing instance of SQL Server 2008 R2. My server appears in the drop down as the only instance on this server. I see the list of installed instances too, and can confirm that Replication does
    not appear in the Features column. I click next.
    - At the Feature selection screen, the only items that are selected are Books Online, SQL Client Connectivity and Microsoft Sync Framework. No other features are selected and greyed out, where I would expect Database Engine Services, Analysis Services and
    Reporting Services (among others) to be selected. Selecting SQL Server Replication at this stage results in Database Engine Services also being ticked, suggesting the wizard will add a new instance on this machine.
    The only peculiarity that I've noticed is with the version number reported on the Installation Type screen, which shows 10.51.2500. That's not the same as the version that is reported if I inspect the object browser or use the @@VERSION command; in fact, I believe this version relates to MDS instead? This being said, our other production server reports the same version here, and I was able to install replication components there with no problem.
    I hope this helps, I don't know if I can attach some images to my post to make it more clear? It's quite a difficult and unusual problem to explain!

  • SQL Server 2014 Replication: Peer-to-peer replication

    SQL Server 2014 Replication wizard: for a peer-to-peer publication, under Agent Security Settings only the Log Reader Agent security settings are available.
    After I selected the replication type and articles, only the Log Reader Agent status was available; the Snapshot Agent status displays the text:
      "A Snapshot Agent job has not been created for this publication."
    Another issue with replication:
      "Peer-to-peer publications only support a '@sync_type' parameter value of 'replication support only', 'initialize with backup' or 'initialize from lsn'.
    The subscription could not be found."
    How can these issues be resolved in SQL Server 2014?
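    For context, the second error message refers to how the subscription gets created; a minimal, hedged sketch of sp_addsubscription with one of the allowed @sync_type values (server, publication and database names are made up):
    EXEC sp_addsubscription
        @publication = N'P2P_Pub',
        @subscriber = N'PEER2',
        @destination_db = N'SalesDB',
        @subscription_type = N'push',
        @sync_type = N'replication support only';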

    Please check these similar posts:
    http://blogs.msdn.com/b/sqljourney/archive/2013/10/01/an-interesting-issue-with-peer-to-peer-replication.aspx
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/15701595-f5b1-4a10-b4aa-c56a94d64785/peertopeer-publications-only-support-a-synctype-parameter-value-of-replication-support-only?forum=sqlreplication
    Raju Rasagounder Sr MSSQL DBA

  • I need help on Config Master Master Replication

    Hi :
    I failed while configuring Master-Master Replication on Directory Server 5.2sp4. Can anyone give me some advice on configuring MMR?
    The error messages I got are as follows:
    [25/Jul/2006:17:11:17 +0800] - import userRoot: Processed 489736 entries -- average rate 191.8/sec, recent rate 104.7/sec, hit ratio 97%
    [25/Jul/2006:17:11:38 +0800] - import userRoot: Processed 492001 entries -- average rate 191.1/sec, recent rate 105.8/sec, hit ratio 97%
    [25/Jul/2006:17:12:00 +0800] - import userRoot: Processed 494072 entries -- average rate 190.3/sec, recent rate 100.8/sec, hit ratio 97%
    [25/Jul/2006:17:12:22 +0800] - import userRoot: Processed 496657 entries -- average rate 189.7/sec, recent rate 105.8/sec, hit ratio 97%
    [25/Jul/2006:17:12:43 +0800] - import userRoot: Processed 499113 entries -- average rate 189.1/sec, recent rate 114.6/sec, hit ratio 97%
    [25/Jul/2006:17:13:05 +0800] - import userRoot: Processed 501254 entries -- average rate 188.4/sec, recent rate 106.9/sec, hit ratio 97%
    [25/Jul/2006:17:13:08 +0800] - import userRoot: Workers finished; cleaning up...
    [25/Jul/2006:17:13:25 +0800] - import userRoot: Workers cleaned up.
    [25/Jul/2006:17:13:25 +0800] - import userRoot: Indexing complete. Post-processing...
    [25/Jul/2006:17:13:26 +0800] - import userRoot: Flushing caches...
    [25/Jul/2006:17:13:26 +0800] - import userRoot: Closing files...
    [25/Jul/2006:17:13:35 +0800] - import userRoot: Import complete. Processed 501537 entries in 2691 seconds. (186.38 entries/sec)
    [25/Jul/2006:17:13:35 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - multimaster_be_state_change: replica o=tfn.net.tw is coming online; enabling replication
    [25/Jul/2006:17:13:35 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - replica_reload_ruv: Warning: new data for replica o=tfn.net.tw does not match the data in the changelog.
    Recreating the changelog file. This could affect replication with replica's consumers in which case the consumers should be reinitialized.
    [25/Jul/2006:17:13:36 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - This supplier for replica o=tfn.net.tw will immediately start accepting client updates
    [25/Jul/2006:17:13:36 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - Replica (o=tfn.net.tw) has been initialized by total protocol as full replica
    [25/Jul/2006:17:13:36 +0800] - WARNING<10276> - Incremental Protocol - conn=-1 op=-1 msgId=-1 - Replication inconsistency Consumer Replica "ldap1.tfn.net.tw:389/o=tfn.net.tw" has a different data version. It may have not been initialized yet.
    The procedure I followed is as follows:
    Two LDAP servers: LDAP1 and LDAP2
    1. Install LDAP1 and LDAP2
    2. Migrate Data from old LDAP Server to New LDAP1
    then
    On LDAP1:
    3. Enable Change Log and some parameter
    4. Enable Replication, select "Master"
    5. Create Replication Agreement to LDAP2
    and then
    On LDAP2:
    6. Enable Change Log and some parameter
    7. Enable Replication, select "Master"
    8. Create Replication Agreement to LDAP1
    Now go back to LDAP1:
    9. Select the replication agreement and initialize LDAP2 now.
    10. Wait for it to finish; LDAP1 will receive an initialization completed message in the console.
    But on LDAP2:
    11. Check the error log. I got errors.
    [25/Jul/2006:17:13:36 +0800] - WARNING<10276> - Incremental Protocol - conn=-1 op=-1 msgId=-1 - Replication inconsistency Consumer Replica "ldap1.tfn.net.tw:389/o=tfn.net.tw" has a different data version. It may have not been initialized yet.
    Can anyone point out which steps I got wrong?
    PS: I also can't find the button mentioned in step 3 of the procedure in the admin manual ==>
    "To Begin Accepting Updates Through the Console"
    Follow these steps to explicitly allow update operations after the initialization of a multi-master replica:
    3. Click the button to the right of the message to start accepting update
    operations immediately.
    Victor

    1. Answer to your last question:
    Data server -- Configuration -- Data -- your suffix -- Replication.
    In the right panel, click on the SIR agreement -- click on "Action" in the lower right corner -- choose "Send Updates now ...".
    2. Answer to your replication question:
    Usually I would move step 2 down to just before step 9. That means setting up replication first, then feeding your master server (ldap1) from your LDIF file, then initializing ldap2 with data from ldap1, either through the Console or from the command line (if using the command line, you have to use db2ldif -r to dump data from ldap1 and ldif2db to initialize ldap2).
    If at any time you see "different version of data" in your log, try to initialize again.

  • "x" in Site Hierarchy but dont have error

    Hello guys!
    In my SCCM 2012 R2, "Monitoring/Site Hierarchy" shows an error in communication between the primary and secondary sites, but when I open the status error messages there isn't any error message. See:
    Regards, Julio Araujo

    Do you have any errors in your Site and Component Status under System Status in the left panel?
    Or, as Torsten said, you can click on Database Replication, right-click on one of the broken links and select Replication Link Analyzer.
    Nick Pilon - Blog: System Center Dudes

  • Selected column insert, delete and update at the replication end

    Hello,
    In GoldenGate, is there a way to get a timestamp column at the target table updated only for changes to selected columns of the source table? For example:
    At source - Table Temp_Source , columns - Emp_Id (PK), Emp_Name, Department, Address
    At target - Table Temp_Target, columns - Emp_Id (PK), Emp_Name, Department, Last_Update
    The Last_Update column records the timestamp of the last update made at the source table, as replicated at the target end.
    Now I just want the changes made in the Emp_Id, Emp_Name and Department columns to be replicated to the target table, and Last_Update to be updated only for changes made in these columns. Changes made in the Address column should not affect Last_Update at the target end.
    The Extract or Replicat script for this should work for insert, update and delete scenarios for the specified columns. Using COLS creates a problem on insert, as it abends the Replicat process.
    Is there any way I can achieve this functionality?

    At target end I have written the following code in Replication process -
    GetUpdates
    IgnoreInserts
    IgnoreDeletes
    Map <source table>, target <target table>, COLMAP (USEDEFAULTS, LAST_UPDATE = @IF((EMP_ID <> BEFORE.EMP_ID OR EMP_NAME <> BEFORE.EMP_NAME OR DEPT <> BEFORE.DEPT), @getenv("GGHEADER", "COMMITTIMESTAMP"),REFRESH_TIMESTAMP));
    But this code handles only primary key changes (i.e. EMP_ID); I want Last_Update to be updated for changes in EMP_NAME and DEPT as well, but not for an Address column change at the source end.

  • How to enable multi-statement replication like select into in SAP Replication server

    Hi All,
    Currently I am working on replication of a non-logged operation using SAP Replication Server. My source and target databases are both Sybase ASE 15.7. I created a normal stored procedure containing a non-logged operation like:
    create procedure proc1
    as
    select * into tab2 from tab1
    I have created a database replication definition using the following command:
    create database replication definition def1
    with primary at dewdfgwp01694.src
    replicate DDL
    replicate functions
    replicate transactions
    replicate tables
    and created subscription as well
    After marking the procedure using sp_setrepproc proc1,'function', I started the repagent (sp_start_rep_agent src)
    But after marking the procedure I am unable to execute it, and I get the error:
    SELECT INTO command not allowed within multi statement transactions
    Sybase error code=226
    Can anyone please guide me in this situation?
    FYI: I have executed all three of these commands in the primary database:
    sp_dboption src,'select into/bulkcopy/pllsort',true;
    sp_dboption src,'ddl in tran',true;
    sp_dboption src,'full logging for all',true

    I am getting the error in the primary database (Sybase ASE console) as well as in the Replication Server.
    This error occurs after marking the procedure for replication in the primary database.
    And after getting this error I am unable to replicate any other table or procedure (it seems the DSI thread is going down in the Replication Server).
    The error in the Replication Server is given below:
    T. 2014/09/20 16:58:03. (27): Last command(s) to 'server_name.trg':
    T. 2014/09/20 16:58:03. (27): 'begin transaction  [0a] exec proc1  '
    E. 2014/09/20 16:58:03. ERROR #1028 DSI EXEC(103(1) 'server_name.trg) - dsiqmint.c(4710)
    Message from server: Message: 226, State 1, Severity 16 -- 'SELECT INTO command not allowed within multi-statement transaction.
    H. 2014/09/20 16:58:03. THREAD FATAL ERROR #5049 DSI EXEC(103(1) server_name.trg) - dsiqmint.c(4723)
    The DSI thread for database 'server_name.trg' is being shutdown. DSI received data server error #226 which is mapped to STOP_REPLICATION. See logged data server errors for more information. The data server error was caused by output command #0 mapped from input command #0 of the failed transaction.
    I. 2014/09/20 16:58:03. The DSI thread for database 'server_name.trg' is shutdown.
    I. 2014/09/20 18:07:48. Replication Agent for server_name.src connected in passthru mode.
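    A hedged workaround to consider (not taken from this thread, and it does change the original intent of replicating a non-logged operation): create the target table once up front and have the procedure use a fully logged INSERT ... SELECT, so the replicated call no longer executes SELECT INTO inside the DSI's multi-statement transaction. The column definitions below are purely illustrative:
    -- created once, with the same structure as tab1 (illustrative columns)
    create table tab2 (col1 int, col2 varchar(40))
    go
    -- re-create the procedure without SELECT INTO
    create procedure proc1
    as
        insert into tab2 select * from tab1
    go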

  • Selective Product Category Replication

    Hello everybody,
    I am in SRM5 classic mode. I would like to replicate only selected material groups (product categories) to SRM from the R/3 backend system. Is there any way this can be done?
    Thanks
    Venkat

    Hello Venkat,
    Yes, it is possible to replicate selected product categories from R/3.
    This can be achieved as follows: when defining the middleware settings for the customizing objects using transaction R3AC3, set the filter (3rd tab) based on the material group and activate the object adapter.
    Hope this information will be helpful to you.
    Regards,
    Mani

  • Selective Data Replication

    Hi,
    Scenario:
    A primary DB receives data at about 200,000 transactions a minute. These transactions involve all of CRUD, and a purge algorithm runs every so often to purge data. There is a secondary DB, or archive DB, that we plan to have, which needs all the data that enters the primary DB replicated to it, EXCEPT any deletes performed by the purge algorithm.
    On researching a little, the choices that are in front of us are
    1. Data Guard
    2. Advanced Replication
    3. Oracle Streams
    4. Oracle Warehouse Builder
    We use Oracle 10g Enterprise Edition; the application is pure Java.
    Could someone point us in the right direction: which one of these, or is there any other technology that we could use to perform the task at hand?
    Thank You

    First, just for clarity, do you really need synchronous replication? Or asynchronous replication? If the latter, what sort of latency is acceptable in your "close to real-time" scenario? Truly synchronous replication puts substantially more load on the source system than does asynchronous replication.
    Assuming we're talking about asynchronous replication, that will put some load on the source database. How much load is really hard to tell because it depends on things like whether the server is I/O bound or CPU bound, whether there are spare CPU cycles, etc. Generally, Streams is designed to minimize the load on the primary, but it does impose some load.
    With asynchronous distributed Hotlog CDC
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/cdc.htm#sthref946
    you could offload some of the work that CDC would normally be doing on the source database to a third machine. I would expect that this would leave the source database doing roughly the same amount of work that it would be doing in a basic Streams environment, but I've not benchmarked that assumption to be certain.
    2) In general, yes, I'd suggest DataGuard if you need High Availability. You could potentially roll your own failover using Streams, but that's generally going to be a pain to write/ test/ maintain/ implement in a disaster.
    3) There is a thread on askTom on paging through results
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:127412348064
    In general, paging in SQL works well when you expect that the users are going to be interested only in the first couple pages of the result (i.e. no one is interested in page 100 of their Google query). If you expect someone to eventually go through the entire result, you're probably better paging on the client.
    Justin
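    For illustration, if Streams ends up being the choice here, one way to keep the purge deletes out of the archive database is to have the purge session set a Streams tag and to build the capture rules so that tagged changes are skipped. This is only a rough sketch with made-up object and process names (app.orders, capture_arch, strmadmin.arch_queue, PRIMDB):
    -- In the purge job, before running the deletes:
    BEGIN
      DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
      DELETE FROM app.orders WHERE order_date < SYSDATE - 90;  -- purge
      COMMIT;
      DBMS_STREAMS.SET_TAG(tag => NULL);
    END;
    /
    -- When creating the capture rules (tagged LCRs are excluded):
    BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name        => 'app',
        streams_type       => 'capture',
        streams_name       => 'capture_arch',
        queue_name         => 'strmadmin.arch_queue',
        include_dml        => TRUE,
        include_ddl        => FALSE,
        include_tagged_lcr => FALSE,
        source_database    => 'PRIMDB');
    END;
    /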

  • In-memory replication works selectively

    Hi,
    I am trying to configure in-memory replication on a cluster with 4 WebLogic instances running on 2 physical machines (Machine A and Machine B). However, replicas are created for sessions located on Machine A but not for sessions on Machine B.
    I have not specified any replication groups as of now. The weblogic.xml session param is set to "replicated".
    Any ideas what I am missing?
    Thanks for your help in advance.

    Run "Replication Manager" and click on "errors" for that node. Other than that you can try to view DEF$_ERROR and make something from the cryptic error listed there.
    And it looks like session_id is session number along with serial number.

  • Replication selective columns using oracle streams

    Hi
    I am basically replacing an mview with Streams.
    At the source I have about 45 columns in a table, but at the target I just need to replicate 7 columns. I am wondering if there is a way to configure Streams such that a list of columns is passed.
    I know there is a rule-based transformation available, DBMS_STREAMS_ADM.DELETE_COLUMN, but I'm wondering if there is an easy way to do it without writing a transformation for each column.
    Thanks
    N

    Yes, the major reasons to move from mviews are:
    - Adding or deleting a column requires a complete refresh of the mviews at the target, which runs for hours.
    - Any structural change to the table at the source requires the entire application to be down, as the source table is very active. We cannot afford downtime to add or delete columns at the source, so we use OTR (online table redefinition) to do so, and OTR cannot be performed on tables that have mviews based on them.
    Oracle Support recommended using Oracle Streams, but it also has its limitations, with very few workarounds available.
    I know that for Streams too there is a limitation with OTR, but we can re-instantiate the table after the OTR is performed on the source, with add_column or delete_column transformations, and get the replication going from where it stopped (avoiding completely recreating the table at the target).
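    For reference, the declarative DELETE_COLUMN transformation mentioned above can be attached in a short PL/SQL loop rather than writing one call per column by hand; a rough sketch with made-up rule, table and column names (on 11g and later, DBMS_STREAMS_ADM.KEEP_COLUMNS can do this in a single call, if that release is available to you):
    DECLARE
      TYPE t_cols IS TABLE OF VARCHAR2(30);
      -- the columns that should NOT reach the target (illustrative names)
      l_drop t_cols := t_cols('ADDRESS', 'PHONE', 'FAX');
    BEGIN
      FOR i IN 1 .. l_drop.COUNT LOOP
        DBMS_STREAMS_ADM.DELETE_COLUMN(
          rule_name   => 'strmadmin.emp_apply_rule',   -- existing rule on the table
          table_name  => 'scott.emp_big',
          column_name => l_drop(i),
          value_type  => '*',
          step_number => 0,
          operation   => 'ADD');
      END LOOP;
    END;
    /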

  • Error while creating MV replication group object

    Hi,
    I am getting an error while creating a replication group object. I tried to create it using OEM and SQL*Plus.
    OEM error
    This is the error while creating the M.V. rep. group object:
    There is a table or view named SCOTT.EMP.
    It must be dropped before a materialized view can be created.
    In SQLPLUS
    SQL> CONNECT MVIEWADMIN/MVIEWADMIN@SWEET
    Connected.
    SQL>
    SQL> BEGIN
    2 DBMS_REPCAT.CREATE_MVIEW_REPOBJECT (
    3 gname => 'SCOTT',
    4 sname => 'KARTHIK',
    5 oname => 'emp_mv',
    6 type => 'SNAPSHOT',
    7 min_communication => TRUE);
    8 END;
    9 /
    BEGIN
    ERROR at line 1:
    ORA-23306: schema KARTHIK does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 2840
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 773
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 5570
    ORA-06512: at "SYS.DBMS_REPCAT_SNA", line 82
    ORA-06512: at "SYS.DBMS_REPCAT", line 1332
    ORA-06512: at line 2
    Please note, I have already created the KARTHIK schema.

    Karthik,
    I think I know what may have happened.
    As I can see you are trying to create support for an updateable materialized view.
    You have to make sure the name of the schema that owns the materialized view is the same as the schema owner of the master table (at master site).
    From the code you have shown, I bet the owner of table EMP is SCOTT.
    On the other hand, you want to create materialized view EMP_MV under schema KARTHIK, referring to table SCOTT.EMP at the master site.
    According to the documentation, the schema name used in DBMS_REPCAT.CREATE_MVIEW_REPOBJECT must be the same as the schema that owns the master table.
    Please check the documentation at the link below
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14227/rarrcatpac.htm#i109228
    I tried to reproduce your example in my environment, and I got exactly the same error, which confirms my assumption: the reason for the error is that you tried to create the materialized view in a schema with a different name than the one where the master table exists.
    I'll skip some of the steps that I used to create the replication environment.
    I have two databases, DB1.world and DB2.world
    On DB2.world I will generate replication support for table EMP which belongs to user SCOTT
    SQL> conn scott/*****@DB2.world
    Connected.
    SQL>create materialized view log on EMP with primary key;
    Materialized view log created.
    SQL>
    SQL>conn repadmin/*****@DB2.world
    Connected.
    SQL>BEGIN
      2       DBMS_REPCAT.CREATE_MASTER_REPGROUP(
      3         gname => 'GROUPA',
      4         qualifier => '',
      5         group_comment => '');
      6*   END;
    PL/SQL procedure successfully completed.
    SQL>BEGIN
      2       DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
      3         gname => 'GROUPA',
      4         type => 'TABLE',
      5         oname => 'EMP',
      6         sname => 'SCOTT',
      7         copy_rows => TRUE,
      8         use_existing_object => TRUE);
      9*   END;
    10  /
    PL/SQL procedure successfully completed.
    SQL> BEGIN
      2       DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
      3         sname => 'SCOTT',
      4         oname => 'EMP',
      5         type => 'TABLE',
      6         min_communication => TRUE);
      7    END;
      8  /
    PL/SQL procedure successfully completed.
    SQL>execute DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'GROUPA');
    PL/SQL procedure successfully completed.
    SQL> select status from dba_repgroup;
    STATUS                                                                         
    NORMAL
    Now let's create the updateable materialized view at DB1. Before that, I want to let you know that I created a sample user named MYUSER in DB1. MVIEWADMIN is the materialized view administrator.
    SQL>conn mviewadmin/****@DB1.world
    Connected.
    SQL>   BEGIN
      2       DBMS_REFRESH.MAKE(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => '',
      5         next_date => SYSDATE,
      6         interval => '/*1:Hr*/ sysdate + 1/24',
      7         push_deferred_rpc => TRUE,
      8         refresh_after_errors => TRUE,
      9         parallelism => 1);
    10    END;
    11  /
    PL/SQL procedure successfully completed.
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_SNAPSHOT_REPGROUP(
      3         gname => 'GROUPA',
      4         master => 'DB2.world',
      5         propagation_mode => 'ASYNCHRONOUS');
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    SQL>conn myuser/*****@DB1.world
    Connected.
    SQL>CREATE MATERIALIZED VIEW MYUSER.EMP_MV
      2    REFRESH FAST
      3    FOR UPDATE
      4    AS SELECT EMPNO, ENAME, JOB, MGR, SAL, COMM, DEPTNO, HIREDATE
      5*      FROM   [email protected];
    Materialized view created.
    SQL>conn mviewadmin/******@DB1.world
    Connected.
    SQL> BEGIN
      2       DBMS_REFRESH.ADD(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => 'MYUSER.EMP_MV',
      5         lax => TRUE);
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    And now let's run CREATE_MVIEW_REPOBJECT.
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_MVIEW_REPOBJECT(
      3         gname => 'GROUPA',
      4         sname => 'MYUSER',
      5         oname => 'EMP_MV',
      6         type => 'SNAPSHOT',
      7         min_communication => TRUE);
      8    END;
      9  /
      BEGIN
    ERROR at line 1:
    ORA-23306: schema MYUSER does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 2840
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 773
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 5570
    ORA-06512: at "SYS.DBMS_REPCAT_SNA", line 82
    ORA-06512: at "SYS.DBMS_REPCAT", line 1332
    ORA-06512: at line 3
    I reproduced exactly the same error message.
    So the problem is clearly in the schema name that owns the materialized view.
    Now let's see what would happen if I create the MV under schema SCOTT, which has the same name as the schema on DB2.world where the master table exists.
    SQL>conn scott/****@DB1.world
    Connected.
    SQL>CREATE MATERIALIZED VIEW SCOTT.EMP_MV
      2    REFRESH FAST
      3    FOR UPDATE
      4    AS SELECT EMPNO, ENAME, JOB, MGR, SAL, COMM, DEPTNO, HIREDATE
      5*      FROM   [email protected];
    Materialized view created.
    SQL>conn mviewadmin/******@DB1.world
    Connected.
    SQL> BEGIN
      2       DBMS_REFRESH.ADD(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => 'SCOTT.EMP_MV',
      5         lax => TRUE);
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    And now let's run CREATE_MVIEW_REPOBJECT.
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_MVIEW_REPOBJECT(
      3         gname => 'GROUPA',
      4         sname => 'SCOTT',
      5         oname => 'EMP_MV',
      6         type => 'SNAPSHOT',
      7         min_communication => TRUE);
      8    END;
    PL/SQL procedure successfully completed.
    As you can see, everything works fine when the name of the schema owner of the MV at DB1.world is the same as the schema owner of the master table at DB2.world.
    -- Mihajlo

  • Snapshot replication slow during purge of master table

    I have basic snapshot/materialized view replication of a big table (around 6 million rows).
    The problem that I run into is that when I run a purge of the master table at the master site (delete DML), the snapshot refresh time becomes slower. After the purge, the snapshot refresh time goes back to normal.
    I had thought that the snapshot refresh does a simple select, so any exclusive lock on the table should not hinder performance.
    Has anyone seen this problem before and if so what has been the workaround?
    The master site and the snapshot site both are 8.1.7.4 and are both unix tru64.
    I don't know if this has any relevance, but the master database uses the rule-based optimizer while the snapshot site uses the cost-based optimizer.
    thanks in advance

    Hello Alan,
    Your problem is to know, inside a table trigger, whether the current DML was caused by replication or by a normal local DML.
    One way (which I use) to solve this in Oracle 8.1.7 is the following:
    You can use in the trigger code the functions DBMS_SNAPSHOT.I_AM_A_REFRESH(),
    DBMS_REPUTIL.REPLICATION_IS_ON() and DBMS_REPUTIL.FROM_REMOTE()
    (For details see oracle documentation library)
    For example: a trigger (before insert of each row) at the master side
    on a table which is an updatable snapshot:
    DECLARE
      site_x          VARCHAR2(128) := DBMS_REPUTIL.GLOBAL_NAME;
      timestamp_x     DATE;
      value_time_diff NUMBER;
    BEGIN
      IF (NOT (DBMS_SNAPSHOT.I_AM_A_REFRESH) AND DBMS_REPUTIL.REPLICATION_IS_ON) THEN
        IF NOT DBMS_REPUTIL.FROM_REMOTE THEN
          IF inserting THEN
            :new.info_text := 'Hello table; this entry was caused by local DML';
          END IF;
        END IF;
      END IF;
    END;
    By the way: I've got here at work nearly the same configuration, now in production since a year.
    Kind regards
    Steffen Rvckel

  • Message filtering in propagation process (stream replication environment)

    Hi!
    We have configured Streams replication in a star topology:
    ORCL2 <=> ORCL1 <=> ORCL3
    Where the ORCL1 is "headquarters" and there is no message flow between ORCL2 and ORCL3.
    For some reason we want to filter messages in propagation processes, e.g. DML captured on ORCL1 should be replicated only to ORCL2 or only to ORCL3. There is one propagation process for each "satellite" database.
    To solve this problem I have written function:
    FUNCTION Replicate_Lcr (
    p_lcr IN SYS.lcr$_row_record)
    RETURN VARCHAR2 IS
    which decides whether to pass the message (return 'Y') or not (return 'N').
    But there is a problem: the rule is evaluated and the function is executed (there is an insert into the 'stream_log_lcr' table), but the value of the expression seems to be 'FALSE' and the message (LCR) is not being sent to ORCL2 (or to ORCL3).
    When I remove the function 'Replicate_Lcr' from the propagation rule condition, every message captured by the capture process on ORCL1 reaches the destination database (ORCL2 or ORCL3).
    The second observation is that if I run the same code on ORCL2 or ORCL3, everything seems to be OK: there is an insert into the 'stream_log_lcr' table and DML captured on ORCL2 (or ORCL3) appears in ORCL1 ("headquarters").
    I suppose this could be a problem with the different database version on the "headquarters" (ORCL1), or a configuration issue.
    I will appreciate every suggestion.
    Databases:
    ORCL1: 64-bit Windows, ver. 10.2.0.4.0, Windows 2008 server
    ORCL2: 32-bit Windows, ver. 10.2.0.1.0, Windows XP
    ORCL3: 32-bit Windows, ver. 10.2.0.1.0, Windows XP
    SQL code run on ORCL1:
    CREATE TABLE stream_log_lcr (
      data DATE NOT NULL,
      msg  SYS.lcr$_row_record NOT NULL
    );
    -- simplified, always return 'Y'
    CREATE OR REPLACE FUNCTION Replicate_Lcr (
      p_lcr IN SYS.lcr$_row_record)
    RETURN VARCHAR2 IS
      PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
      IF p_lcr IS NOT NULL THEN
        INSERT INTO stream_log_lcr
        VALUES (SYSDATE, p_lcr);
        COMMIT;
      END IF;
      RETURN 'Y';
    END;
    /
    -- create propagation process with above function in rule condition
    BEGIN
      DBMS_STREAMS_ADM.add_schema_propagation_rules(
        schema_name            => 'data_schema',
        streams_name           => 'primary_to_secondary2',
        source_queue_name      => 'strmadmin.capture_primary',
        destination_queue_name => 'strmadmin.from_primary@ORCL2',
        include_dml            => TRUE,
        include_ddl            => TRUE,
        source_database        => 'ORCL1',
        and_condition          => ' strmadmin.Replicate_Lcr(:dml) = ''Y'' ',
        inclusion_rule         => TRUE,
        queue_to_queue         => TRUE);
    END;
    /
    -- check if function 'Replicate_Lcr' was evoked:
    SELECT * FROM stream_log_lcr ORDER BY data;

    Hi porzer,
    In the propagation process (source) there are also 0 errors. But in apply (dest), under statistics, the server status is displayed as IDLE, and the coordinator status is APPLYING. In capture (source) there is no error. In apply (dest) there is no error. What else can I do, please?
    This is what I have done up to now:
    I have 2 databases with the same table. On one database I changed the mode to ARCHIVELOG mode; the other database is in NOARCHIVELOG mode only. On the first database I ran the Streams setup, and I ran 2 .sql files and one .bat file manually, like this:
    SQL>@e:\oracle\product\10.2.0\client_2\sysman\report\OTEST_ADMIN_NON_OMS_SETUP.sql
    SQL>host e:\oracle\product\10.2.0\client_2\sysman\report\OTEST_ADMIN_NON_OMS_exportimport.bat
    SQL>@e:\oracle\product\10.2.0\client_2\sysman\report\OTEST_ADMIN_NON_OMS_startup.sql
    Anything else I can do? I don't have a Metalink registration. I hope I am not boring you.
    Thanks in advance.
