1:N Replication Scenario Approach

Hi,
We plan to split HANA and would like to consider the parallel-run option, whereby we build the new hardware and do a migration/restore from the existing Production system onto the new box. We would then switch on SLT for both the existing and the new HANA boxes and run the models in parallel to compare results before switching off the old one.
This means we need to set up new SLT triggers for the new server while the existing triggers are still running. In the current SLT configuration, "Allow Multiple Usage" is not selected. How can we perform this in our landscape? What would be the best approach?

Hi Roy,
Take a look at this note. It describes how to set up 1:N replication once you have already created the configuration without the "Multiple Usage" flag set.
Best,
Tobias
SAP Note 1898479:
SLT replication: Redefinition of existing DB triggers

Similar Messages

  • Unit 4 Assignment 1, AD Replication Scenario

      What are the recommendations for site-link protocols and replication schedule/frequency, as well as the possibility of recommending/justifying redundant links to branch 1?

    I have already answered that here: https://social.technet.microsoft.com/Forums/windowsserver/en-US/a0507a50-8e4d-48ab-be8f-1f0bfb401f4a/ad-design-replication-scenario?forum=winserverDS#a0507a50-8e4d-48ab-be8f-1f0bfb401f4a
    Please note that a DC should not remain unreachable for longer than your tombstone lifetime period: https://technet.microsoft.com/en-us/library/cc784932%28v=ws.10%29.aspx
    For redundant links, it depends on your requirements, the impact when a site is disconnected, and your SLAs. So, I would recommend identifying your requirements first, before deciding how the configuration should look.
    This posting is provided AS IS with no warranties or guarantees , and confers no rights.
    Ahmed MALEK

  • I want to run a many-to-one replication scenario

    Hi,
    I want to run a many-to-one replication scenario. For that I have created two tables and inserted values on the source DB, and one table on the target DB into which I want consolidated data from those two tables. Below are the table details.
    Source DB:
    CREATE TABLE GGS.CLIENT_INFO (
      CLIENT_ID   varchar2(10) not null,
      CLIENT_NAME varchar2(50) not null,
      CLIENT_ADD  varchar2(50),
      CONSTRAINT CLIENTID_PK PRIMARY KEY (CLIENT_ID)
    );
    CREATE TABLE GGS.ACCOUNT_INFO (
      ACCOUNT_NO varchar2(15) not null,
      BANK_NAME  varchar2(50) not null,
      CLIENT_ID  varchar2(10) not null,
      CONSTRAINT ACCTNO_PK PRIMARY KEY (ACCOUNT_NO)
    );
    ALTER TABLE GGS.ACCOUNT_INFO ADD CONSTRAINT FK_CLIENTINFO FOREIGN KEY (CLIENT_ID) REFERENCES GGS.CLIENT_INFO (CLIENT_ID);
    Target DB:
    CREATE TABLE GGS1.CLIENT_ACCOUNT_INFO (
      CLIENT_ID   varchar2(10) not null,
      ACCOUNT_NO  varchar2(15) not null,
      CLIENT_NAME varchar2(50) not null,
      CLIENT_ADD  varchar2(50),
      BANK_NAME   varchar2(50) not null,
      CONSTRAINT CLIENT_ACCOUNT_PK PRIMARY KEY (CLIENT_ID, ACCOUNT_NO)
    );
    When I start the replicat process, it gives the below error:
    *"Oracle GoldenGate Delivery for Oracle, CLACTDEL.prm: OCI Error ORA-01400: cannot insert NULL into ("GGS1"."CLIENT_ACCOUNT_INFO"."CLIENT_NAME") (status = 1400), SQL <INSERT INTO "GGS1"."CLIENT_ACCOUNT_INFO" ("CLIENT_ID","ACCOUNT_NO") VALUES (:a0,:a1)>."*
    Note: I am inserting data from two source tables into one table on the target side using the OGG capture-replicate process.
    Please help to resolve the above error.
    Regards,
    Shital

    For one, do not use the GoldenGate database user as your source and target schema owner. Why? Consider what happens in a bidirectional setup: to prevent ping-ponging of data, operations performed by the replicat user are ignored. That is what keeps an update applied on the target from being re-applied on the original source, then captured and sent back to the target again, and so on.
    Without knowing your setup, what did you do for ADD TRANDATA and supplemental logging at the database level (only needed for the source)? What did you do for initial load and synchronization? What are your parameter files?
    The error shown so far - cannot insert null - applies everywhere in Oracle whenever you try to insert a record with a null value into a column that has a NOT NULL constraint. You can see that for yourself by trying the insert in a SQL*Plus session. You are inserting two column values, when your own table definition shows you need at least four (to account for all of the NOT NULL constraints).
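    The failing statement from the error message can be reproduced by hand; a short sketch against the target table from the post (the literal values here are made-up placeholders):

```sql
-- Mimics what the replicat is doing: only 2 of the 4 NOT NULL columns supplied.
INSERT INTO GGS1.CLIENT_ACCOUNT_INFO (CLIENT_ID, ACCOUNT_NO)
VALUES ('C001', 'A0001');
-- Raises ORA-01400: cannot insert NULL into ("GGS1"."CLIENT_ACCOUNT_INFO"."CLIENT_NAME")

-- Supplying every NOT NULL column succeeds:
INSERT INTO GGS1.CLIENT_ACCOUNT_INFO
       (CLIENT_ID, ACCOUNT_NO, CLIENT_NAME, BANK_NAME)
VALUES ('C001', 'A0001', 'Some Client', 'Some Bank');
```

    In the replication itself this usually points at the MAP statements: without an explicit COLMAP, the replicat only applies the columns it finds in the trail record, so each of the two source tables needs its own MAP entry telling GoldenGate how to fill the consolidated target row.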

  • Is schema replicated in multimaster replication scenario?

    hellos
    I am asking basic question. I have set up a multimaster replication (ids5.1sp2) on database c=fi.. this works well.
    however, in the future I know that more objectClasses and attributes will be added to the schema.
    My simple question.. is do I have to amend the 99user.ldif files on BOTH ids5.1 servers or can I rely on the schema being replicated.
    thanks.

    Schema is supposed to replicate in all replication scenarios.
    The documentation states
    "The best way to guarantee schema consistency is to make schema modifications on a single master server, even in the case of a multi-master replication environment.
    Schema replication happens automatically. If replication has been configured between a supplier and a consumer, schema replication will happen by default."
    However a few people in this forum have complained that this replication doesn't always work - so be sure to check to make sure the schema does in fact replicate.
    For more information on schema replication check out this link:
    http://docs.sun.com/source/816-5609-10/rep.htm#1063947
    It is at the bottom of the page ...

  • Help needed in a scenario approach

    Hi all,
    Scenario: there are variable records in a fixed-length file (each record type has a different data length). After extraction of the data, each record has to be sent to a single IDoc (without using BPM).
    Please help me with how to approach this scenario.
    Thanks in advance.

    Hi,
    With the given information, I suggest you proceed with generic file content conversion, i.e. it will help you pick up the entire record in each line. After file content conversion, you will see one record inside the XML tag. After this, you can split the fields if needed and map them into the required target fields. Ultimately, this depends on the scenario you use.
    By the way, for this generic approach you can refer to this blog:
    /people/sravya.talanki2/blog/2005/08/16/configuring-generic-sender-file-cc-adapter
    Hope this helps,
    Rgds,
    Moorthy

  • 401 - Unauthorized error in Value Mapping Replication scenario

    Hi,
    I'm trying to push some value mapping replication data from one of the clients (a non-Integration Server client) of the XI system to the IS. When I execute the program that calls the outbound proxy, the XI message fails with the error HTTP_RESP_STATUS_CODE_NOT_OK 401 Unauthorized.
    As described in SAP Help, I registered the inbound Java proxies and generated the outbound proxies. I configured the receiver channel with path /MessagingSystem/receive/JPR/XI. In the authentication data I tried several users, such as XISUPER, XIAPPLUSER, and XIISUSER, but I still get the same error.
    What is missing/wrong?
    Thanks in advance
    Praveen Sirupa

    Hi Praveen,
    Could you please do the following, just for verification:
    Enter the URL http://<was_server>:5<sysnr>00/MessagingSystem/receive/JPR/XI
    and when it asks for authentication, give XIAPPLUSER and its password. You should get an XML response that looks like this:
    <?xml version="1.0" encoding="UTF-8" ?>
    <scenario>
      <scenname>MSG_SCEN</scenname>
      <scentype>SERV</scentype>
      <sceninst>MSG_001</sceninst>
      <scenversion>001</scenversion>
      <component>
        <compname>SERVLET</compname>
        <compdesc>Messaging System</compdesc>
        <comphost>localhost</comphost>
        <compinst>MSG_001</compinst>
        <message>
          <messalert>OKAY</messalert>
          <messseverity>100</messseverity>
          <messarea>QR</messarea>
          <messnumber>801</messnumber>
          <messparameter>na</messparameter>
          <messtext>MessagingServlet is active.</messtext>
        </message>
      </component>
    </scenario>
    Thanks,
    Renjith

  • Replication better approach

    Hi All,
    I am doing transactional replication of one database and want to know the better approach, plus some clarification.
    For example:
    I have database A and want to replicate it to two different locations, which creates two publications for database A under Local Publications. My question: if it creates two publications, does it send the data to the distributor twice?
    One more question: I want to replicate the same DB to two different locations, in the US and Kolkata. My publisher DB is also in Kolkata, and I want one replication server in Kolkata and one in the US, so I am using a remote distributor in Kolkata. This is my approach:
    A- Publisher in Kol
    B-Distribution in Kol
    C- Subscriber in Kol
    D- Subscriber in US
    Please let me know whether I should use pull or push subscriptions, and whether there is any other approach I should use for sending data over the WAN.
    If any modification is required, please help.
    Thanks in advance.

    I am still a little confused about what you are trying to do.
    It sounds like you have a single publisher/distributor and are replicating to 2 different subscribers, and you want to make this as efficient as possible.
    Your topology is the best choice, because if data makes it to subscriber 1 but not to subscriber 2, you want it to also reach subscriber 2 when it comes back online. This store-and-forward method ensures that replication can pick up where it left off, and that both subscribers get the same dataset, although they might not get the same data at the same time - i.e. Kolkata, being closer to the publisher, will have lower latency and be more up to date than the US server.

  • Error in Value Mapping Replication scenario

    Hello all,
    We have developed a scenario to upload mass data to VMR, which works as expected in the Dev environment, but it is not working in Quality after transport.
    We have also registered the service in the Java proxy, but are still facing the issue.
    The message is failing in the Adapter Engine with status "system error" and the below error log:
    09.05.2014 10:39:36.435  Information  Trying to retry the message because of administrative action of user "XVEREKAN".
    09.05.2014 10:39:36.437  Information  Admin action: Trying to redeliver message.
    09.05.2014 10:39:36.456  Information  The message status was set to TBDL.
    09.05.2014 10:39:36.465  Information  Retrying to deliver message to the application. Retry: 4
    09.05.2014 10:39:36.465  Information  The message was successfully retrieved from the receive queue.
    09.05.2014 10:39:36.472  Information  The message status was set to DLNG.
    09.05.2014 10:39:36.498  Information  Java Proxy Runtime (JPR) accepted the message.
    09.05.2014 10:39:36.896  Error        JPR could not process the message. Reason: No such method ValueMappingReplication in proxy bean localejbs/sap.com/com.sap.xi.services/ValueMappingApplication.
    09.05.2014 10:39:36.897  Error        Delivering the message to the application using connection JPR failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Error processing inbound message. Exception: No such method ValueMappingReplication in proxy bean localejbs/sap.com/com.sap.xi.services/ValueMappingApplication.
    09.05.2014 10:39:36.905  Error        The message status was set to NDLV.
    Can some one please help on this ?
    Thanks.

    Hi Amit,
    I tried that also, but got the same error:
    Delivering the message to the application using connection JPR failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Error processing inbound message. Exception: No such method ValueMappingReplication in proxy bean localejbs/sap.com/com.sap.xi.services/ValueMappingApplication.
    I de-registered the interface using the link:
    http://server:port/ProxyServer/unregister?ns=http://sap.com/xi/XI/System&interface=ValueMappingReplication&bean=localejbs/sap.com/com.sap.xi.services/ValueMappingApplication&method=ValueMappingReplication
    and registered it again:
    http://server:port/ProxyServer/register?ns=http://sap.com/xi/XI/System&interface=ValueMappingReplication&bean=localejbs/sap.com/com.sap.xi.services/ValueMappingApplication&method=ValueMappingReplication
    Result:
    Interface http://sap.com/xi/XI/System#ValueMappingReplication registered with value localejbs/sap.com/com.sap.xi.services/ValueMappingApplication:ValueMappingReplication
    My final payload after mapping:
    <?xml version="1.0" encoding="UTF-8" ?>
    <ns1:ValueMappingReplication xmlns:ns1="http://sap.com/xi/XI/System">
      <Item>
        <Operation>Insert</Operation>
        <GroupID>31322371194085293873227252571012</GroupID>
        <Context>http://sap.com/xi/XI</Context>
        <Identifier scheme="Source1" agency="Excel">786123test</Identifier>
      </Item>
    </ns1:ValueMappingReplication>
    Thanks,
    Anant

  • Replication scenario (please answer)

    We have 96 tables in one Oracle user and we want to replicate them bidirectionally. 4-5 tables are used for logging and we don't want to replicate those.
    Which scenario do you suggest:
    1- Use schema-based capture and apply, and prevent the logging tables from replicating with rules and tags? If yes, please tell me how I can restrict some tables in the schema from replicating.
    2- Use table-based capture and apply, written out for every necessary table (a HUGE amount of work for about 90 tables).
    I prefer the first one, but I don't know how to exclude some tables from the schema so they are not captured and/or applied.

    Do a table-level capture and schema-level propagation and apply. That way you filter out the tables at the capture level itself, and you avoid filtering a second time at the propagation and apply levels.
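    If you do want to keep the schema-level capture of option 1, Streams supports excluding tables via negative rules: add schema-level rules, then add table-level rules with inclusion_rule => FALSE for each logging table. A sketch (the queue, capture, schema, and table names are placeholders, not from the original post):

```sql
BEGIN
  -- Capture everything in the schema...
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name  => 'APPUSER',
    streams_type => 'capture',
    streams_name => 'app_capture',
    queue_name   => 'strmadmin.app_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);

  -- ...except the logging table: inclusion_rule => FALSE puts this rule
  -- in the negative rule set, which is evaluated first and discards matches.
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'APPUSER.APP_LOG',
    streams_type   => 'capture',
    streams_name   => 'app_capture',
    queue_name     => 'strmadmin.app_queue',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => FALSE);
END;
/
```

    Repeat the negative table rule for each of the 4-5 logging tables; that is far less work than writing table rules for all 90+ replicated tables.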

  • ORACLE DATABASE REPLICATION -- ANY APPROACH @ THANKS FOR YOUR HELP

    HI ALL,
    I am new to Oracle as a developer. Please help me, if you can find some time, with the following requirement.
    We are replicating the target database (9.2) from the source database (10g).
    My idea was to create DB links to the source DB in the target, and use cursors to populate the data required for the destination tables (here I have to update the destination table when records are updated in the source DB, and insert records into the destination DB when no record is found there).
    For this requirement, I am planning to use Oracle MERGE. As my destination database is 9.2, and MERGE under 9.2 is not as powerful as my requirement needs, e.g.:
    1) I do not always need the WHEN MATCHED clause.
    2) I also need to MERGE into multiple destination tables.
    I came across the following code in one of your postings. If this really works with the 10g version, I think I am on the correct path (I can write stored programs in the source database, which is 10g).
    If not this, any other approach?
    drop table external_tbl;
    drop table dim1;
    drop table dim2;
    drop table fact_tbl;
    create table external_tbl ( dim1_cd char(1), dim2_cd char(1), qty number );
    insert into external_tbl values ( 'A', 'B', 1 );
    insert into external_tbl values ( 'B', 'C', 2 );
    insert into external_tbl values ( 'A', 'D', 3 );
    create table dim1 ( dim1_cd char(1) primary key, dim1_description varchar2(20) );
    insert into dim1 values ( 'A', 'At the Earth''s Core');
    insert into dim1 values ( 'B', 'Barsoom');
    create table dim2 ( dim2_cd char(1) primary key, dim2_description varchar2(20) );
    insert into dim2 values ( 'B', 'Buck Rogers');
    insert into dim2 values ( 'C', 'Carter, John');
    create table fact_tbl ( dim1_cd char(1) references dim1, dim2_cd char(1) references dim2, qty number );
    MERGE ALL
    USING(SELECT dim1_cd, dim2_cd, current_qty
    FROM external_tbl) b
    INTO fact_tbl a
    ON ( b.dim1_cd = a.dim1_cd
    AND b.dim2_cd = a.dim2_cd)
    WHEN NOT MATCHED THEN
    INSERT
    (dim1_cd, dim2_cd, current_qty)
    VALUES
    (b.dim1_cd, b.dim2_cd, b.qty)
    WHEN MATCHED THEN
    UPDATE current_qty
    SET qty = b.qty
    INTO dim1 d1
    ON (b.dim1_cd = d1.dim1_cd)
    WHEN NOT MATCHED THEN -- insert a dummy record
    INSERT INTO dim1 ( dim1_cd, dim1_description )
    VALUES (b.dim1_cd, 'unknown' )
    INTO dim2 d2
    ON (b.dim1_cd = d2.dim2_cd)
    WHEN NOT MATCHED THEN -- insert a dummy record
    INSERT INTO dim2 ( dim2_cd, dim2_description )
    VALUES (b.dim2_cd, 'unknown' )
    and the results to be:
    select * from dim2;
    D DIM2_DESCRIPTION
    B Buck Rogers
    C Carter, John
    D unknown -- added during the MERGE to maintain ref integrity
    select * from fact_tbl;
    D D QTY
    A B 1
    B C 2
    Thanks for reading my question.

    Hi ,
    Thanks for the reply....
    How can I convert a 10g MERGE into a 9i MERGE? (I did not understand "update set column_name = column_name".)
    Also, my requirement is to update only the changed columns from the source table, not the entire row. I tried the following (but am not sure whether it is correct):
    Compare source to destination and update if not equal:
    MERGE INTO red_cust tgt
    USING (
      SELECT -- red_cust_id_seq.nextval     as CUST_ID,
             red_cust_nos(i)            as CUST_NO,
             red_cust_names(i)          as CUST_NAME,
             red_cust_inactive_dates(i) as INACTIVE_DATE,
             red_cust_insert_users(i)   as INSERT_USER,
             red_cust_insert_dates(i)   as INSERT_DATE,
             red_cust_update_users(i)   as UPDATE_USER,
             red_cust_update_dates(i)   as UPDATE_DATE
      FROM   master_table
    ) src
    ON ( src.cust_no = tgt.cust_no )
    WHEN NOT MATCHED THEN
      INSERT (tgt.CUST_ID, tgt.CUST_NO, tgt.CUST_NAME, tgt.TAX_EXEMPT_INDIC,
              tgt.INACTIVE_DATE, tgt.INSERT_USER, tgt.INSERT_DATE)
      VALUES (src.cust_id, src.cust_no, src.CUST_NAME, src.TAX_EXEMPT_INDIC,
              src.INACTIVE_DATE, src.INSERT_USER, src.INSERT_DATE)
    WHEN MATCHED THEN
      UPDATE SET tgt.cust_name = src.cust_name
      WHERE tgt.cust_name != src.cust_name;
    /* Is this the correct approach? What I mean is: compare the source customer name and the target customer name, and when they are not equal, assign the source name to the destination. */
    I repeated above for the other columns
    tgt.TAX_EXEMPT_INDIC,
    tgt.INACTIVE_DATE,
    tgt.INSERT_USER,
    tgt.INSERT_DATE
    After trying this, I got the compatibility error again, as I am merging from 10g into 9i (ERROR: the optional WHERE clause in MERGE is not available in the destination, i.e. 9i).
    If my approach above is correct, is there any alternative to the WHERE clause in MERGE on 9i?
    I am stuck here. Please guide me through the correct path.
    I appreciate your help
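    One 9i-compatible workaround (a sketch only, not tested against 9.2; table and column names taken from the MERGE above): drop the WHERE clause and update the column unconditionally. Re-writing an unchanged value is redundant work, but it produces the same end state, and 9i accepts the statement:

```sql
MERGE INTO red_cust tgt
USING (
  SELECT cust_no, cust_name
  FROM   master_table
) src
ON (src.cust_no = tgt.cust_no)
WHEN MATCHED THEN
  -- No WHERE clause: 9i only supports the unconditional UPDATE form,
  -- and also requires both WHEN MATCHED and WHEN NOT MATCHED branches.
  UPDATE SET tgt.cust_name = src.cust_name
WHEN NOT MATCHED THEN
  INSERT (tgt.cust_no, tgt.cust_name)
  VALUES (src.cust_no, src.cust_name);
```

    If the redundant updates are a concern (for example, triggers or audit columns fire on every write), the alternative is to pre-filter inside the USING subquery so that only new or changed rows reach the MERGE at all.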

  • Replication scenario

    Hello,
    I would like to use the following replication model :
    5 servers, each in write mode, so all have to be configured as masters.
    <---> : multi-replication
    -----> : one way replication
    M1 <--> M2
    M3 <--> M4
    M1 ----> M3
    M1 ----> M4
    M2 ----> M3
    M2 ----> M4
    M3 ----> M5
    M4 ----> M5
    Is it a possible configuration ?

    The current version of the Directory Server (5.2P4) does not support more than 4 masters. If you omit M5 (perhaps you can make it a consumer), then all the other combinations are possible. This is, by the way, called a fully meshed configuration. You actually don't need a full mesh in order to provide failover, but it is nevertheless entirely legal.
    Regards,
    -Wajih

  • Pricing Help - Need Complex Scenario Approach

    Hello All,
    Need your advice on the following scenario using seeded function as much as possible
    We have three bases on which we can price our customers - in effect, 3 price lists: an OEM price list, a resale price list, and a customer special-rate price list - and an item can be on all three lists.
    Based on the contract with customer value can be picked from either one - like
    Customer 1 + Ship From 1 + OEM1 (Item Attribute) = OEM Price - 5%
    Customer 1 + Ship From 2 + OEM2 (Item Attribute) = Resale Price + 10%
    So say 3 parts in SO, each have different price basis (price list) based on customer contract.
    The challenge is that users DO NOT want to select the SO line price list. They would not know which line price list to select for the item; they want the price to auto-populate, due to the high number of lines.
    1. Is there a way, that during SO creation, can the price list be auto selected based on modifier/certain conditions ?
    2. Any other way, price can be arrived at, if not using 3 price list?
    3. Can we have a 4th price list, with custom pricing attribute, which sources price from either of the 3 price lists?
    Any pointers would be of great help.
    Thanks & regards,
    Bhaskar

    Thanks for the options...
    The issue is that the business wants to store the customer contract details somewhere, and is trying to use pricing to do that. In effect there are many possible prices, and based on the qualifying conditions, the right amount should be selected.
    In Oracle, the price list first needs to be selected on the line, and then modifiers are applied. Here they want all qualifiers to run first, and then the appropriate price list to be selected.
    Can we select a price list at run time based on some logic?
    Thanks & regards
    Bhaskar

  • Unique identifier in multi-master replication scenario.

    Hi,
    I am trying to work out whether or not to use nsuniqueid as the unique identifier to pull data out of a multi-master replication Sun ONE Directory Server environment. On another forum I noticed that nsuniqueid can change if the tree structure changes. Is there any unique identifier I can use that will guarantee that in every master server this field has the same value for the same object? I know this is not the case for entryid values, as they are assigned different values on different master servers.
    Thank you in advance,
    Johan

    Well, depending on your DIT, the usual thing to do is to use some collection of naming attributes, typically the RDN. If all else fails each DN will certainly identify each entry. You should avoid using nsuniqueid for anything in your application logic, though it is certainly unique and identical across the topology for each entry. What nsuniqueid allows you to do that other attributes don't is to differentiate two entries that are otherwise identical, but which have been added at different times. So, for instance, if you added an entry, deleted it, then readded it, the nsuniqueids would be different.
    Just make sure you don't do anything silly like putting nsuniqueid in an LDIF template. Always let the server create the nsuniqueid. And when you create a replica initialization LDIF, don't change the nsuniqueids in it.

  • Replication scenario in 5.2

    I'm currently upgrading a series of legacy Netscape 4.x ldap servers at our site. The old architecture is split between a WAN (public IP space) and LAN (10.x.x.x IP space) networks with the Master being on the WAN and various read-only replicas on the LAN. This works in 4.x due to the supported "pull" type replication agreement where the client initiates contact with the Master. Since the Master is in public IP space and a portion of the client read-only replicas are behind a firewall in a LAN, the Master cannot initiate a tcp connection with them, only vice-versa.
    Unless I am mistaken, from reading the 5.2 documentation I see that this form of replication is no longer supported, with Master initiated replication being the only option. Is this indeed the case?
    thanks,
    -jm

    Yes, master initiated replication is the only option with the 5.x directory.

  • Replication issue in ABAP to ABAP scenario

    Hello,
    I have an ABAP-to-ABAP replication scenario where I am replicating custom and standard tables such as MDMA, but I found the below issue.
    The replication's current action is stuck at "Replication (Initial Load)": the initial load completes, but data is not replicated afterwards.
    Also, the table keeps switching between "Failed" and "In Process" status. I checked that the system has a sufficient number of jobs.
    I found the below error message after checking the error log.
    I have restarted the replication many times and even re-created the configuration, but no luck.
    Please enlighten me on how to fix this issue.
    Regards

    Hi Tobi,
    I removed all the records from the target table and replicated again, but got the same result.
    The initial load completes, but data is not replicated afterwards, and the table keeps switching between "Failed" and "In Process" status.
    Regards
