Question on Replication?

Hi Friends,
I know that we can replicate the Material Master from ECC to SRM using the object Material. Is it possible in the standard system to replicate the Material Master along with its text from ECC to SRM by adding the corresponding table name in CRMFILTAB and R3AC1, or does it require an enhancement?
Please let me know.
Regards,
Raahul

Thanks, Virender.
Please suggest a BAdI that can bring the Basic Data text from the ECC Material Master over to SRM. I know we could enhance a report, but we would prefer to go with a BAdI enhancement.
Regards,
Raahul

Similar Messages

  • Question on replication in Oracle 10G Release 2

    Good day,
    I have a few questions on setting up replication that fit my scenario described below. Thank you in advance for reading and answering my post.
    Scenario
    I need to replicate 100-200 tables from the first (OLTP) server to the second (DSS) server, which is read-only. The servers are physically located in different countries. Both run Oracle 10g Release 2. The required refresh frequency is every 1-3 hours.
    Questions
    1. Is it optimal to use materialized views with fast/force refreshes to implement this scenario? If not, what are better options?
    2. How do network interruptions and latency affect stability of work of replication with materialized views?
    3. How big is the additional performance overhead on the OLTP (source) server due to setting up replication with materialized views?

    1) I guess it depends on how you define "optimal". It's certainly a reasonable option. You might also look at Streams or even logical standby databases. There are various trade-offs involved, so it really depends on your environment.
    2) What does "stability of work of replication" mean, exactly? Obviously, if the network fails, the replication job(s) will generate errors. Depending on how you set things up, the replication process will be retried after increasing intervals until it succeeds.
    3) Maintaining materialized view logs on the OLTP system could certainly impact performance-- the logs have to be maintained synchronously with the OLTP transactions. That may or may not noticeably impact OLTP transaction performance-- it's probably roughly equivalent to putting a trigger on each of the 100-200 tables. Something like Streams is designed to put less load on the source system because changes are captured asynchronously.
    Justin

  • Three questions about replication/security

    Hello,
    We are currently planning to build software for our sales persons using C#. Each sales person has a laptop and should be able to sync client information when he/she has access to the internet/intranet. A sales person can update client information, and the local database will be synced back to the master server when the user is connected to the internet/intranet. My option was to go with Oracle Lite (as the client DB) and Oracle Enterprise (as the server DB). But after reading the posts in this forum, I believe Oracle XE can do the trick. Am I right?
    Second question is about the security of the replication. Sales persons can connect using the internet to sync the information back and forth. Is there a built-in mechanism to secure the connection between the two DBs (Oracle XE and EE)?
    Third question is about the recovery options. I read Mark's post about the features of Oracle XE. I understood that PIT recovery and archivelog mode are supported. But the post also says that Tablespace PIT is not supported. Can someone tell me the difference between PITR and TSPITR? If PITR is supported, can I restore the database to a specific date and time (i.e. Dec 2, 2005 2:00PM)?
    Thanks a lot

    Comments inline
    Hello,
    We are currently planning to build software for our sales persons using C#. Each sales person has a laptop and should be able to sync client information when he/she has access to the internet/intranet. A sales person can update client information, and the local database will be synced back to the master server when the user is connected to the internet/intranet. My option was to go with Oracle Lite (as the client DB) and Oracle Enterprise (as the server DB). But after reading the posts in this forum, I believe Oracle XE can do the trick. Am I right?
    Yes - except that Oracle Lite comes with the synchronization built in, and it's tested to handle all the weird corner cases you have to deal with. XE will give you basic replication; however, you will have to build the connect, replicate (refresh materialized views), disconnect logic yourself (and test it). Personally I would spend the $100 on the Oracle Lite option.
    Second question is about the security of the replication. Sales persons can connect using the internet to sync the information back and forth. Is there a built-in mechanism to secure the connection between the two DBs (Oracle XE and EE)?
    It depends on what you mean by secure. When you connect XE to Enterprise Edition, it will use a database link to refresh the materialized views (replicated tables). Userids/passwords across the database link will be sent in an encrypted form. The data will not. I'm guessing you could use Oracle's Advanced Security option to secure the database links from XE to EE, but I'm not 100% sure. Tom may be able to give us a clue on this one. Also, note that DB links by default use the TCP/IP transport, so that's a hole you would have to kick in the firewall if the EE database was behind it (as it should be), although replication can use HTTP as a transport mechanism.
    (You can see all the issues you start to get into - the $100 per Oracle Lite deployment is looking real good to me right about now.)
    Third question is about the recovery options. I read Mark's post about the features of Oracle XE. I understood that PIT recovery and archivelog mode are supported. But the post also says that Tablespace PIT is not supported. Can someone tell me the difference between PITR and TSPITR? If PITR is supported, can I restore the database to a specific date and time (i.e. Dec 2, 2005 2:00PM)?
    Yes - you can roll forward the entire database to a given point in time using RMAN (which will be in production). You cannot, however, roll forward just a subset of tablespaces (i.e. a subset of the data) in XE. Tablespace PITR is an EE feature (and not for the faint-hearted).
    Thanks a lot

  • Question on replication/high availability designs

    We're currently trying to work out a design for a high-availability system using Oracle 9i Release 2. Having gone through some of the Oracle whitepapers, it appears that the ideal architecture involves setting up two RAC sites using Dataguard to synchronize the data. However, due to time and financial constraints, we are only allowed to have two servers for hosting the databases, which are geographically separate from each other as protection against natural disasters. Our app servers will use JDBC pools to connect to the databases.
    Our goal is to have both databases be the mirror image of each other at any given time, and the database must be working 24/7. We do have a primary and a secondary distinction between the two, so if the primary fails, we would like the secondary database to take over the tasks as needed.
    The ability to query existing data is mission critical. The ability to write/update the database is less important, however we do need the secondary to be able to process data input/updates when primary is down for a prolonged period of time, and have the ability to synchronize back with the primary site when it is back up again.
    My question now is: which replication technology should we try to implement? I've looked into both Oracle Advanced Replication and Dataguard; each seems to have its own advantages and drawbacks:
    Replication - can easily switch between the two databases using a multimaster implementation; however, data recovery/synchronization may be difficult in case of failure, and data may be lost (depending on implementation). There have been a few posts in this forum suggesting that replication should not really be considered an option for high availability - why is that?
    Dataguard - zero data loss in failover/switchover; however, manual intervention is required to initiate failover/switchover. Once the primary site fails over to the standby, the standby becomes the primary until the DBA manually goes back in and switches the roles. In Oracle 10g Release 2, it seems that automatic failover is achieved through the use of an extra observer piece. There does not seem to be any way to do this in Oracle 9i Release 2.
    Being new to the implementation of high-availability systems, I am at somewhat of a loss at this point. Both implementations seem to be possible candidates, but each also requires sacrifices. Could anyone shed some light on this, maybe point out my misconceptions about Advanced Replication and Dataguard, and/or suggest a better architecture/technology to use? Any input is greatly appreciated; thanks in advance.
    Sincerely,
    Peter Tung

    Hi,
    It sounds as if you're talking about the DB_TXN_NOSYNC flag, rather than DB_NOSYNC.
    You mention that in general, you lose uncommitted transactions on system failure. I think what you mean is that you may lose some committed transactions on system failure. This is correct.
    It is also correct that if you use replication you can arrange to have clients have a copy of all committed transactions, so that if the master fails (and enough clients do not fail, of course) then the clients still have the transaction data, even when using DB_TXN_NOSYNC.
    This is a very common usage scenario for Berkeley DB replication/HA, used to achieve high throughput. You will want to pay attention to the configured ack policy, the group size setting, and the setting of the 2SITE_STRICT option (if the group size == 2).
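    For illustration, a minimal sketch of those knobs using the replication manager C API (4.7-era names; the values are assumptions, not recommendations):

        #include <db.h>

        /* Sketch: the durability/replication knobs mentioned above.
         * Error handling omitted for brevity. */
        void configure_group_durability(DB_ENV *dbenv)
        {
            /* DB_TXN_NOSYNC: commits do not wait for a synchronous log
             * flush; durability comes from the replicated copies. */
            dbenv->set_flags(dbenv, DB_TXN_NOSYNC, 1);

            /* Ack policy: wait for acks from all clients before treating
             * a commit as durable (QUORUM and others also exist). */
            dbenv->repmgr_set_ack_policy(dbenv, DB_REPMGR_ACKS_ALL);

            /* Group size: one master plus one client. */
            dbenv->rep_set_nsites(dbenv, 2);

            /* In a two-site group, keep the stricter election behavior so
             * the lone surviving client cannot elect itself master. */
            dbenv->rep_set_config(dbenv, DB_REPMGR_CONF_2SITE_STRICT, 1);
        }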

  • Questions on replication and h/w load balancer

    Why does the h/w load balancer have to support passive cookies and inspect them to dispatch the request to the primary server first? If we have in-memory replication and the h/w load balancer just dispatches the http request from the client to any of the weblogic servers in the cluster, wouldn't this work?
    Is it to pin the session to the creator server, to minimize the chance of replication misses due to n/w issues, member server slow speed, buffer overwrite, etc.?
    -Shiraz

    Yes, and previous to 6.1 (?) if the request showed up at the wrong server it would fail.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
              "Shiraz Zaidi" <[email protected]> wrote in message
              news:3c15aa10$[email protected]..
              >
              > Why does h/w load balancer have to support passive cookies and inspect
              them to
              > dispatch the request to the primary server first? If we have in-memory
              replication
              > and if h/w loadbalancer just dispatches the http request from the client
              to any
              > of the weblogic servers in the cluster wouldnt this work?
              >
              > Is it to pin the session to the creator server to minimize the chance of
              replication
              > misses due to n/w issues, member server slow speed, buffer overwrite etc.
              >
              > -Shiraz
              

  • Some question about replication procedure?

    1. In the function __rep_process_message in the file D:\db-4.5.20\rep\rep_record, there are several types of messages to be handled.
    REP_LOG/REP_LOG_MORE: these indicate the start of transferring log records from client to master.
    REP_PAGE/REP_PAGE_MORE: what are these messages used for? Do they indicate that, in some special case, the master will transfer the database file directly to the client to accomplish replication synchronization?
    2. When does the master start to send log records to the client?
    I modified the source code of the ex_rep_base example provided by BDB as follows:
          ret = dbenv->txn_begin(dbenv, NULL, &txn, 0);
          // step 1
          Sleep(3000);
          printf("---------------txn_begin Waken--------------------\n");
          if (ret != 0) {
               dbenv->err(dbenv, ret, "transaction begin fail");
               goto err;
          }
          if ((ret = dbp->put(dbp, txn, &key, &data, 0)) != 0) {
               dbp->err(dbp, ret, "DB->put");
               goto err;
          }
          // step 2
          Sleep(3000);
          printf("---------------Waken--------------------\n");
          ret = txn->commit(txn, 0);
          // step 3
          if (ret != 0) {
               dbenv->err(dbenv, ret, "transaction commit fail");
               goto err;
          }
    I printed the messages being processed at the client.
    At step 1: no message is processed.
    At step 2: two REP_LOG messages are processed.
    At step 3: one REP_LOG message is processed.
    Does this mean the master does not need to wait for the put transaction to commit before sending log records to clients?
    And when exactly does the master send log records to clients?
    3. A test case was tried as follows:
         Step 1: Start the master and the client.
         Step 2: Add records to the master.
         Step 3: Kill the client and delete the client's database file.
         Step 4: Restart the client, and query the client's database to check whether it has caught up with the master. (There are no write requests to the master at step 4.)
         Step 5: Send write requests to the master.
    Result: At step 4, it takes a long period of time for the client to catch up with the master. But after step 5, the synchronization procedure is very fast: it takes a very short period of time for the client to catch up with the master.
    What is the reason?
    Thanks a lot!

    1. LOG and LOG_MORE messages convey log records, usually from the master to the client. (If using client-to-client synchronization, log records may be copied from one client to another, in certain circumstances.)
    PAGE and PAGE_MORE messages convey the contents of database pages during "internal init" (which is mentioned in db-4.5.20/docs/ref/rep/mastersync.html). Again, this is usually from master to client, but could be from client to client in some circumstances.
    2. The master generally sends log records to clients as soon as the operations that generated them occur (db->put, txn->commit, env->txn_checkpoint, etc). (See also the discussion of Bulk Transfer, however: db-4.5.20/docs/ref/rep/bulk.html)
    3. The client realizes it has "caught up" with the master (STARTUPDONE event) when it can process the first "live" log record generated by the master. Thus it relies on the master doing some new write requests after the client has started synchronization.
    Alan Bram
    Oracle
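    For illustration, a minimal sketch of how a client application can observe that "caught up" moment through the event-notification callback (assuming the 4.7-era C API and its DB_EVENT_REP_STARTUPDONE event):

        #include <stdio.h>
        #include <db.h>

        /* Sketch: report when this client has caught up with the master.
         * DB_EVENT_REP_STARTUPDONE fires once the client processes the
         * first "live" log record generated by the master. */
        static void event_callback(DB_ENV *dbenv, u_int32_t event, void *info)
        {
            switch (event) {
            case DB_EVENT_REP_STARTUPDONE:
                printf("client has caught up with the master\n");
                break;
            case DB_EVENT_REP_NEWMASTER:
                printf("a new master has been elected\n");
                break;
            default:
                break; /* ignore other events */
            }
        }

        /* Registered once, right after creating the environment handle:
         *     dbenv->set_event_notify(dbenv, event_callback);
         */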

  • A few questions on BDB replication

    I have a few questions on replication and will appreciate any help that I can get:
    1. In standby mode, are there any issues if the existing DB files are explicitly not opened? In this scenario the standby DB host went down and the BDB application was brought up; the environment was opened with the recovery option, but the DB files were not opened.
    2. What happens if a standby application goes down while synchronization is in progress, i.e. the STARTUPDONE event has not been received - will the subsequent database recovery complete (after the application has been restarted)? Are there any APIs to check whether the DB is in a consistent, usable state?
    3. How are user-created log entries (created by log_put) handled at the standby DB? If we use the base replication API(s), is there any way to trap and extract the log entry before/after the rep_process_message call?
    Thanks for your help.

    Hello,
    Here are some answers to your questions.
    1. BDB does not care whether or not the application has any database files opened. When the standby applies transactions to a database, it opens up anything it needs internally.
    2. There are two types of synchronization. The first is when a replica was down and is now simply a bit behind the master when it comes back up. In that situation, it is simply catching up to the master. If it were to crash during that time, it would again catch up to the master when it rebooted. The second is internal initialization, where we need to copy over the databases and logs and run a recovery on them (internally, of course). If the replica were to crash during this operation, the partial databases/logs that exist on reboot will be cleaned up automatically, and the initialization will restart when communication is re-established.
    3. When a replica receives a log record (any log record, user-created or BDB-created), it simply writes it into the log. Only when the replica receives a txn_commit does the replica call the recovery functions to apply the log records on the replica. That would be the time when the function for an app-specific log record would be called.
    There is no support for apps to crack open the replication messages. If you're using the Base API, though, you are in control of the communication already. If the master needs to send something to the clients, the application could have a different type of message that is app-specific and doesn't involve BDB or rep_process_message at all. Is that what you're trying to accomplish?
    Sue LoVerso
    Oracle
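    As an illustration of that last suggestion, a minimal sketch of a Base API dispatcher; the APP_MSG_* tags, the framing, and handle_user_message() are hypothetical, and rep_process_message uses its 4.7-era signature:

        #include <db.h>

        #define APP_MSG_BDB  1  /* payload is BDB's control/rec DBTs */
        #define APP_MSG_USER 2  /* application-private payload */

        /* Hypothetical handler for application-private messages. */
        static void handle_user_message(DBT *rec)
        {
            /* app-specific logic here */
        }

        /* Called by the application's own transport for each incoming
         * message: only BDB replication traffic is handed to
         * rep_process_message; app messages bypass BDB entirely. */
        static void dispatch(DB_ENV *dbenv, int type,
            DBT *control, DBT *rec, int eid)
        {
            DB_LSN ret_lsn;

            switch (type) {
            case APP_MSG_BDB:
                (void)dbenv->rep_process_message(dbenv, control, rec,
                    eid, &ret_lsn);
                break;
            case APP_MSG_USER:
                handle_user_message(rec);
                break;
            }
        }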

  • How can I Force/relaunch BP replication from CRM to ECC

    Hi everybody,
    We had some replication problems with sold-to-party BPs when creating them in CRM (transaction BP). The CRM002 ID was not populated in most cases. A job was scheduled to relaunch the BDocs automatically every hour. It seemed to be OK for a while.
    However, we still have some sold-to-party BPs that are not replicated at all in ECC (master data and sales data). The usual procedure we set up to trigger the replication is ticking a checkbox in transaction BP, but when I untick the checkbox and tick it again, the replication doesn't happen.
    Is there a way to manually force the replication of a sold-to-party BP from CRM to ECC?
    Second question on the replication subject: we also have partial replication in ECC. For example, the tax classification only partially replicates. It's as if the replication process is crashing on the way, and the result is no CRM002 ID back in the BP master data in CRM. Do you have any idea where this is coming from?
    Is there a transaction like SMW01 (CRM) in ECC in order to trace replication problems?
    Many Thanks.
    Laurent

    Hi,
    Firstly, please check whether you have any filters for the adapter object CUSTOMER_MAIN in transaction R3AC1 (filters can prevent data replication).
    Next, check whether the CRM inbound (SMQR) or outbound (SMQS) queues are deregistered.
    You can use transaction CRMM_BUPA_SEND to manually replicate a BP to the ECC system.
    No, there is no monitor in ECC like the BDoc monitor (SMW01) in CRM.
    Hope this helps. Reward if helpful!
    Thanks,
    Sudipta.

  • Replication Error with Vendor Master Records

    Hello,
    We are trying to replicate vendor master records from R/3 to SRM via BBPGETVD. When we go to SLG1, there is an error stating: Business Partner XXXX: Invalid Value 0003 for field Authorization Group.
    Currently, the AP dept. maintains the field Authorization Group in the vendor master record 'Control' screen with a fixed value of 0003 (LFA1-BEGRU). When the replication occurs, SRM complains that this field value does not exist in an SRM customizing table or as a fixed value.
    Do we need to implement the BBP_TRANSDATA_PREP BAdI to pass this value as a fixed value in order to replicate vendors?
    -regards
    Shaz Khan

    I'm planning to implement BBP_TRANSDATA_PREP. Wondering if either one of you can help. I've also opened another thread with my specific questions:
    Replication Error with Vendor Master Records

  • OWM and replication / Spatial

    Hi all,
    - I am looking for info or whitepapers that describe any restrictions regarding replication and OWM, especially when using spatial data (sdo_geometry).
    - Which kinds of functions/methods does replication not support when using OWM?
    Thanks for any hint.
    Haitham

    Hi Haitham,
    You should look at the Workspace Manager user guide. There is an entire section devoted to replication and what is and is not supported by OWM. Also, there are no additional restrictions imposed by OWM on any spatial data during replication.
    If you have any further questions regarding replication after reading the doc, I would be glad to answer them.
    Regards,
    Ben

  • Master and tcp/ip newbie questions?

    Hi folks,
    Just a few little questions about the replication and TCP/IP capabilities of Berkeley DB.
    Is it possible for the master database not to be on the same computer as the one running the app?
    Is it possible to have no database on the computer that runs the app, only on "servers"?
    Thanks in advance for any answers.
    PS: Sorry for my bad English.
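    (For the first question, a minimal sketch with the replication manager C API, 4.7-era names, shows the idea: each process opens its own environment and declares where the other sites live, so the master can run on a different machine. The hosts and ports are made up:)

        #include <db.h>

        /* Sketch: join a replication group as a client whose master
         * runs on another machine. Each site still keeps a local
         * replicated environment. Error handling omitted. */
        void join_group(DB_ENV *dbenv)
        {
            /* The address this (app) machine listens on. */
            dbenv->repmgr_set_local_site(dbenv, "app-host.example.com", 6000, 0);

            /* The remote machine where the master runs. */
            dbenv->repmgr_add_remote_site(dbenv, "db-host.example.com", 6000,
                NULL, 0);

            /* Start as a client with three message-processing threads. */
            dbenv->repmgr_start(dbenv, 3, DB_REP_CLIENT);
        }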

    For example, Socket.setSoTimeout() sets the SO_TIMEOUT option, and I want to know what TCP parameter this option corresponds to in the underlying TCP connection.
    This doesn't correspond to anything in the connection; it is an attribute of the API.
    The same question arises for other options from the SocketOptions class.
    setTcpNoDelay() controls the Nagle algorithm. set{Send,Receive}BufferSize() controls the local socket buffers.
    Most of this is quite adequately described in the javadoc, actually.

  • Can you use streams replication to replicate an advanced queue?

    We need to be able to support failover of an Advanced Queue between a primary database instance and one or more alternate instances. To ensure consistency across the multiple database instances, every enqueued and dequeued message must be replicated in the event of a failure on the primary node. TAF is used to automatically fail the app over to an alternate database instance. Without getting into the details, Data Guard/Standby/RAC are NOT options.
    Questions:
    Is replication of an Advanced Queue supported via Streams Replication?
    Are there guidelines/recommendations on how this should be done/set up?

    No. AQ is not supported by Oracle Streams. User-defined and Sys.AnyData are not supported types.
    You can create an AQ propagation process from the source to the backup site. But you will need to dequeue at both sites simultaneously.
    Or you can create a shadow table (

  • Memory issue on replica client

    I am using BDB 4.7.25 on FreeBSD 7.0 with the C++ API.
    I have applied the patch from the thread "Re: Question on replication error like 'DB_ENV->rep_process_message: DB_NOTF..'" to fix the log_archive issue. I have also applied the patch suggested in the reply to that message.
    On the master node, I am doing a lot of write operations with periodic checkpointing.
    Case 1:
    =======
    Later, when the master node archives (deletes) log files after checkpointing, after a few minutes of transactions I get the following error on the client node.
    Log sequence error: page LSN 0 0; previous LSN 25 1048356
    Recovery function for LSN 26 4263441 failed on forward pass
    Client initialization failed. Need to manually restore client
    PANIC: Invalid argument
    DB_ENV->rep_process_message: DB_RUNRECOVERY: Fatal error, run database recovery
    message thread failed: DB_RUNRECOVERY: Fatal error, run database recovery
    PANIC: fatal region error detected; run recovery
    DB_ENV->rep_process_message: DB_RUNRECOVERY: Fatal error, run database recovery
    message thread failed: DB_RUNRECOVERY: Fatal error, run database recovery
    PANIC: DB_RUNRECOVERY: Fatal error, run database recovery
    PANIC: DB_RUNRECOVERY: Fatal error, run database recovery
    Please advise: what could possibly be wrong, and how can I fix it?
    Case 2:
    =======
    In similar runs, when I don't do log archiving on the master node to delete log files, the memory footprint of the client process periodically increases a lot and then decreases back to normal. I suspect this happens around checkpointing, when the master sends a burst of messages to the client to replicate. But gradually the footprint grows too high, starts using swap space, and there is not enough memory to allocate. Is this fluctuation of the memory footprint on the client node expected behaviour?
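    (For reference, a minimal sketch of the checkpoint-plus-archive cycle described above, assuming the 4.7 C API. Removing log files that a replica still needs can force that replica into a full internal init, so production setups usually retain extra log files:)

        #include <stddef.h>
        #include <db.h>

        /* Sketch: periodic checkpoint followed by removal of log files
         * no longer needed for local recovery. Error handling omitted. */
        void checkpoint_and_archive(DB_ENV *dbenv)
        {
            /* Write a checkpoint unconditionally. */
            (void)dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE);

            /* Delete log files the local environment no longer needs. */
            (void)dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE);
        }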
    The following output from db_stat-4.7 -MA might help.
    These are the statistics from the replica (client) node machine.
    Mpool REGINFO information:
    Mpool Region type
    3 Region ID
    __db.003 Region name
    0x28710000 Original region address
    0x28710000 Region address
    0x287100c0 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 4094 allocations, 12894388 failures, 4007 frees, 1 longest
    Allocations by power-of-two sizes:
    1KB 34
    2KB 1
    4KB 0
    8KB 12898447
    16KB 0
    32KB 0
    64KB 0
    128KB 0
    256KB 0
    512KB 0
    1024KB 0
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    MPOOL structure:
    9 MPOOL region mutex 2 / 26M 0% 11489 / 674238720
    401 / 2533580 Maximum checkpoint LSN
    37 Hash table entries
    11 Hash table last-checked
    496749207 Hash table LRU count
    497385622 Put counter
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Please help me resolve both cases.
    Regards,
    Sury

    Thanks very much! It's long. Is there a right answer for the problem?

  • Group Accounts in MDG-F

    What is the correct way to handle group accounts in MDG-F in MDG 6.0? We have an operating CoA and a group CoA. If I try to create an account in the operating CoA, the MDG change request processing requires a value for Group Account. I created an account in the group CoA and tried referencing it in the operating CoA, but it doesn't work, as in the data model this group account value is referenced by the FS Item entity type.
    Should the group CoA be managed as FS Items? Is there any other way than using FS Item? It seems that if I use FS Item, it will require the FSI, FSIT & FSIH entity types to be added to the edition type as well as the change request.
    Is it possible to replicate FS Items to the group CoA in ECC?
    Any experience in handling such a situation?

    Hello Zorawar,
    The process of creating a group account in MDG is different. Since the group account is mapped in the GL, you have to use the FSI entity type for creating the group account. Then you can use the same group account and map it in the operating GL - the ACCOUNT entity type.
    Now, when you create a group account as the FSI entity type, there are various fields which are really not required. Inform your business about it. You have to add FSI/FSIT/FSIH in your entity type/edition, as that is a must, but don't fill in values. You may get an error, but just proceed further and ignore it.
    Regarding your question on replication - FSI won't get replicated as a group account. So what we did is create both an FSI and an ACCOUNT for the same group account, in one change request. The ACCOUNT will get replicated; the FSI will not.
    Try this approach and let me know, as it worked in our case and there was no other option. You have to create the same account twice, using the FSI and ACCOUNT entity types.
    Kiran

  • Replicating the datasource

    Hello SAP Gurus,
    I have a question on the replication of datasources.
    If I create a new datasource in the R/3 development system, replicate the datasource in BW, create an InfoSource, and collect the BW objects (InfoSource, communication structure, transfer structure, cube) and the R/3 object (the datasource),
    and then import the R/3 datasource into the next environment (QA) without replicating the datasource in BW:
    I want to transport the BW objects without replicating the datasource.
    Is it possible?
    Thanks.

    Super Man,
    Whatever the version, you need to replicate the DS. Whenever you make any changes (any technical changes to the DS, i.e. adding new fields, changing the technical attributes of a field, etc.), create a new DS, or install a BC DS, you need to replicate it in BW. When you replicate the DS, it copies the logical view of the DS into SAP BW (meaning the metadata is transferred). Without replicating, it is not possible to set up the data flow.
    All the best.
    Regards,
    Nagesh Ganisetti.
