Replication queries

Hi Friends,
I have some queries; please help me resolve these issues.
1) I have replicated the plant from R/3 to CRM. The plant has been replicated, but the issue is which BP role the plant is treated under: it has taken the standard internal number range grouping, which is basically set up for employee replication. How do I replicate the plant into a particular number range, and under which BP role does it get replicated?
2) I have replicated the contact person from R/3 to CRM. The contact person has also been replicated using the standard internal grouping, but the contact person created in R/3 has its own number range. Where is this number range set in R/3? Can we create a separate account group for contact persons and use the PIDE settings for replication?
Please do the needful; points will be rewarded.
Regards
Felix

Hi Aporva,
Thanks for the reply. However, my question is: when a customer is created in the R/3 system we create a contact person, but the system generates an internal number for this contact person automatically, whereas we set number ranges for customer accounts, not for contact persons.
When I try to replicate the contact person with the object CUSTOMER_REL, the contact person, which has a number in the 23000 series, automatically lands in the standard internal grouping of the business partner, which is not set up for contact persons. How do I restrict this while replicating?
Secondly, a plant moved from R/3 to CRM behaves the same way as the contact person.
How do I solve this issue?
Regards
Felix

Similar Messages

  • Investigating Deadlocks

    Hi everybody,
    I am looking into a deadlock problem in our application. I have a trace, and in this trace I can see that there are several connections. Our application uses 'Transaction Isolation Level Read Uncommitted'; other applications use the Read Committed isolation level.
    So, my first question: is it somehow possible to have the other applications also use the Read Uncommitted level? I found one blog post indicating there is no way to set it server-side; I want to confirm this.
    What should my first steps be in figuring out the problem?
    I included the image of the trace file I got.
    Thanks in advance.
    For every expert, there is an equal and opposite expert. - Becker's Law
    My blog
    My TechNet articles

    I don't know. But this link,
    SQL Server 2008 R2 - Isolation Levels in the Database Engine, says among other things:
    "Database Engine Isolation Levels
    The ISO standard defines the following isolation levels, all of which are supported by the SQL Server Database Engine:
    Read uncommitted (the lowest level where transactions are isolated only enough to ensure that physically corrupt data is not read)
    Read committed (Database Engine default level)
    Repeatable read
    Serializable (the highest level, where transactions are completely isolated from one another)
    Important
    DDL operations and transactions on replicated tables may fail when serializable isolation level is requested. This is because replication queries use hints that may be incompatible with serializable isolation level.
    SQL Server also supports two transaction isolation levels that use row versioning. One is a new implementation of
    read committed isolation, and one is a new transaction isolation level, snapshot.
    When the READ_COMMITTED_SNAPSHOT database option is set ON, read committed isolation uses row versioning to provide statement-level read consistency. Read operations require only SCH-S table level locks and no page or row locks. When the READ_COMMITTED_SNAPSHOT
    database option is set OFF, which is the default setting, read committed isolation behaves as it did in earlier versions of SQL Server. Both implementations meet the ANSI definition of read committed isolation.
    The snapshot isolation level uses row versioning to provide transaction-level read consistency. Read operations acquire no page or row locks; only SCH-S table locks are acquired. When reading rows modified by another transaction, read operations retrieve the version of the row that existed when the transaction started. You can only use snapshot isolation against a database when the ALLOW_SNAPSHOT_ISOLATION database option is set ON. By default, this option is set OFF for user databases.
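    Coming back to the original questions: the isolation level is a per-session (per-connection) setting, so there is no server-side switch that forces other applications onto Read Uncommitted; each connection has to request it. A minimal T-SQL sketch of the options quoted above (MyDb is a hypothetical database name):

        -- Per session: each application/connection must request the level itself.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        -- Per database: let read committed use row versioning so readers do not
        -- block writers (statement-level consistency). Switching this on requires
        -- that no other connections are active in the database at that moment.
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

        -- Per database: allow sessions to request SNAPSHOT isolation
        -- (transaction-level consistency); OFF by default for user databases.
        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    As a first step for the deadlock investigation itself, trace flag 1222 writes deadlock graph details to the SQL Server error log:

        DBCC TRACEON (1222, -1);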
    La vida loca

  • Queries related to Replication concepts

    Hi,
    I have following queries w.r.t. replications
    1) What is a subscriber database, and what is its purpose?
    2) What is the difference between a standby database and a subscriber database?
    3) If I have a standby database, then why do we need a subscriber database?
    4) Can I have more than one standby database?
    Regards,
    Harmeet Kaur

    The A/S pair concept goes as follows:
    - Two 'master' databases as a tightly coupled pair; at any moment one has the 'active' role and the other has the 'standby' role.
    - There can only ever be two master databases within the overall configuration
    - Updates are only allowed at the active, the standby is read only
    - Cache tables (if used) are present in both masters as actual cache groups
    - Cache autorefresh drives into the active master and is replicated to the standby, updates to AWT cache groups happen at the active and are replicated to the standby; the standby propagates them to Oracle
    - Role switch within the pair is easy and can be as a result of a failover or a managed switchover
    - The two masters must be on the same LAN or at least the same network with LAN characteristics and their system clocks must be aligned to within 250 ms
    - Replication between the masters can be asynchronous, return receipt (on request) or return twosafe (on request)
    - Subscribers are optional. You can have 0 to 128 of them.
    - Subscribers are always read only
    - Subscribers can reside on the same LAN as the masters or remotely (e.g. across a WAN)
    - Replication to the subscribers is always asynchronous
    - Cache tables (if used) are present on the subscribers as regular (non-cache) tables
    - While it is possible to convert a subscriber to a master, this will mean that it is no longer part of this A/S pair setup. This is usually only done in a DR scenario where a remote master is being used to instantiate a new A/S pair at the disaster recovery site.
    I hope this clarifies the use of subscribers. They are primarily used for:
    1. Read scale out (reader farm)
    2. Simple DR
    3. Oracle/AWT DR (advanced use case)
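    As a rough illustration of that layout (a minimal sketch, not from the original post; the datastore and host names are invented):

        -- One tightly coupled active/standby master pair, plus an optional
        -- read-only subscriber (e.g. remote, for read scale-out or simple DR).
        CREATE ACTIVE STANDBY PAIR
          ttdata ON "master1", ttdata ON "master2"
          RETURN RECEIPT
          SUBSCRIBER ttdata ON "subscriber1";

    Replication within the pair can then use the return services listed above; the subscriber always receives changes asynchronously.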
    Chris

  • ISE 1.2 CWA with Multiple PSNs - SessionID Replication / Session Expired

    Hi all.
    I have two Policy Services Nodes (PSNs) in an ISE 1.2 deployment running patch 1. We are using wireless MAB and CWA on 5760 Wireless LAN Controllers running v3.3.3.
    We are hitting an issue wherein a client first passes MAB and then gets redirected to a CWA custom portal. The client then receives a Session Expired message. This seems to be related to the fact that CWA is technically a 2-stage authentication (MAB by the WLC and then CWA by the client). Specifically, it seems to happen when the WLC makes its MAB RADIUS access-request to PSN-1 and then the client comes in to PSN-2 to complete the CWA. This issue does not happen when only one PSN is in use and all authentication traffic (both MAB RADIUS and CWA) is directed at a single PSN.
    Clients resolve the FQDN in the redirect URL using public DNS and a public DNS zone file (call it cwa-portal.example.com). cwa-portal.example.com has two A records for the two PSN nodes. DNS is responding to queries using DNS round-robin.
    I have the PSNs configured in a Node Group for session information replication between PSNs, but this doesn't seem to make a difference in behavior.
    So I ask:
    What is the recommended architecture for CWA when using more than one PSN? It seems that you would need to keep the two authentication flows pinned together so that they both hit the same PSN when using more than one PSN in a deployment. A load balancer balancing on the SessionID string comes to mind (both the RADIUS MAB request and the CWA URL contain this unique per-client SessionID), but that seems terribly overbuilt for a seemingly simple problem. On the other hand, it also seems like using a Node Group setup should easily be able to replicate client SessionIDs to all nodes in the deployment so that this isn't an issue. I.e., if the WLC authenticates MAB on PSN-1, then PSN-1 should tell the Node Group about it such that when the client CWA's on PSN-2, PSN-2 doesn't respond with a Session Expired message.
    Is there any Cisco documentation that talks about this?
    Possibly related:
    https://supportforums.cisco.com/discussion/12131531/ise-12-guest-access-session-expired
    Justin

    Tim,
    Thanks for your reply and confirming my suspicion. Hopefully a future version of ISE will provide automated SessionID synchronization among PSNs so that front-end finagling in a multi-PSN environment won't be necessary.
    For anyone else with this issue who for whatever reason can't implement a load balancer, I built an automated EEM applet running on a "watchdog" switch (3750 running 12.2(55)SEE9) using IP SLA tracking that senses when PSN1 is down and then:
    1. modifies an ASA to change its client-facing NAT statement from PSN1 to PSN2
    2. modifies the primary and HA wireless LAN controllers to change their MAB RADIUS AAA server group to use PSN2
    3. reverts the ASA and WLCs to using PSN1 when PSN1 is detected up and running again
    The applet ensures the SessionID authentications stay "glued" together so that both the WLCs and the client hit the same PSN for both stages of authentication. It's failover only, not a load-balancing solution, but it meets our current project's need for an automated HA environment.
    PM me if you want the code. I have a little too much going on ATM to sanitize and post it. :)
    Justin

  • PUMP process is running. Not generating messages in report. No replication

    Hi,
    I am using GoldenGate 11g with Oracle database 11g on IBM AIX server at source and Linux at destination.
    I have created the extract process and the pump process, but the pump process is not responding.
    The report of my PUMP process is below. I can't see any run-time messages being generated in it, and the replication is not happening.
    Please suggest how to track down the issue here and get the replication working.
    GGSCI (FIFLX595) 85> view report IUT01dp
                     Oracle GoldenGate Capture for Oracle
            Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
      AIX 5L, ppc, 64bit (optimized), Oracle 11g on Apr 22 2011 03:25:30
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
                        Starting at 2012-02-01 15:30:08
    Operating System Version:
    AIX
    Version 5, Release 3
    Node: FIFLX595
    Machine: 00C576D24C00
                             soft limit   hard limit
    Address Space Size   :    unlimited    unlimited
    Heap Size            :    unlimited    unlimited
    File Size            :    unlimited    unlimited
    CPU Time             :    unlimited    unlimited
    Process id: 16019716
    Description:
    **            Running with the following parameters                  **
    EXTRACT IUT01DP
    SETENV (ORACLE_SID = "NGPIUT")
    Set environment variable (ORACLE_SID=NGPIUT)
    uSERID ggs_owner, PASSWORD *********
    RMTHOST OFSMUG-VM-87.i-flex.com, MGRPORT 7809
    RMTTRAIL /oracle/GoldenGate/Setup/ggs/dirdat/IUT01/rt
    PASSTHRU
    TABLE IUT01.*;
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE:                         64K
    CACHESIZE:                                8G
    CACHEBUFFERSIZE (soft max):               4M
    CACHEPAGEOUTSIZE (normal):                4M
    PROCESS VM AVAIL FROM OS (min):          16G
    CACHESIZEMAX (strict force to disk):  13.99G
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Database Language and Character Set:
    NLS_LANG environment variable specified has invalid format, default value will be used.
    NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
    NLS_LANGUAGE     = "AMERICAN"
    NLS_TERRITORY    = "AMERICA"
    NLS_CHARACTERSET = "AL32UTF8"
    Warning: your NLS_LANG setting does not match database server language setting.
    Please refer to user manual for more information.
    2012-02-01 15:30:13  INFO    OGG-01226  Socket buffer size set to 27985 (flush size 27985).
    2012-02-01 15:30:13  INFO    OGG-01052  No recovery is required for target file /oracle/GoldenGate/Setup/ggs/dirdat/IUT01/rt000000, at RBA 0 (file not opened).
    2012-02-01 15:30:13  INFO    OGG-01478  Output file /oracle/GoldenGate/Setup/ggs/dirdat/IUT01/rt is using format RELEASE 10.4/11.1.
    **                     Run Time Messages                             **
    ***********************************************************************
    Thanks !!!

    Thanks for your reply.
    The GoldenGate log is as follows :- ( the last few lines )
    2012-02-01 16:15:05  INFO    OGG-00991  Oracle GoldenGate Capture for Oracle, iut01dp.prm:  EXTRACT IUT01DP stopped normally.
    2012-02-01 16:15:08  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start iut01dp.
    2012-02-01 16:15:08  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host 10.180.22.245 (START EXTRACT IUT01DP ).
    2012-02-01 16:15:08  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT IUT01DP starting.
    2012-02-01 16:15:08  INFO    OGG-00992  Oracle GoldenGate Capture for Oracle, iut01dp.prm:  EXTRACT IUT01DP starting.
    2012-02-01 16:15:09  INFO    OGG-00993  Oracle GoldenGate Capture for Oracle, iut01dp.prm:  EXTRACT IUT01DP started.
    2012-02-01 16:15:14  INFO    OGG-01226  Oracle GoldenGate Capture for Oracle, iut01dp.prm:  Socket buffer size set to 27985 (flush size 27985).
    2012-02-01 16:15:14  INFO    OGG-01052  Oracle GoldenGate Capture for Oracle, iut01dp.prm:  No recovery is required for target file /oracle/GoldenGate/Setup/ggs/dirdat/IUT01/rt000000, at RBA 0 (file not opened).
    2012-02-01 16:15:14  INFO    OGG-01478  Oracle GoldenGate Capture for Oracle, iut01dp.prm:  Output file /oracle/GoldenGate/Setup/ggs/dirdat/IUT01/rt is using format RELEASE 10.4/11.1.
    I am not sure how to check whether the extract read the changed data and stored it into the trail files, nor how to check whether the pump process read the trail files or not.
    I can see the lt file "lt000003" getting updated regularly; this is the local trail file for the extract process. The path of this file is:
    /data01/oradata/GoldenGate/dirdat/IUT01
    The RMTTRAIL path for the PUMP process is:
    /oracle/GoldenGate/Setup/ggs/dirdat/IUT01/rt (this path is on the remote machine). This file is not getting updated regularly.
    The detail report for iut01dp is as follows:
    GGSCI (FIFLX595) 119> info iut01dp, detail
    EXTRACT    IUT01DP   Last Started 2012-02-01 16:15   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
    Log Read Checkpoint  File /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000
                         2012-02-01 15:35:31.000000  RBA 0
      Target Extract Trails:
      Remote Trail Name                                Seqno        RBA     Max MB
      /oracle/GoldenGate/Setup/ggs/dirdat/IUT01/rt          0          0         10
      Extract Source                          Begin             End
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 15:35  2012-02-01 15:35
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 15:35  2012-02-01 15:35
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  * Initialized *   2012-02-01 15:35
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 14:31  2012-02-01 14:31
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 14:31  2012-02-01 14:31
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 14:31  2012-02-01 14:31
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  * Initialized *   2012-02-01 14:31
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 12:14  2012-02-01 12:14
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 12:14  2012-02-01 12:14
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 12:14  2012-02-01 12:14
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 12:14  2012-02-01 12:14
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 12:14  2012-02-01 12:14
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  2012-02-01 12:14  2012-02-01 12:14
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  * Initialized *   2012-02-01 12:14
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  * Initialized *   First Record
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  * Initialized *   First Record
      /data01/oradata/GoldenGate/dirdat/dpump/IUT01/lt000000  * Initialized *   First Record
    Current directory    /data01/oradata/GoldenGate
    Report file          /data01/oradata/GoldenGate/dirrpt/IUT01DP.rpt
    Parameter file       /data01/oradata/GoldenGate/dirprm/iut01dp.prm
    Checkpoint file      /data01/oradata/GoldenGate/dirchk/IUT01DP.cpe
    Process file         /data01/oradata/GoldenGate/dirpcs/IUT01DP.pce
    Stdout file          /data01/oradata/GoldenGate/dirout/IUT01DP.out
    Error log            /data01/oradata/GoldenGate/ggserr.log
    P.S.: I am very new to GoldenGate, so please also let me know how I can find the answers to your queries. I hope the data provided above is sufficient; if you need anything more, let me know.
    Please suggest some ideas to get the replication started.
    Thanks.

  • Timesten replication with multiple interfaces sharing the same hostname

    Hi,
    we have in our environment two Sun T2000 nodes, running SunOS 5.10 and hosting a TimesTen server currently at release 7.0.5.9.0, replicated between each other.
    I would like some more information on the behavior of the replication w.r.t. network reliability when using two interfaces associated with the same hostname, the one used to define the replication element.
    As an example, our nodes share these common /etc/hosts entries:
    151.98.227.5 TBMAS10df2 TBMAS10df2-10 TBMAS10df2-ttrep
    151.98.226.5 TBMAS10df2 TBMAS10df2-01 TBMAS10df2-ttrep
    151.98.227.4 TBMAS9df1 TBMAS9df1-10 TBMAS9df1-ttrep
    151.98.226.4 TBMAS9df1 TBMAS9df1-01 TBMAS9df1-ttrep
    with the following element defined for replication:
    ALTER REPLICATION REPLSCHEME
    ADD ELEMENT HDF_GNP_CDPN_1 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS9df1-ttrep"
    SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
    RETURN RECEIPT BY REQUEST
    ADD ELEMENT HDF_GNP_CDPN_2 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS10df2-ttrep"
    SUBSCRIBER tt41data ON "TBMAS9df1-ttrep"
    RETURN RECEIPT BY REQUEST;
    On this subject, moving from 6.0.x to 7.0.x there have been some changes I would like to understand better.
    6.0.x reported in the documentation for Unix systems:
    If a host contains multiple network interfaces (with different IP addresses),
    TimesTen replication tries to connect to the IP addresses in the same order as
    returned by the gethostbyname call. It will try to connect using the first address;
    if a connection cannot be established, it tries the remaining addresses in order
    until a connection is established.
    Now, on Solaris I don't know how to make gethostbyname return more than one interface (the documentation notes at this point:
    If you have multiple network interface cards (NICs), be sure that "multi on" is specified in the /etc/host.conf file. Otherwise, gethostbyname will not return multiple addresses).
    But I understand this is valid for Linux-based systems, not for Solaris.
    Now, if I understand the above correctly, how was 6.0.x able to realize that the first interface in the list (using the same -ttrep hostname) was down and use the other one, if gethostbyname was reporting only a single entry?
    Once upgraded to 7.0.x, we noticed the ADD ROUTE option was added to teach TimesTen how to use different interfaces associated with the same hostname. In our environment we did not include this clause, but the replication still worked fine regardless of which interface we brought down.
    Both of my questions in the end lead to the same doubt about which algorithm TimesTen uses to reach the replicated node w.r.t. the entries in /etc/hosts.
    Looking at the nodes I can see that by default both routes are being used:
    TBMAS10df2:/-# netstat -an|grep "151.98.227."
    151.98.225.104.45312 151.98.227.4.14000 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.47307 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.48230 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.46050 151.98.227.4.14005 1049792 0 1049800 0 ESTABLISHED
    TBMAS10df2:/-# netstat -an|grep "151.98.226."
    151.98.226.5.14000 151.98.226.4.47699 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.14005 151.98.226.4.47308 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.44949 151.98.226.4.14005 1049792 0 1049800 0 ESTABLISHED
    I tried to trace with ttTraceMon, but once I brought down one of the interfaces I did not see any reaction on either node. If you have some info it would be really appreciated!
    Cheers,
    Mike

    Hi Chris,
    Thanks for the reply; I have a few more queries on this.
    1. Using the ROUTE clause we can list multiple IPs with priority levels set, so that if the highest-priority IP in the ROUTE clause is not active, replication falls back to the IP set at priority 2 (see the syntax sketch below). But can't we use the ROUTE clause to use the multiple route IPs for replication simultaneously?
    2. Can we execute multiple schemes for the same DSN and replication scheme but with different replication route IPs?
    For example:
    At present on my system, I have a replication scheme running for a specific DSN with a standalone master-subscriber mechanism, with a specific route IP through VLAN-xxx used for replication.
    Now I want to create and start another replication scheme for the same DSN and replication mechanism, with a different VLAN-yyy route IP to be used for replication in parallel with the existing scheme, without making any changes to the pre-existing replication scheme.
    For the above scenarios, will there be any specific changes with respect to the different replication scheme mechanisms, i.e., active standby and standalone master-subscriber, etc.?
    If so, what are the steps? E.g., how do we need to change the existing scheme?
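    For reference, a sketch of the ADD ROUTE syntax in question, reusing the scheme name, store names, and addresses from the original post (an illustration only, not a tested configuration); as far as I know the routes are tried in priority order, with the priority 2 pair used only as a fallback:

        -- The priority 1 interface pair is tried first; replication falls
        -- back to the priority 2 pair if that route is unreachable.
        ALTER REPLICATION REPLSCHEME
          ADD ROUTE MASTER tt41data ON "TBMAS9df1-ttrep"
                    SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
            MASTERIP "151.98.227.4" PRIORITY 1
            SUBSCRIBERIP "151.98.227.5" PRIORITY 1
            MASTERIP "151.98.226.4" PRIORITY 2
            SUBSCRIBERIP "151.98.226.5" PRIORITY 2;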
    Thanks in advance.
    Naveen

  • Error During Replication of Data Source in BI 7.0 SAPKW70017

    Hi Experts
    We have recently imported Support Package SAPKW70017 into our quality system. The quality system client settings are:
    - No changes allowed
    - No changes to Repository and cross-client Customizing objects
    After the upgrade, whenever we try to replicate DataSources we get the error "Changes to Repository or cross-client Customizing not permitted".
    This error was certainly not there before updating to SAPKW70017.
    Interestingly, if we change the client settings to
    - Automatic recording of changes
    - Changes to Repository and cross-client Customizing permitted
    and try to replicate a DataSource, the system does not prompt for any change request.
    To overcome this, we tried to import the DataSource in BW via a change request from development. The replication completes successfully, but the time stamp does not match the source, and we end up with the error "DataSource 0FI_AR_4 has to be replicated (time stamp, see long text)" in the scheduler.
    Please advise.
    Thanks & Regards
    Sachchida Nand Dubey

    Hi,
    I am looking forward to implementing SAPKW70017, and I need to know the cons of SP17. What was the reason that made you implement SP17: just to have an updated SP version, or did you face any problems and decide to update? What was the previous SP you had?
    We have problems with the F4 search help when running queries. SAP asked us to implement the latest SP, which is SP17, but we wanted to make sure what the cons would be before we do it; we are on SP15 in BI 7.0.
    Please update the post on this.

  • Replication of Sales Order from SAP ECC to SAP CRM

    Hi,
    The scenario I am working on is briefly as under:
    Telesales will create sales orders directly in ECC. The sales orders created in ECC will be replicated to CRM. I have some queries and request guidance:
    1. The same sales order type will be used in ECC both for orders created by telesales and for regular ECC sales orders. I need to replicate only the sales orders created by telesales. Please advise how to achieve this requirement.
    2. There are many 'Z' partner functions in ECC sales orders, some mandatory and others optional. During replication, if the 'Z' partner functions are not created and maintained in CRM, what will the impact be? Will the sales order replicate or not? What is the impact on sales order replication when a mandatory or an optional partner function is not available in CRM?
    3. What is the impact of 'Y' and 'Z' condition types (optional) not being maintained in / imported from ECC to CRM in sales orders?
    Regards,
    Rajesh

    1. In R3AC1, set a filter according to your needs for the object SALESORDER. There are quite a few fields on which you can set filters.
    2. Partner functions will not affect replication. Replication is stricter from CRM to ERP than vice versa. Also, you said that the optional partner functions will be missing, so there is no reason problems should occur. But of course the standard sold-to party and ship-to party should be provided for pricing reasons.
    3. This should present no problem, but pricing procedures must be synchronized between both systems.

  • CRM Service: Demand replication in R/3

    Hi,
    This is the case of a firm engaged in carrying out after-market service for various OEM products through third-party agents (channel partners). Replacement parts required for service by these channel partners are supplied by the parent company. I have a few queries w.r.t. replication between CRM and R/3.
    A service order is created based on an end-customer request. To perform the requested service, replacement components are required. The identified components are then specified on the 'Spare Parts' tab page. Inventory is maintained in SAP R/3 in a storage location assigned to each channel partner.
    (a) In what form does the demand for components (identified under Spare Parts) get replicated to the respective storage location in R/3, so that if there is any shortfall in availability, planning can trigger procurement proposals?
    (b) If an alternate component is available in place of the requested one, how can we issue it? Where exactly in CRM do we maintain alternate parts at a component level for a product (similar to BOM maintenance in R/3)? Or,
    (c) How does the BOM in R/3 get replicated to a CRM relationship along with the alternate part grouping?
    (d) Does the service confirmation (created as a follow-up document to the service order) post the goods issue automatically in the background?
    Thanks.
    Raj

    Hi Bobby,
    It is very much possible to upload CRM service orders to SAP R/3 or SAP ECC via the standard CRM middleware.
    Please refer to the SAP help document below for the complete list of configuration activities for service order upload.
    http://help.sap.com/saphelp_crm50/helpdata/en/f0/5d583c65399965e10000000a114084/frameset.htm
    Do not forget to reward if it helps.
    Regards,
    Paul Kondaveeti

  • Replication of Pricing Conditions between ECC and CRM

    Hi All
    I have a couple of queries regarding the replication of pricing information between ECC and CRM. We maintain the filter settings for pricing procedures, access sequences, and condition types in the object DNL_CUST_CND_PR, and accordingly these are replicated from ECC to CRM. My questions:
    1. If a pricing procedure is created in CRM and needs to be replicated to ECC, can we achieve this?
    2. Would DELTA be active on these objects, in the sense that if changes are made to a pricing procedure in ECC, would they flow automatically to CRM?
    Thanks in advance
    Best Regards,
    Ram Sushanth

    Hi Chandra and Swaroop
    Thanks a lot for your response.
    From your replies, this is now my understanding (please correct me if I am wrong):
    1. Replication of pricing procedures from CRM to ECC is possible.
    It would be of great help if you could elaborate on the procedure to do so, or provide pointers to documentation in this regard.
    Because, for replication from ECC to CRM, we perform an initial load on the object DNL_CUST_CND_PR or DNL_CUST_CNDALL. What is the object that we need to execute for replication from CRM to ECC?
    2. DELTA would be active only on condition records, not on condition types and pricing procedures, and if any pricing procedure is changed in ECC, we would need to perform the initial load again.
    Thanks in advance
    Best Regards
    Ram Sushanth

  • Replication of Z table from CRM to R/3 - No mBDoc Created

    I need to transfer the contents of a bespoke customer table from CRM into R/3, off the back of delta changes being made to the CRM table. To help us achieve this we have performed the following steps so far:
    1. Created the customer table in both systems.
    2. Created a new messaging BDoc in CRM and linked it to the R/3 Site Type.
    3. Created a new mapping module in CRM that takes the data from the BDoc and maps it to the BAPI structure.
    4. Created a new Adapter Object that links to my BDoc, contains the new customer table as the source table in CRM and contains the mapping Module mentioned above. 
    5. Created a new Replication Object based on my new BDoc.
    6. Created a new Publication and assigned it to the Replication Object.
    7. Created a new Subscription and assigned to the Publication and Replication Object. Also assigned it to my R/3 site.
    On the R/3 side we have created a mapping module to map the data from the BAPI structure into the equivalent R/3 table. We also have an entry in table CRMSUBTAB.
    However, when I insert an entry into the customer table in CRM, no BDoc is created. In fact, I cannot see anything at all in the system that indicates it has even tried to capture the change and invoke the middleware process.
    What am I doing wrong?
    Do I need anything else (some sort of delta program?) that picks up the parameters from the table update and feeds them into my process? The literature that I have found (and it is not much) does not mention anything like this, though.
    Any help would be greatly appreciated, as this is now a very urgent requirement.
    Regards
    Ian
    Edited by: IAN HAWLEY on Aug 21, 2008 9:42 AM

    Ian,
    I did expect the follow-up questions. Check my explanation below; I hope it will answer your queries:
    1. I assume all of the activities performed to date are still valid to supplement your solution, e.g. the BDoc, Replication Object, Publication and Subscription details?
    2. The R/3 to CRM Mapping Module. Is this required to allow messages to be sent back from R/3 to update the BDoc, e.g. a sort of validation to prove that the posting has completed ok?
    FM ZMAP_BAPIMTCS_TO_MBDOC in CRM maps the BAPIMTCS-format data and builds the BDoc. This BAPIMTCS format is a temporary one and is not the final data format that is taken to ECC. This function module also takes care of receiving the response message from ECC once the BDoc data reaches ECC and is updated there. If any error occurs during the update, it is captured in the error table of the BDoc and the status of the BDoc is set to 'Error'. If no error occurs, the status of the BDoc is set to 'Confirmed'.
    3. The Extractor Module in CRM. Does this get the data out of the table and will it work for deltas?
    Yes, it should work for delta too. The delta load makes use of the same program and flow as the initial load (SMOF_DOWNLOAD).
    4. CRMSUBTAB in CRM. I knew that we populated this in R/3; I did not realise we would need it in CRM, as I assumed it was R/3-specific.
    5. You list the sequence of FM calls at the end. I was confused by the order. As we are initiating data to be sent from CRM to R/3, should some of these be in the reverse order? E.g., ZMAP_MBDOC_TO_BAPIMTCS would be called before ZMAP_BAPIMTCS_TO_MBDOC, as it would pass data into the BDoc to send it to R/3 before we then received an update message back.
    Step 1: Z_EXTRACT_MODULE will be called (it calls ZPICK_DATA_FROM_CRM). This function module calls the standard function module CRS_SEND_TO_SERVER, which triggers the other function modules.
    Step 2: Create function module ZDATA_TO_BAPIMTCS (I missed mentioning this earlier) in CRM, to map the data in the final internal table to BAPIMTCS format. This format is temporary and will be used to build the BDoc data.
    Step 3: Create function module ZMAP_BAPIMTCS_TO_MBDOC in CRM, to map the BAPIMTCS-format data and build the BDoc. This BAPIMTCS format is a temporary one and is not the final data format that is taken to ECC. This function module also takes care of receiving the response message from ECC once the BDoc data reaches ECC and is updated there. If any error occurs during the update, it is captured in the error table of the BDoc and the status of the BDoc is set to 'Error'. If no error occurs, the status of the BDoc is set to 'Confirmed'.
    Step 4: Create function module ZMAP_MBDOC_TO_BAPIMTCS in CRM, to build the final BAPIMTCS structure from the BDoc. This BAPIMTCS is the final data structure that goes to ECC. The table name, object key, and relation key relevant for the BAPIMTCS are filled in this function module.
    Step 5: Create function module Z_LOAD_PROXY_FINAL in ECC, to receive the data from CRM. The BAPIMTCS data is received, mapped to local internal tables, and then written to the custom tables through the function module Z_UPDATE_ECC. Any errors are captured in this function module and returned to CRM using the standard function module CRS_SEND_TO_SERVER.
    To reduce the load on the interface, at the final stage it was decided to fetch the data completely in ECC, so the incoming data from CRM is ignored and the data is fetched entirely from ECC tables.
    6. Is there a test FM available for the extract, e.g. is CRM_SAMPLE_EXTRACT_MODULE the one to copy?
    No, you have to develop this extractor FM, say ZPICK_DATA_FROM_CRM, and it should be called in Z_EXTRACT_MODULE.
    Apologies for any spelling errors, as I too am rushing to a meeting.
    Update me on the status.
    Bobby
    Edited by: Bobby on Aug 22, 2008 2:13 PM

  • Queries related to MAM enhancements

    Hi,
    We require more fields than are offered by the standard MAM application.
    Please let me know the procedure for adding them.
    My understanding is
    Step 1: Add the fields in the BAdIs so that they can be pulled using the standard BAPI wrappers.
    Step 2: Regenerate the SyncBO after modifying the BAPI wrappers, and edit the SyncBOs to download the fields to the mobile device. We would need a license key from SAP to do this.
    Step 3: Add set and get functions for the newly added fields in the Java files.
    I have a few queries:
    1. Please let me know if the above understanding is correct.
    2. After modifying the SyncBOs, can I add get and set functions in the impl Java files without importing the XML file from the SyncBOs? I am using the standard MAM 2.5 app, in which the app form belongs to the non-generic SyncBO access code.
    3. In the order management module, when I click on the order management link, it displays the list of orders. Can I change this functionality to introduce another JSP to scan the equipment? Is such a customization possible?
    Regards
    Raja Sekhar

  • Queries related to MAM configuration and MI synchronization

    Hi,
    I have some queries about synchronization in MI and MAM:
    1. When I synchronize with the MI server, the requests go into the I-Waiting worklists in the worklist monitor. What could be the reason for this?
    2. I need clarification on the exact purpose of the variants used in the master data selections of the MAM configuration: is it to reduce the replicator database or to reduce the handheld database?
    I have created one variant each for FLs, equipment, and measurement points:
    A. The FL variant downloads all the necessary FLs to the MI server. We haven't assigned the variant to the user, so as to download only the FLs related to work orders.
    B. The equipment variant doesn't download any equipment, because we are not using any equipment.
    C. The measurement point variant downloads the measurement points related to the FLs mentioned in the FL variant.
    In the above scenario, we are able to download work orders and functional locations, but we are not able to download measurement points. In our scenario we don't have equipment, and measurement points are related to FLs directly. The worklist monitor displays "No data from R/3 downloaded".
    Is this due to the MAM configuration?
    Can I debug the synchronization process on the MI server, so as to simulate the events happening when a synchronization is requested from the MI client?
    Thanks
    Raj

    Hi Raja,
    Question 1: There are two possible reasons for I-Waiting: a breakpoint in the backend BAPI, which then waits for action, or one of the SyncBOs is not activated (a very common reason; check merep_pd).
    Question 2: "Master data selection" is for replication. Technical objects on the device come from three sources: when they are referenced at order and notification header level, when they are in the order object list (and this is activated on the backend), and as standalone TOs configured in "user-dependent data".
    Normally, if an order references an FL, the FL should come to the device, and if this FL references a measurement point, it should come to the device as well. If you check the logs for the FL you will probably see a "cascading error" for the measurement point. A few things to check: this measurement point should be returned by the BAPI MAM041_GETLIST, so that it is supposed to be replicated. If it is in the list, check the BAPI MAM041_GETDETAIL to see that it does not return an error. If there is no error, check the DB on the middleware to see that the point is indeed replicated.
    Regards,
    Larissa Limarova

  • Query related to replication method supported by ORACLE in 10g...

    Hi All,
    I have a few questions related to Oracle replication methods. I have two databases on different machines, and I want to copy at schema level from the source to the target database.
    For this I have a few queries related to the replication methods, as stated below:
    1. I have two options: one is materialized views and the other is Streams.
    2. If I go for materialized views, what is their advantage compared to Streams?
    3. If I go for Streams replication, what are its benefits?
    4. For Streams replication I have read that it requires setting "global_names=true"; can we set up Streams replication without this?
    Please suggest the optimal solution.
    Thanks....

    Hello,
    Regarding question 4 ("can we set up Streams replication without global_names=true?"): when you manage distributed databases it is always recommended to set global_names=true anyway. That way you enforce that a dblink has the same name as the DB it connects to.
    Also, for Streams both the source and target databases should be in ARCHIVELOG mode.
    Moreover, Streams replication allows you to replicate DDL changes (for instance, adding a column to a table) and has many more features than materialized views. If you want to share data among several databases, Streams is an interesting technology.
    Materialized views are more suitable for a DSS database.
    Hope this helps.
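    As a small sketch of those prerequisites (the link name and user are invented; run the statements as a DBA):

        -- Enforce that a dblink carries the same name as the DB it connects to:
        ALTER SYSTEM SET global_names = TRUE SCOPE = BOTH;

        -- Both source and target databases should be in ARCHIVELOG mode; verify:
        SELECT log_mode FROM v$database;

        -- With global_names=true, the dblink name must then match the remote
        -- database's global name, e.g.:
        CREATE DATABASE LINK orcl_target
          CONNECT TO strmadmin IDENTIFIED BY password
          USING 'orcl_target';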
    Best regards,
    Jean-Valentin

  • Goldengate replication performance

    Hi,
    This is about GoldenGate replication performance.
    We have configured GoldenGate replication between an OLTP and a reporting system, and the business peak occurs for only one hour.
    At that time I can see a lag on the replicat side of around 10-15 minutes.
    The rest of the time there is no lag.
    I reviewed the AWR report on the target, and I can see all the replicat processes executing with an elapsed time of 0.02 or 0.01 seconds.
    However, I can see a major wait event, db file sequential read, at 65%-71% of DB time, and it has the maximum waits; apart from this there are no major wait events contributing to % DB time (21% is DB CPU, which I believe is normal).
    I can also see a few queries being run at that peak time, since it is a reporting server; they are SELECT queries which execute for more than 15-20 minutes, especially on the high-transaction table.
    Can you please advise where I should look to resolve the lag during the peak hours?
    I believe the SELECT operations / that wait event are causing the lag during those peak hours. Am I correct?
    Can you please advise from your experience?
    Thanks
    Surendran

    Hi Bobby,
    Thanks for your response.
    Please find my responses below.
    Environment details:
    1. Source and target DB: Oracle Database Enterprise Edition v11.2.0.4
    2. GoldenGate version: 12.1.2.0.0 (the same GoldenGate version is installed on source and target)
    3. Classic capture (CDC) is configured between the source and target GoldenGate.
    Queries and responses:
    Is there any long-running transaction on the extract side?
    No long-running transaction is seen; I can see a huge volume of transactions getting populated (over 0.3M records in 30 minutes).
    Target environment information:
    High-transaction DML activity is seen on only 3 tables.
    As the target is a reporting environment, I can see many SQLs querying those 3 high-transaction tables, and I can see the db file sequential read wait event spiking up to 65%-71%.
    I can also see in the AWR report that the GG sessions are executing the insert/update transactions in less than 0.01/0.02 sec.
    I had set the report interval to every 10 minutes; I will update it to 1 minute and share the report.
    My query is: is the SELECT activity executed on those high-transaction tables on the reporting server during that high-transaction window causing the bottleneck and the lag during those peak hours?
    Or do I need to look at other areas?
    If you have any further comments/advice based on the above information, please share.
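    One way to confirm that theory is to sample what the sessions are waiting on during the peak hour. A hedged sketch (ASH requires the Diagnostics Pack, and grouping by MODULE assumes the replicat sessions are identifiable that way):

        -- Sample the last hour of active-session history, grouped by module
        -- and event, to see whether replicat time is dominated by
        -- db file sequential read alongside the reporting SELECTs.
        SELECT module, event, COUNT(*) AS samples
        FROM   v$active_session_history
        WHERE  sample_time > SYSDATE - 1/24
        GROUP  BY module, event
        ORDER  BY samples DESC;

    If the replicat rows are dominated by db file sequential read on those three tables, the long reporting SELECTs competing for the same I/O are a plausible cause of the peak-hour lag.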
    Thanks
    Surendran.
