InMage VMware replication problems

I am testing InMage to replicate VMware VMs to another site. I have set up a protection plan in vContinuum for a single VM and seen it successfully create the replica VM in the secondary site's vCenter. When I attempt to power it on, I get the following error:
"An error was received from the ESX host while powering on VM xxxxx
Cannot open the disk '/vmfs/volumes/542d234c-1b6719d0-cbc3-0025b5010a1a/xxxxx/xxxxx.vmdk' or one of the snapshot disks it depends on.
Failed to lock the file"
I'm not sure whether file locking of the VMDKs is expected, or whether I need to stop the replication process before I can power on the replica. Is that necessary, and how would I do it?
I have also tried controlling the recovery to the secondary site in vContinuum, but when I choose the Recover option and run a readiness check I get an error: "A common consistency tag is not available to perform recovery operation. Use Latest Time option to recover".
When I choose the Latest Time option, it says: "This VM can't be recovered. Make sure replications are in progress and volumes are in data mode. Contact customer support for further assistance."
The replication appears to have worked with no errors, and the Monitor option shows all green ticks for the protection plan, but I cannot use it. What am I missing?
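For the "Failed to lock the file" error itself, a common diagnostic on ESXi is to ask vmkfstools which host owns the lock on the replica's VMDK. This is a hedged sketch, not InMage-specific guidance: the lock text below is a fabricated sample of what `vmkfstools -D /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk` prints, and only the parsing runs here.

```shell
# Hedged sketch: on a real ESXi host (SSH enabled) you would run
#   vmkfstools -D /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk
# and inspect the "owner" field of the lock. Here we parse a made-up
# sample of that output to pull out the owning host's MAC address.
sample_output='Lock [type 10c00001 offset 54009856 v 21, hb offset 3735552
gen 4, mode 1, owner 45feb537-9c52009b-e812-0025b5010a1a mtime 1286095]'

# The last 12 hex digits of the owner UUID are the MAC of the host
# holding the lock (all zeros would mean the lock is held locally).
owner_mac=$(printf '%s\n' "$sample_output" \
  | sed -n 's/.*owner [0-9a-f-]*-\([0-9a-f]\{12\}\).*/\1/p')
echo "lock owner MAC: $owner_mac"
```

Match the MAC against your ESXi hosts' management NICs; if the host running the replication target holds the lock, that would suggest replication keeps the replica's disks locked and must be stopped, or a proper recovery performed, before the replica can be powered on.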

A CAS and eight primaries? Just being curious: how many clients are you managing in total?
See
http://blogs.technet.com/b/umairkhan/archive/2014/02/18/configmgr-2012-data-replication-service-drs-unleashed.aspx and
http://blogs.technet.com/b/umairkhan/archive/2014/03/25/configmgr-2012-drs-troubleshooting-faqs.aspx for troubleshooting.
Torsten Meringer | http://www.mssccmfaq.de

Similar Messages

  • How can I solve this replication problem between TT and Oracle

    Hi
    I have an application that uses an AWT cache group to implement replication between TT (7.0.6.7) and Oracle (10g),
    but I encounter this problem:
    16:16:50.01 Err : REP: 2302682: ABM_BAL_WH:meta.c(4588): TT5259: Failed to store Awt runtime information for datastore /abm_wh/abm_bal_ttdata/abm_bal_wh on Oracle.
    16:16:50.02 Err : REP: 2302682: ABM_BAL_WH:meta.c(4588): TT5107: TT5107: Oracle(OCI) error in OCIStmtExecute(): ORA-08177: can't serialize access for this transaction rc = -1 -- file "bdbStmt.c", lineno 3726, procedure "ttBDbStmtExecute()"
    16:16:50.02 Err : REP: 2302682: ABM_BAL_WH:receiver.c(5612): TT16187: Transaction 1316077016/357692526; Error: transient 0, permanent 1
    The isolation level of my data store is read-committed, and the sys.ODBC.INI file also sets Isolation=1 (read-committed mode),
    so I still wonder why I get the ORA-08177 error.
    How can I solve this replication problem?
    Thank you.

    I suspect this is failing on an UPDATE to the tt_03_reppeers table on Oracle. I would guess the TT repagent has to temporarily use serializable isolation when updating this table. Do you have any other datastores with AWT cache groups propagating into the same Oracle database? Or can you identify whether some other process is preventing the repagent from using serializable isolation? If you search for ORA-08177, there are documented ways to narrow down what's causing the contention.
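    To narrow down the contention, one hedged approach is to ask Oracle which sessions hold locks on the replication-peer metadata table while the error is occurring. A sketch, assuming the table is named TT_03_REPPEERS on the Oracle side and that you can run the query as a DBA (the script file name is illustrative):

```shell
# Hypothetical diagnostic: list Oracle sessions holding locks on the
# TimesTen AWT metadata table (name assumed). Run it via, e.g.:
#   sqlplus -s system/... @find_reppeers_locks.sql
cat > find_reppeers_locks.sql <<'SQL'
SELECT s.sid, s.serial#, s.program, o.object_name
FROM   v$lock l
JOIN   v$session s   ON s.sid = l.sid
JOIN   dba_objects o ON o.object_id = l.id1
WHERE  o.object_name LIKE 'TT\_03\_REPPEERS%' ESCAPE '\';
SQL
echo "wrote find_reppeers_locks.sql"
```

    Any second repagent (or other process) showing up against that table while your datastore's repagent is applying would be a candidate for the serialization conflict.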

  • Session in-memory replication problem

    Hi,
    I am running into some cluster HttpSession replication problems. Here is
    the scenario where replication fails (all servers mentioned here are
    part of a cluster):
    1a - 2 WebLogic servers (A & B) are running - no users logged in.
    2a - A user logs in and a new session on server A is created.
    3a - After several interactions, server A is killed.
    4a - After the user makes a subsequent request, WebLogic correctly fails over
    to server B.
    Problem: Not all of the session data is replicated. The authentication info
    seems to be replicated correctly, but there are some collections in the
    session on server A that did not make it to the session on server B.
    The interesting part is this: if there is only one server, A, running to
    begin with, and a user interacts with it for a while, and only then is
    server B started, then when server A dies after B starts up, the entire
    session (which is exactly the same as in the failing scenario) is
    correctly replicated to B, including the collections that were missing in
    the failing scenario.
    How can this be possible?
    Thanks for any info on this one - it really puzzles me.
    Andrew

    Yes, you are on the right track. Every time you modify the object you should call
    putValue. We will make it more clear in the docs.
    - Prasad
    Andrzej Porebski wrote:
    > Everything is Serializable. I get no exceptions. I did however read some old
    > posts regarding session replication, and I hope I found an answer. It
    > basically seems to boil down to what triggers session sync-up between
    > servers. In my case, I store an object in the session and later on
    > manipulate that object directly without session involvement, and the
    > results of those manipulations are not replicated - no wonder, if
    > HttpSession's putValue method is the only trigger.
    > Am I on the right track here?
    >
    > -Andrew
    >
    > Prasad Peddada wrote:
    >
    > > Do you have non-serializable data by any chance?
    > >
    > > - Prasad

  • DFS Replication Problem

    Hi Friends,
    I have a Windows Server 2003 domain at two locations. A few months back it was
    replicating data and working fine, but now I am unable to see the replicated
    data, so I think there is a replication problem.
    I am getting these event ID errors on the server:
    Event IDs: 5002, 4202, 1925, 13568
    Please help me.
    Thanks,
    Madhukar

    The 4202 means the staging quota size is too small.
    Run these two PowerShell commands to determine the correct staging quota size:
    $big32 = Get-ChildItem DriveLetter:\FolderName -recurse | Sort-Object length -descending | select-object -first 32 | measure-object -property length -sum
    $big32.sum /1gb
    Take the resulting number, round it up to the nearest whole integer, multiply it by 1024, and enter that number on the Staging tab of the properties of the replicated folder in DFS Management.
    More info here:
    http://blogs.technet.com/b/askds/archive/2007/10/05/top-10-common-causes-of-slow-replication-with-dfsr.aspx
    Run this command to tell you the status of Replication:
    wmic /namespace:\\root\microsoftdfs path DfsrReplicatedFolderInfo get replicatedFolderName, State
    0: Uninitialized
    1: Initialized
    2: Initial Sync
    3: Auto Recovery
    4: Normal
    5: In Error
    Let us know how that goes.
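    The same staging-quota calculation can be sketched in POSIX shell (a hedged equivalent of the PowerShell above; the folder path is a placeholder, not a real DFSR path):

```shell
# Sketch: sum the 32 largest files under the replicated folder, convert
# to GB, round up to a whole integer, and multiply by 1024 to get the
# staging quota in MB. FOLDER is an assumption; point it at your folder.
FOLDER="${FOLDER:-/srv/replicated}"
sum_bytes=$(find "$FOLDER" -type f -printf '%s\n' 2>/dev/null \
  | sort -rn | head -n 32 | awk '{s+=$1} END {print s+0}')
# Round the GB figure up to the next whole integer, then express in MB.
quota_mb=$(awk -v b="$sum_bytes" 'BEGIN {
  gb = b / (1024*1024*1024);
  g  = int(gb); if (gb > g) g++;
  print g * 1024 }')
echo "suggested staging quota: ${quota_mb} MB"
```

    The rounding mirrors the "round it up to the nearest whole integer, multiply it by 1024" step above.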

  • In memory replication problems when I bring up a new server

    I've got in-memory replication set up for 6.1. It works fine if I have 2 servers
    up and 1 goes down.
    However, if I have 1 server up and bring a second server up, the sessions blow
    out.
    E.g. I've got server A and server B.
    Both are up, both have sessions. As new sessions come in, they are replicated over
    to the other server.
    Now I bring server B down. All sessions on B fail over to A.
    So far so good.
    However, when I bring the downed server back up, some of the sessions fail as soon
    as it is back up.
    Is this a configuration issue? Is this a known problem?
    This worked fine in WebLogic 5.1. In 5.1, when I brought an instance back up,
    everything worked fine.
              

    It turns out the problem was caused by using an old version of the Apache plugin.
    This problem occurred while using the 5.1 Apache plugin with WLS 6.1.
    Once we realized we were using the wrong plugin and switched to the 6.1 plugin, the
    problem went away.
              

  • JNDI replication problems in WebLogic cluster.

    I need to implement a replicated property in the cluster: each server can
    update it, and the new value should be available to the whole cluster. I tried to
    bind this property in JNDI and ran into several problems:
    1) On each rebinding I got error messages:
    <Nov 12, 2001 8:30:08 PM PST> <Error> <Cluster> <Conflict start: You tried
    to bind an object under the name example.TestName in the jndi tree. The
    object you have bound java.util.Date from 10.1.8.114 is non clusterable and
    you have tried to bind more than once from two or more servers. Such objects
    can only deployed from one server.>
    <Nov 12, 2001 8:30:18 PM PST> <Error> <Cluster> <Conflict Resolved:
    example.TestName for the object java.util.Date from 10.1.9.250 under the
    bind name example.TestName in the jndi tree.>
    As I understand this is a designed behavior for non-RMI objects. Am I
    correct?
    2) Replication is still done, but I get random results: I bind the object on
    server 1 and get it from server 2, and they are not always the same, even with
    a delay of several seconds between the operations (tested with 0-10 sec.); while
    a lookup returns the old version after 10 sec., a second attempt without any delay
    can return the correct result.
    Any ideas how to ensure correct replication? I need lookup to return the
    object I bound on a different server.
    3) Even when lookup returns correct result, Admin Console in
    Server->Monitoring-> JNDI Tree shows an error for bound object:
    Exception
    javax.naming.NameNotFoundException: Unable to resolve example. Resolved: ''
    Unresolved:'example' ; remaining name ''
    My configuration: admin server + 3 managed servers in a cluster.
    JNDI bind and lookup is done from a stateless session bean. The session bean is
    clusterable and deployed to all servers in the cluster. Clients invoke the session
    methods through the t3 protocol directly on the servers.
    Thank you for any help.

    It is not a good idea to use JNDI to replicate application data. Did you consider
    using JMS for this? Or JavaGroups (http://sourceforge.net/projects/javagroups/) -
    there is an example of a distributed hashtable in the examples.
    --
    Dimitri

  • VMware recording problem

    A company I work with has problems running Captivate in VMware Player. The VMware Player is version 3.1.1, running on a Windows 7 64-bit machine. Captivate is installed both in the VMware Player and on the regular PC.
    The problem is with screen capturing. When doing a screen capture (of Microsoft Office programs), Captivate automatically switches to Full Motion Recording even though the setting is set to Automatic Recording. It is as if Captivate thinks it needs to switch to FMR. In most cases this happens right away when starting a recording. When doing the same recording on the regular PC there are no problems, but in the VMware Player there are.
    Does anyone have experience with Captivate screen capturing in a virtual machine, and has anyone perhaps run into similar problems?

    Hi Manish,
    It is Captivate 5.
    Best regds

  • Apache + 2 Tomcats session replication problem.

    Greetings everyone.
    Before stating the problem, let me explain how my environment is set.
    I have two machines. One (PC1) running Apache (HTTP server 2.0.58)
    and one instance of Tomcat (5.0.28) and another machine (PC2) with
    another instance of Tomcat(5.0.28).
    The Apache server
    It is configured to handle static content, to redirect dynamic content to a
    Tomcat instance through AJP 1.3 connector.
    This process is done through the mod_jk and the workers.properties
    The workers.properties file is configured with sticky_session = True,
    so requests for a given SESSION_ID are routed to the Tomcat the session was
    first assigned to.
    The workers.properties file is configured with sticky_session_force = True,
    so if the Tomcat the SESSION_ID was assigned to is not available, the server
    answers with a 500 error.
    The Tomcat servers
    Both have only the AJP 1.3 connector enabled
    Both have the Cluster tag in the server.xml file uncommented,
    with the useDirtyFlag attribute set to false, so as to allow session
    replication between the Tomcats.
    The workers.properties file
    workers.apache_log=C:/Apache2/logs
    workers.tomcat_home=C:/Tomcat5
    workers.java_home=C:/j2sdk1.4.2_13
    ps=/
    #Defining workers -----------------------------
    worker.list=balancer,jkstatus
    #Defining balancer ---------------------------
    worker.balancer.type=lb
    worker.balancer.balance_workers=tel1, tel2
    worker.balancer.sticky_session=True
    worker.balancer.sticky_session_force=True
    worker.balancer.method=B
    worker.balancer.lock=O
    #Defining status -----------------------------
    worker.jkstatus.type=status
    worker.jkstatus.css=/jk_status/StatusCSS.css
    #Workers properties ---------------------------
    worker.tel1.type=ajp13
    worker.tel1.port=8009
    worker.tel1.host=127.0.0.1
    worker.tel1.lbfactor=1
    worker.tel1.socket_keepalive=False
    worker.tel1.socket_timeout=30
    worker.tel1.retries=20
    worker.tel1.connection_pool_timeout = 20
    #worker.tel1.redirect=tel2
    worker.tel1.disabled=False
    worker.tel2.type=ajp13
    worker.tel2.port=8009
    worker.tel2.host=199.147.52.181
    worker.tel2.lbfactor=1
    worker.tel2.socket_keepalive=False
    worker.tel2.socket_timeout=30
    worker.tel2.retries=20
    worker.tel2.connection_pool_timeout = 20
    #worker.tel2.redirect=tel1
    worker.tel2.disabled=False
    THE PROBLEM
    I open a browser on the jk-status page to see how the Tomcat instances are
    doing, and both are working fine: Stat -> OK. Now, as the
    load-balancing factor is 1 on both Tomcats, an even alternating session
    distribution is expected.
    While this browser is open to keep an eye on the status, I open a new
    browser (B1) to connect to my web application. Apache answers
    correctly and gives me a SESSION_ID for Tomcat instance 1 [both
    instances are OK]. If I do a simple refresh, my SESSION_ID is still the
    same, so I'm assigned to Tomcat instance 1, but this time I get an
    ERROR 503 - Service unavailable, even though, looking at the status of the
    Tomcat instances, both are still OK; neither is down. And it
    keeps throwing this error for as many refreshes as I do.
    Now, I open a new browser (B2) and do the same process as before;
    as expected, Apache now gives me a SESSION_ID for Tomcat instance 2.
    Repeating the same refreshing process, the error is thrown again, but still, on
    the jk-status page, both instances are fine.
    Without closing these windows, I try another refresh on B1, and
    even though jk-status says both Tomcat instances are OK, the error
    is still thrown. I open a third browser (B3), and Apache again correctly
    gives me a new SESSION_ID for Tomcat instance 1 and answers
    correctly on the first call. But once again, if I repeat the refreshing process, the
    error is thrown again.
    Note: using a different resolution to always keep an eye on the
    instances' status, with a refresh rate of 1 second for the status page, both
    servers were always OK.
    So the main problem is that somehow, when the session is routed back
    to the same Tomcat, Apache gets confused and thinks it is not available,
    even though the jk-status page says it is OK.
    I've been trying different configurations with both Apache and Tomcat,
    but there must be something missing, since I can't get it to work correctly.
    Thanks in advance for all your helpful comments.
    - @alphazygma

    Whew... that was quite an answer... definitely going to help him a lot. Yeah, any n00b by now should know how to use Google, but that's not the point of these forums; here we are to help each other, and whether you like it or not, many of us deploy applications to Tomcat and stumble on this. So don't try to be cool posting that kind of answer, like "google this" or "google that"; if you don't have an answer, please don't comment - you will appear to be more noobish than you apparently are.
    Well enough talking.
    I found the following useful (it comes from the server.xml of the Tomcat configuration):
    <!-- You should set jvmRoute to support load-balancing via JK/JK2 ie :
    <Engine name="Standalone" defaultHost="localhost" debug="0" jvmRoute="jvm1">
    -->
    Enabling that entry on both machines should be enough.
    Apparently the problem is not with Apache; it is with Tomcat, since it can't retain the session Apache assigns.
    More information in the Tomcat help at:
    http://tomcat.apache.org/tomcat-5.0-doc/balancer-howto.html#Using%20Apache%202%20with%20mod_proxy%20and%20mod_rewrite
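    Following that advice, the jvmRoute value on each Tomcat must match the worker name used in workers.properties (tel1/tel2 above), since mod_jk routes by the route suffix appended to the session ID. A hedged sketch that extracts the expected names from a sample fragment (the file name is illustrative):

```shell
# Sketch: pull the worker names out of the balance_workers list; each
# name must appear as jvmRoute="<name>" in that Tomcat's server.xml.
cat > workers.sample.properties <<'EOF'
worker.list=balancer,jkstatus
worker.balancer.type=lb
worker.balancer.balance_workers=tel1, tel2
EOF
workers=$(sed -n 's/^worker\.balancer\.balance_workers=//p' workers.sample.properties \
  | tr -d ' ' | tr ',' '\n')
for w in $workers; do
  echo "expect jvmRoute=\"$w\" in that Tomcat's server.xml <Engine> element"
done
```

    With the routes in place, a session ID ends in ".tel1" or ".tel2", and mod_jk can send each refresh back to the correct instance instead of failing with a 503.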

  • CRM-BW replication problem.

    Hi friends,
    I have edited an existing standard CRM extractor, with all the required steps, in the CRM Dev system.
    Later I activated it in RSA6 in the CRM Dev system, pressed the Data Source Transport button, and saved it in the CRM Dev system.
    Later, in the BW Dev system, I created a new InfoSource relevant to this extractor, but it shows the old CRM extractor without any changes.
    Please let me know how to replicate all the new changes to the BW Dev system.
    I have also performed replication in BW using RSA1 and via a program, but with no luck.
    Am I missing any step after RSA6 in CRM?
    Please suggest.
    Thanks.

    Hi,
    Symptom:
    Inconsistency in the assignment of the InfoObject for territory extraction in One Order datasources.
    If the One Order InfoSources are activated, the datasource fields TERR_GUID and PATH_GUID are both mapped to the same InfoObject, 0CRM_TR. Therefore, activation of the InfoSource causes an error.
    Other terms
    Sales Analytics, Service Analytics, One order data Extraction, Territory extraction
    Reason and Prerequisites
    PATH_GUID field of the Datasource is not marked as field hidden in BW
    Solution
    Please note that this correction is available from the following CRM Releases
    CRM 4.0 SP08
    The correction is relevant for the following datasources
    0CRM_LEAD_H
    0CRM_LEAD_I
    0CRM_OPPT_ATTR
    0CRM_OPPT_H
    0CRM_OPPT_I
    0CRM_QUOTATION_I
    0CRM_QUOTA_ORDER_I
    0CRM_SALES_CONTR_I
    0CRM_SALES_ORDER_I
    0CRM_SRV_CONFIRM_H
    0CRM_SRV_CONFIRM_I
    0CRM_SRV_CONTRACT_H
    0CRM_SRV_PROCESS_H
    0CRM_SRV_PROCESS_I
    To have this functionality available, please re-activate the datasource in the OLTP (CRM) system and replicate the datasource in the BW systems.
    The correction is the following:
    The datasource has the fields Territory GUID and Path GUID. The Path GUID should be marked as "Field hidden in BW".
    In other words, in the datasource maintenance in transaction BWA1, on the extract structure tab, the field PATH_GUID should have the value 'A' for Selection.
    If you still have any problems, do let me know.
    regards,
    ravi

  • Tablespace Replication Problem - high disk I/O

    Hi.
    I'm doing some R&D on Oracle Streams. I have set up tablespace replication between two 10g R2 instances, and data seems to replicate between them. These two instances have no applications running off of them apart from OEM and queries I run using SQL Developer and SQL*Plus.
    The problem I'm seeing is that since setting up and switching on this replication config, disk I/O has been high. I'm using the Windows Performance Monitor to look at:
    - % Disk time = 100%
    - Avg Disk Writes/sec = 20
    - Avg Disk Reads/sec = 30
    - CPU % = 1
    - % Committed Mem = 40%
    To me this just looks/sounds wrong.
    This has been like this for about 24hrs.
    OEM ADDM report says "Investigate the cause for high "Streams capture: waiting for archive log" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation." I haven't found any reference to this anywhere.
    Anybody got any ideas on how to track this one down? Where in my db's can I look for more info?
    Platform details:
    (P4, 1GB RAM, IDE disk) x 2
    Windows Server 2003 x64 SP1
    Oracle 10.2.0.1 Enterprise x64
    Script used to setup replication:
    set echo on;
    connect streamadmin/xxxx;
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => '"STM_QT"',
        queue_name  => '"STM_Q"',
        queue_user  => '"STREAMADMIN"');
    END;
    /
    --connect streamadmin/xxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=oratest2.xxxx.co.uk)(PORT=1521)))(CONNECT_DATA=(SID=oratest2.xxxx.co.uk)(server=DEDICATED)));
    connect streamadmin/xxxx@oratest2;
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => '"STM_QT"',
        queue_name  => '"STM_Q"',
        queue_user  => '"STREAMADMIN"');
    END;
    /
    connect streamadmin/xxxx;
    create or replace directory "EMSTRMTBLESPCEDIR_0" AS 'D:\ORACLE\DATA\ORATEST1';
    DECLARE 
        t_names DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN   
    t_names(1) := '"ADFERO_TS"';
    DBMS_STREAMS_ADM.MAINTAIN_TABLESPACES(
       tablespace_names             => t_names,
       source_directory_object       => '"DPUMP_DIR"',
       destination_directory_object  => '"DPUMP_DIR"',
       destination_database          => 'ORATEST2.xxxx.CO.UK' ,
       setup_streams                 => true,
       script_name                    => 'Strm_100407_1172767271909.sql',
       script_directory_object        => '"DPUMP_DIR"',
       dump_file_name                 => 'Strm_100407_1172767271909.dmp',
       capture_name                  => '"STM_CAP"',
       propagation_name              => '"STM_PROP"',
       apply_name                    => '"STM_APLY"',
       source_queue_name             => '"STREAMADMIN"."STM_Q"',
       destination_queue_name        => '"STREAMADMIN"."STM_Q"',
       log_file                      => 'Strm_100407_1172767271909.log',
       bi_directional                => true);
    END;
    /

    OK, I don't know why this didn't work before, but here are the results.
    select segment_name, bytes from dba_segments where owner='SYSTEM' and segment_name like 'LOGMNR%' ORDER BY bytes desc
    SEGMENT_NAME                                                                      BYTES                 
    LOGMNR_RESTART_CKPT$                                                              14680064              
    LOGMNR_OBJ$                                                                       5242880               
    LOGMNR_COL$                                                                       4194304               
    LOGMNR_I2COL$                                                                     3145728               
    LOGMNR_I1OBJ$                                                                     2097152               
    LOGMNR_I1COL$                                                                     2097152               
    LOGMNR_RESTART_CKPT$_PK                                                           2097152               
    LOGMNR_ATTRIBUTE$                                                                 655360                
    LOGMNRC_GTCS                                                                      262144                
    LOGMNR_I1CCOL$                                                                    262144                
    LOGMNR_CCOL$                                                                      262144                
    LOGMNR_CDEF$                                                                      262144  
    LOGMNR_USER$                                                                      65536  
    160 rows selected
    select segment_name, bytes from dba_extents where segment_name=upper( 'logmnr_restart_ckpt$' );
    SEGMENT_NAME                                                                      BYTES                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    29 rows selected
    Message was edited by:
    JA
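    For the ADDM finding about "Streams capture: waiting for archive log", a hedged first check is to ask the capture process what it is doing. A sketch, run as the Streams administrator (the script file name is illustrative):

```shell
# Hypothetical diagnostic for a capture process stuck waiting on redo.
# Write the query to a script, then run it via, e.g.:
#   sqlplus -s streamadmin/xxxx @capture_state.sql
cat > capture_state.sql <<'SQL'
SELECT capture_name, state, total_messages_captured
FROM   v$streams_capture;
-- If STATE shows the capture is waiting for redo, confirm archiving is
-- working and force a log switch so capture can continue:
-- ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL
echo "wrote capture_state.sql"
```

    If the state confirms the capture is waiting on archived redo, the constant disk I/O may be LogMiner checkpointing; the LOGMNR_RESTART_CKPT$ growth listed above would be consistent with that.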

  • CRM 5.0 & IS-U EHP4 Replication Problem

    Hi all,
    in the context of an upgrade of our IS-U (ECC 6.0) to EhP4, we experienced an issue during the replication of contracts (object SI_CONTRACT) from IS-U into CRM (CRM 5.0). As we didn't find any trace of a similar issue anywhere (Google, SAP Notes), I'll briefly describe the issue, as well as our solution, below.
    After the upgrade, the contract BDocs sent from IS-U to CRM (e.g. after performing a move-out for a BP) would appear to be processed successfully (green icon in SMW01). However, when analysing the BDocs in detail, we noticed that they contained no data. Strangely enough, request load for contracts from CRM still worked seamlessly. After some debugging we identified that the issue was that the BDocs (more precisely, the BAPIMTCS structure) sent from IS-U contained structure names that were not expected by the mapping module in CRM. The underlying reason was that in table TBE31 the entry for the event IBSSICON had been changed from EECRM_CONTRACT_COLLECT_DATA to ECRM_CONTRACT_COLLECT_DATA.
    This table is read in the function module EECRM_DELTA_DOWNLOAD_IBSSICONT. The entry for the event IBSSICON determines which function modules are used to collect the contract data in IS-U, and also which function modules are used to perform the mapping to the BAPIMTCS structures.
    Changing the entry back to its initial contents solved our problem. After the change, the BDocs were filled and processed correctly. This fix seems to be necessary for all CRM versions < 5.2.
    Christian
    Edited by: Christian Drumm on Sep 29, 2010 9:00 AM
    Included information on CRM 5.2

    Hi Gobi,
    Thank you for advice. But:
    I've created the fields not by using the AET - I used the documentation that Nicolas suggested above.
    I've enhanced the tables and structures mentioned there with my z-fields on both sides (CRM and ERP).
    I also checked CRMC_BUT_CALL_FU for CRM Inbound BUAG_MAIN - the corresponding standard FMs are marked for call.
    Any ideas?
    Thanks in advance.
    BR,
    Evgenia

  • 6140 Replication Problem in Sun Cluster

    Hi,
    I am not able to mount a replicated volume from a cluster system (primary site) on a non-cluster system (DR site). Replication was done by the 6140 storage array. At the primary site the volume was configured in a metaset under Solaris Cluster 3.2; at the DR site it was mapped to a non-cluster system after suspending the replication.
    I also tried to mount the volume at the DR site (non-cluster system) by creating a metaset, putting the volume under it and mounting it from there, but that did not work either.
    Here is the error output:
    drserver # mount -F ufs /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 /mnt/
    mount: /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 is not this fstype
    drserver #
    drserver #
    drserver #
    drserver #
    drserver # fstyp -v /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0
    Unknown_fstyp (no matches)
    drserver #
    I would be grateful for any workaround. Please note that replication from the non-cluster system works fine; it is only from the cluster system that it fails with the errors above.

    I am not sure how you can run Solaris 10 Update 8, since to my knowledge it has not been released.
    What is available is Solaris 10 05/09, which is Update 7.
    You are not describing the exact problem you have (such as specific error messages) or what exactly you did to end up in this situation.
    I would recommend to open a support case to get a more structured analysis of your problem.
    Regards
    Thorsten
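    The "Unknown_fstyp" output above typically means no UFS superblock was found at the start of slice 0 - which is what you would see if the filesystem on the primary actually lives inside a Solaris Volume Manager metadevice (the volume was in a metaset) rather than directly on the slice. A rough diagnostic sketch; the device name is taken from the output above, while the set name "drset" and metadevice "d0" are made up, and metaimport availability depends on the Solaris 10 release:

```shell
DEV=c3t600A0B80004832A600002D554B74AC56d0

# Inspect the disk label: the replicated LUN keeps the primary's
# slice layout, so the data may not start at slice 0.
prtvtoc /dev/rdsk/${DEV}s2

# Probe every slice for a recognizable filesystem, not just s0.
for s in 0 1 3 4 5 6 7; do
    echo "slice $s:"
    fstyp -v /dev/rdsk/${DEV}s$s
done

# If the volume was in an SVM metaset on the primary, import the
# disk set on the DR host and mount the metadevice instead of the
# raw slice:
metaimport -r -v              # report disk sets available for import
metaimport -s drset $DEV      # import the set from this disk
mount -F ufs /dev/md/drset/dsk/d0 /mnt
```

    If fstyp does report ufs on one of the other slices, mounting that slice directly may be all that is needed.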

  • BP Replication Problem v2  (CRM 5 - ECC 6)

    Hi,
    I’m trying to replicate BPs from CRM 5 to ECC 6, but without success. The BPs are replicated, but no customer is created in the ECC system.
    I have followed all the steps in building block C03 (CRM Master and Transaction Data Replication) using the separate account group approach. I defined an account group Z001 for customers that are created in the CRM system and have to be replicated to the ECC system (external number assignment in the receiving system). I made sure that the number ranges are in sync in both systems, and in particular that PIDE maps classification Customer 'B' and Consumer 'E' to 'Z001'.
    In SMW01, the BDoc is green.
    Please help me resolve this problem; it is quite urgent.
    Thanks,
    Jose Alvarez
    [email protected]
    Santiago, Chile

    Laercio,
    Your business partner grouping in CRM is set up incorrectly. The ZLEV that corresponds to the R/3 CLEV should actually be set up as an external number range in CRM.
    The way this works is the following:
    If the number originates in R/3 then CRM must use an external number range
    If the number originates in CRM then R/3 must use an external number range.
    So your CLEV account group should map to a CLEV partner grouping in CRM and the ZLEV must map to ZLEV grouping in R/3.
    The CLEV in R/3 will be internal numbering
    The ZLEV in R/3 will be external numbering
    The CLEV in CRM will be external numbering
    The ZLEV in CRM will be internal numbering.
    I believe if you do this, then it should allow you to keep the numbers the same for those two account groups.
    For issue two: you need to make sure that all your sales areas are set up and the tuples are defined per sales area. Once the tuples have been defined and the attributes set up on the organizational units, you can do another initial load and the sales areas should come over again.
    Good luck,
    Stephen
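    The pairing rule Stephen describes can be stated compactly: the system in which a grouping's numbers originate uses internal numbering, and the peer system must use a matching external number range. A minimal illustrative sketch (plain Python, not SAP code; the grouping names are the ones from this thread):

```python
# The system where a grouping's numbers ORIGINATE uses internal
# numbering; the peer system must use an external number range.

def numbering_for(origin: str) -> dict:
    """Return the numbering type each system must use for a grouping
    whose numbers originate in `origin` ('R/3' or 'CRM')."""
    peer = 'CRM' if origin == 'R/3' else 'R/3'
    return {origin: 'internal', peer: 'external'}

# CLEV numbers originate in R/3; ZLEV numbers originate in CRM:
print('CLEV:', numbering_for('R/3'))  # R/3 internal, CRM external
print('ZLEV:', numbering_for('CRM'))  # CRM internal, R/3 external
```

    Applied to the thread: CLEV is internal in R/3 and external in CRM, ZLEV is internal in CRM and external in R/3, which lets the numbers stay identical in both systems.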

  • SQLCE Agent Replication Problem, maybe related to SQL Server not on default port

    I've got a problem getting SQL Server CE replication set up on a new server. SQL Server is 2008 R2, but we are running on a non-standard port (not 1433), and I'm not sure where I'd tell the agent that, or whether I need to at all. I've turned on full diagnostics and set the level to 3; using the diag option on the agent, I can see that it is set.
    In the log file I get this error when I try to sync: Hr=80004005 ERR:OpenDB failed getting pub version 28627 (the rest of the log is below), and the client gets: 0x80004005
     Message   : Failure to connect to SQL Server with provided connection information. SQL Server does not exist, access is denied because the IIS user is not a valid user on the SQL Server, or the password is incorrect.
     Minor Err.: 29061
    I am pretty certain that the user name and password are correct, and I can connect as that user in SQL Server Management Studio. I don't see anything in the SQL Server log for a failed connection, although I do see such entries if, for example, I log in through Management Studio without entering the password, so I believe logging is set up. The agent is on the same machine as the database server, so I don't believe it is a firewall or network error, but this is a new machine and setup, so I may be missing something.
    I am not sure what else to look at to understand what is going on.
    Agent Log (Partial):
    2014/09/30 19:36:51 Hr=00000000 Compression Level set to  1
    2014/09/30 19:36:51 Hr=00000000 Count of active RSCBs =  0
    2014/09/30 19:36:51 Thread=EC8 RSCB=2 Command=OPWC Hr=00000000 Total Compressed bytes in =  203
    2014/09/30 19:36:51 Thread=EC8 RSCB=2 Command=OPWC Hr=00000000 Total Uncompressed bytes in =  385
    2014/09/30 19:36:51 Thread=EC8 RSCB=2 Command=OPWC Hr=00000000 Responding to OpenWrite, total bytes =  203
    2014/09/30 19:36:51 Thread=EC8 RSCB=2 Command=OPWC Hr=00000000 C:\inetpub\wwwroot\MobileRepService\35.71ACEC98F130_15D653BB-58BD-440A-BE57-C94E24CDCB59 0
    2014/09/30 19:36:51 Thread=137C RSCB=2 Command=SYNC Hr=00000000 Synchronize prepped 0
    2014/09/30 19:37:08 Hr=80004005 ERR:OpenDB failed getting pub version 28627
    2014/09/30 19:37:09 Thread=137C RSCB=2 Command=SCHK Hr=80004005 SyncCheck responding 0
    2014/09/30 19:37:09 Thread=137C RSCB=2 Command=SCHK Hr=00000000 Removing this RSCB 0
    <STATS Period_Start="2014/09/30 19:32:50" Period_Duration="904" Syncs="2" SubmitSQLs="0" RDAPushes="0" RDAPulls="0" AVG_IN_File_Size="385" AVG_OUT_File_Size="0" Completed_Operations="0"
    Incomplete_Operations="2" Total_Sync_Thread_Time="33" Total_Transfer_Thread_Time_IN="0" Total_Transfer_Thread_Time_OUT="0" Total_Sync_Queue_Time="0" Total_Transfer_Queue_Time_IN="0" Total_Transfer_Queue_Time_OUT="0"
    />

    Thanks - that got me past that issue. I was passing the wrong database as the Publisher due to a configuration error (we are bringing up a new publication server and missed changing one of the parameters in a configuration file). I've now got another error, but if I can't determine what is wrong with that, I'll post a separate question.
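    On the non-default-port question above: SQL Server client connection strings generally accept a "host,port" pair in place of a bare host name (the ",port" part overrides the default 1433), so the agent's publisher/distributor connection settings are the place to supply it. A minimal sketch of the syntax; the driver name matches the 2008 R2 era Native Client, and the server, database, and credential values are made up for illustration:

```python
# Sketch: pointing a SQL Server client at a non-default port.
# The "Server=tcp:host,port" form forces TCP/IP and overrides 1433.

def build_conn_str(host: str, port: int, database: str,
                   user: str, password: str) -> str:
    return (
        "Driver={SQL Server Native Client 10.0};"
        f"Server=tcp:{host},{port};"
        f"Database={database};Uid={user};Pwd={password};"
    )

conn_str = build_conn_str("dbserver01", 14330, "PubDB", "repl_user", "secret")
print(conn_str)

# To actually connect (needs the third-party pyodbc package and a
# reachable server):
# import pyodbc
# conn = pyodbc.connect(conn_str)
```

    The same "host,port" form also works in SQL Server Management Studio's server-name box, which is a quick way to confirm the port is reachable before blaming the agent.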

  • HR link replication problems (ALE)

    Dear experts:
    We are facing a problem with HR to CRM Employee replication.
    The replication of some links is not working.
    We detected that link replication (HRP1001) CP-OS is not working for those employees that are replicated in CRM with an alias (relation 207, "Is identical to") whose number differs from the HR personnel number.
    I give some examples to illustrate the problem:
    One employee replication which is working well shows in HRP1001 this:
    OTYPE OBJID      PLVAR RSIGN RELAT ISTAT   .....  SCLAS SOBID
    CP    00000477   01    B     207   1         ...   BP    0000000477
    Note that SOBID number and OBJID are the same
    One employee replication which is not working well shows in HRP1001 this:
    OTYPE OBJID      PLVAR RSIGN RELAT ISTAT   .....  SCLAS SOBID
    CP    50000112   01    B     207   1              BP    0000001524
    Note that SOBID number and OBJID are not the same.
    Since the OBJID number 50000112 looks like an internally generated number, I looked at the number ranges and realized that, for some reason, the system is assigning internal numbers to some employees, taken from number object RP_PLAN, subobject 01$$.
    The replication of links for employees whose alias has an internal number is not working; the others work fine.
    Has anyone some idea of how to face this problem?
    Thank you in advance.
    Jordi

