Replication to UME

We are using the UME for external portal users and LDAP for internal portal users. We would like to update the email address in the UME from the vendor master in our ERP system. Any ideas on how this can best be accomplished?
Thanks,
Kevin

Hi,
     For this, you can use the import functionality of the UME, which lets you import user data from an external system. To learn more about this function, please refer to the following link as well:
http://help.sap.com/saphelp_nw2004s/helpdata/en/52/96f03eae11e16be10000000a114084/frameset.htm
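If the address needs to be updated programmatically instead (for example from a job that reads the vendor master), a minimal sketch using the com.sap.security.api UME API could look like the following. This is only an illustration meant to run inside the J2EE engine; the logon ID and email values are placeholders:

import com.sap.security.api.IUser;
import com.sap.security.api.IUserFactory;
import com.sap.security.api.IUserMaint;
import com.sap.security.api.UMException;
import com.sap.security.api.UMFactory;

public class VendorEmailUpdater {

    // Update the email address of an existing UME user (sketch only).
    public void updateEmail(String logonId, String newEmail) throws UMException {
        IUserFactory userFactory = UMFactory.getUserFactory();
        IUser user = userFactory.getUserByLogonID(logonId);              // resolve the user
        IUserMaint mutableUser = userFactory.getMutableUser(user.getUniqueID());
        mutableUser.setEmail(newEmail);                                  // change the attribute
        mutableUser.save();                                              // validate the change
        mutableUser.commit();                                            // write it to the data source
    }
}

How the new address gets from the ERP vendor master to this code (RFC, file export, etc.) is a separate design decision and not shown here.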
Thanks
R.Murali

Similar Messages

  • UME to CUA (ABAP) user data replication for custom attribute.

    Hi All,
    We have planned that users will be created in the portal and from there the user data will flow to CUA (ABAP), and from CUA on to R/3, BW, CRM, etc.
    I have configured the UME (portal) so that whenever I create a user in the portal it flows to CUA (ABAP). When I assign a system to the user in CUA (the system name of the ABAP system to which CUA should distribute the user data), the user data flows to the respective system, i.e. the user gets created in that system (R/3 or BW, depending on the assigned system name).
    UME ---> CUA ---> R/3, BW, CRM, etc.
    Now I want to automate the process: I want to assign the system name to the user in the UME itself (not in CUA). I have created a custom attribute 'system' in CUA.
    The problem now is how to map the UME custom attribute 'system' to the CUA (ABAP) user attribute 'system'. Also, please let me know which XML file (data source configuration) I should modify.
    Regards,
    Gyan
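    For reference, reading and writing such a custom attribute from Java could look roughly like the sketch below. The namespace and the 'system' attribute name are assumptions for illustration; the mapping of this attribute to the ABAP data source still has to be maintained separately in the data source configuration XML.
    import com.sap.security.api.IUserFactory;
    import com.sap.security.api.IUserMaint;
    import com.sap.security.api.UMException;
    import com.sap.security.api.UMFactory;

    public class SystemAttributeHelper {

        private static final String NAMESPACE = "com.mycompany.cua";   // hypothetical namespace
        private static final String ATTRIBUTE = "system";              // custom attribute name

        // Store the target system name on the UME user so a later step can forward it to CUA.
        public void setSystem(String uniqueId, String systemName) throws UMException {
            IUserFactory userFactory = UMFactory.getUserFactory();
            IUserMaint user = userFactory.getMutableUser(uniqueId);
            user.setAttribute(NAMESPACE, ATTRIBUTE, new String[] { systemName });
            user.save();
            user.commit();
        }

        // Read the value back.
        public String getSystem(String uniqueId) throws UMException {
            String[] values = UMFactory.getUserFactory().getUser(uniqueId)
                                       .getAttribute(NAMESPACE, ATTRIBUTE);
            return (values != null && values.length > 0) ? values[0] : null;
        }
    }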

    Hi Gyan,
    We have installed NW '04 SP14 with both ABAP and Java stacks on this system. We are using datasourceConfiguration_abap.xml as our J2EE
    UME setting. We have found that when we create a user in client 000 from CUA, that user is then created in the UME. We have three clients in this development system. When we create users from CUA in the other clients of this ABAP system, the users do not come into the Java UME. We do not want to create the users in client 000. What is your Java UME setting, and how can we get the users in the other clients pushed to the UME?
    Regards,
    Anthony

  • Replication of Users from portal to R/3

    Hi,
    I would like to develop a portal functionality that allows replication of users from the portal to R/3. I know that this functionality already exists in the portal, but we need to build another one because of special requirements. How can I find the Java code of the existing functionality to understand how SAP built it?
    Thanks

    Hi,
         You can check the following link; it will give you an idea of the UME architecture:
    http://help.sap.com/saphelp_nw04/helpdata/en/5b/5d2706ebc04e4d98036f2e1dcfd47d/frameset.htm
    See if it helps you.
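    As a starting point, the read side of such a custom replication can be done with the public UME API. Below is a minimal sketch that reads a user's master data; the actual call into R/3 (e.g. via JCo/RFC) depends on your target function module and is only indicated as a comment:
    import com.sap.security.api.IUser;
    import com.sap.security.api.UMException;
    import com.sap.security.api.UMFactory;

    public class UserReplicationSketch {

        // Read the portal user's data that a custom replication job would push to R/3.
        public void replicateUser(String logonId) throws UMException {
            IUser user = UMFactory.getUserFactory().getUserByLogonID(logonId);
            String uniqueId = user.getUniqueID();
            String lastName = user.getLastName();
            String email    = user.getEmail();
            // A custom implementation would now hand these values to the back end,
            // e.g. through a JCo/RFC call that creates or updates the R/3 user.
            System.out.println("Would replicate " + uniqueId + " (" + lastName + ", " + email + ")");
        }
    }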

  • Automatic upload of roles from ECC to portal (UME with LDAP)

    Hi experts,
    This thread reopens the question asked in the following message: automatic upload of roles from BI to portal
    However, this time it concerns "UME with LDAP".
    Problem:
    The SAP Library 04s tells us that it is not yet possible to automate role replication (or role assignment replication) from an ABAP-based back end to the NetWeaver Portal. Only a manual process for the initial upload is possible.
    Source = http://help.sap.com/saphelp_nw04s/helpdata/en/41/5e4d40ecf00272e10000000a155106/frameset.htm
    Questions:
    1 - Did anyone ever try to implement such an automatic tool?
    2 - What if I'm not able to write to the Active Directory? Am I still able, at least, to automate role assignment replication from the ABAP-based back end to the NetWeaver Portal (i.e. UME with LDAP), directly from SAP R/3 to EP through the UME, without going through Active Directory, since the group field is not maintained in AD?
    Many thanks for your input.
    Alexis MARTIN

    Hello,
    As I did not read the previous thread I don't know exactly what you are trying to achieve, but I can tell you about what we have done - provided it is not too late yet.
    We use the portal with integration to a BI system. In the ABAP stack we have lots of roles with menu items for hundreds of reports. We want the users to see these roles in the portal.
    First we used the role migration tool of the portal to upload these roles. There is a Java API for executing role uploads from code. You need to create a web service in the Java stack to call this API, and you can call the web service from ABAP.
    However, it is just a question of time and role size until this stops working altogether. Standard role migration is more or less crap; stability is a problem. It also creates a lot of logs in the PCD and thus fills the database with trash. (After a few OSS messages there is now a program for deleting the logs, and you can turn off logging.) Also, the upload of larger roles takes up to an hour, and you always have the problem that your portal roles are not up to date during the day.
    When I got completely fed up, I implemented my own navigation connector. When you log on to the portal it connects to the ABAP stack via RFC, loads the role, and generates the portal menu from it. It uses caching, but on every logon it checks whether the role has been updated in ABAP since the last time it was loaded. It is up to date, faster than PCD navigation, and you need absolutely no periodic synching at all. I can't even understand why this is not offered by SAP as standard!
    The drawback is that it will of course only work for the menu items, and only menu items of "URL type" are supported. I'm pretty sure, however, that it would be possible to implement a few other types as well.
    Let me know if you are interested in the solution, I can give you a few additional details: oliverDOTsvisztATwienerbergerDOTcom
    Oliver

  • Would NWDI work after the UME change?

    Hi All,
    Our Dev Portal UME is going to be changed from the local (DB2 database) UME to the ABAP UME.
    The CMS admin user is: KABA.
    Would the UME change affect SAP NWDI?
    Would we need to lock KABA in the local UME and create it in the ABAP UME?
    What would happen if we keep KABA in the local UME?
    Please let me know.
    I was thinking of DTR replication, but we have broken DCs in the Consolidation environment and the
    assembled code has errors.
    Should we go for DTR replication to the Test Portal before changing the UME in the Dev Portal?
    Our Dev and Test Portals are on SAP NetWeaver 7.0 EHP1 SP6.
    Thanks & Regards
    Kaushik Banerjee

    Hi,
    This is my understanding of how the UME works with NWDI; if anything is wrong, do correct me.
    In your case it's an LDAP UME for NWDI, and the CMSADMIN user KABA exists in LDAP.
    Now when you change the UME from LDAP to ABAP, here are the things you need to keep in mind:
    1. While configuring NWDI you define the CMS admin user and password at installation time (this needs to be changed).
    2. The CMS admin user is used at the time of import or export of activities in CMS.
    3. When you log in to the NWDI CMS Web UI you log in with your own ID (which has the NWDI.Developer or NWDI.Administrator role assigned).
    4. But when you do an import, it internally uses the CMS admin user to do the import.
    So when you change the UME you need to create the CMS admin user in the ABAP UME and change the password in the configuration.
    Regarding replication, my suggestion is to resolve all the issues before changing the UME.
    Resolve the following first and only then make the change to the ABAP UME:
    a. Fix the broken DCs.
    b. Then do the import; once you have everything consistent, make the UME change.
    Check the links below to resolve the broken DCs:
    http://help.sap.com/saphelp_nw70/helpdata/en/46/14fa07c15214dce10000000a155369/frameset.htm
    https://wiki.sdn.sap.com/wiki/display/TechTSG/%28NWDI%29Home
    Hope this helps.
    Cheers-
    Pramod

  • Error while creating MV replication group object

    Hi,
    I am getting an error while creating a materialized view replication group object. I tried to create it using both OEM and SQL*Plus.
    OEM error (while creating the MV replication group object):
    There is a table or view named SCOTT.EMP.
    It must be dropped before a materialized view can be created.
    In SQL*Plus:
    SQL> CONNECT MVIEWADMIN/MVIEWADMIN@SWEET
    Connected.
    SQL>
    SQL> BEGIN
    2 DBMS_REPCAT.CREATE_MVIEW_REPOBJECT (
    3 gname => 'SCOTT',
    4 sname => 'KARTHIK',
    5 oname => 'emp_mv',
    6 type => 'SNAPSHOT',
    7 min_communication => TRUE);
    8 END;
    9 /
    BEGIN
    ERROR at line 1:
    ORA-23306: schema KARTHIK does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 2840
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 773
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 5570
    ORA-06512: at "SYS.DBMS_REPCAT_SNA", line 82
    ORA-06512: at "SYS.DBMS_REPCAT", line 1332
    ORA-06512: at line 2
    Please note that I have already created the KARTHIK schema.

    Arthik,
    I think I know what may have happened.
    As I can see, you are trying to create support for an updatable materialized view.
    You have to make sure that the name of the schema that owns the materialized view is the same as the schema that owns the master table (at the master site).
    From the code you have shown, I bet the owner of table EMP is SCOTT.
    On the other hand, you want to create the materialized view EMP_MV under schema KARTHIK, referring to table SCOTT.EMP at the master site.
    According to the documentation, the schema name used in DBMS_REPCAT.CREATE_MVIEW_REPOBJECT must be the same as the schema that owns the master table.
    Please check the documentation at the link below:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14227/rarrcatpac.htm#i109228
    I tried to reproduce your example in my environment and got exactly the same error, which confirms my assumption: the error occurs because you tried to create the materialized view in a schema with a different name than the one that owns the master table.
    I'll skip some of the steps that I used to create the replication environment.
    I have two databases, DB1.world and DB2.world.
    On DB2.world I will generate replication support for table EMP, which belongs to user SCOTT.
    SQL> conn scott/*****@DB2.world
    Connected.
    SQL>create materialized view log on EMP with primary key;
    Materialized view log created.
    SQL>
    SQL>conn repadmin/*****@DB2.world
    Connected.
    SQL>BEGIN
      2       DBMS_REPCAT.CREATE_MASTER_REPGROUP(
      3         gname => 'GROUPA',
      4         qualifier => '',
      5         group_comment => '');
      6*   END;
    PL/SQL procedure successfully completed.
    SQL>BEGIN
      2       DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
      3         gname => 'GROUPA',
      4         type => 'TABLE',
      5         oname => 'EMP',
      6         sname => 'SCOTT',
      7         copy_rows => TRUE,
      8         use_existing_object => TRUE);
      9*   END;
    10  /
    PL/SQL procedure successfully completed.
    SQL> BEGIN
      2       DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
      3         sname => 'SCOTT',
      4         oname => 'EMP',
      5         type => 'TABLE',
      6         min_communication => TRUE);
      7    END;
      8  /
    PL/SQL procedure successfully completed.
    SQL>execute DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'GROUPA');
    PL/SQL procedure successfully completed.
    SQL> select status from dba_repgroup;
    STATUS
    NORMAL
    Now let's create an updateable materialized view at DB1. Before that, I want to let you know that I created a sample user named MYUSER in DB1. MVIEWADMIN is the materialized view administrator.
    SQL>conn mviewadmin/****@DB1.world
    Connected.
    SQL>   BEGIN
      2       DBMS_REFRESH.MAKE(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => '',
      5         next_date => SYSDATE,
      6         interval => '/*1:Hr*/ sysdate + 1/24',
      7         push_deferred_rpc => TRUE,
      8         refresh_after_errors => TRUE,
      9         parallelism => 1);
    10    END;
    11  /
    PL/SQL procedure successfully completed.
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_SNAPSHOT_REPGROUP(
      3         gname => 'GROUPA',
      4         master => 'DB2.world',
      5         propagation_mode => 'ASYNCHRONOUS');
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    SQL>conn myuser/*****@DB1.world
    Connected.
    SQL>CREATE MATERIALIZED VIEW MYUSER.EMP_MV
      2    REFRESH FAST
      3    FOR UPDATE
      4    AS SELECT EMPNO, ENAME, JOB, MGR, SAL, COMM, DEPTNO, HIREDATE
      5*      FROM   [email protected];
    Materialized view created.
    SQL>conn mviewadmin/******@DB1.world
    Connected.
    SQL> BEGIN
      2       DBMS_REFRESH.ADD(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => 'MYUSER.EMP_MV',
      5         lax => TRUE);
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    And now let's run CREATE_MVIEW_REPOBJECT.
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_MVIEW_REPOBJECT(
      3         gname => 'GROUPA',
      4         sname => 'MYUSER',
      5         oname => 'EMP_MV',
      6         type => 'SNAPSHOT',
      7         min_communication => TRUE);
      8    END;
      9  /
      BEGIN
    ERROR at line 1:
    ORA-23306: schema MYUSER does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 2840
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 773
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 5570
    ORA-06512: at "SYS.DBMS_REPCAT_SNA", line 82
    ORA-06512: at "SYS.DBMS_REPCAT", line 1332
    ORA-06512: at line 3
    I reproduced exactly the same error message.
    So the problem is clearly in the schema name that owns the materialized view.
    Now let's see what happens if I create the MV under schema SCOTT, which has the same name as the schema on DB2.world where the master table exists.
    SQL>conn scott/****@DB1.world
    Connected.
    SQL>CREATE MATERIALIZED VIEW SCOTT.EMP_MV
      2    REFRESH FAST
      3    FOR UPDATE
      4    AS SELECT EMPNO, ENAME, JOB, MGR, SAL, COMM, DEPTNO, HIREDATE
      5*      FROM   [email protected];
    Materialized view created.
    SQL>conn mviewadmin/******@DB1.world
    Connected.
    SQL> BEGIN
      2       DBMS_REFRESH.ADD(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => 'SCOTT.EMP_MV',
      5         lax => TRUE);
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    And now let's run CREATE_MVIEW_REPOBJECT.
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_MVIEW_REPOBJECT(
      3         gname => 'GROUPA',
      4         sname => 'SCOTT',
      5         oname => 'EMP_MV',
      6         type => 'SNAPSHOT',
      7         min_communication => TRUE);
      8    END;
    PL/SQL procedure successfully completed.
    As you can see, everything works fine when the schema that owns the MV at DB1.world has the same name as the schema that owns the master table at DB2.world.
    -- Mihajlo

  • Exception while Running a Java Program for UME.

    Hi Experts,
    I am trying to run a Java program to access the UME. It is a sample to create a user, but all of my samples throw the exception below in the NWDS console. Can anyone tell me why it is throwing this exception?
    com.sap.security.api.UMRuntimeException: UME factory 'com.sap.security.api.IUserFactory' cannot be accessed because UME initialization has not started yet.
    Please check
            UMFactory.isInitialized() before using UME functionality.
        at com.sap.security.api.UMFactory.checkInitialized(UMFactory.java:1019)
        at com.sap.security.api.UMFactory.getUserFactory(UMFactory.java:801)
        at Search.main(Search.java:25)
    Exception in thread "main"
    Thanks in advance
    Somil

    Hi,
    Earlier I faced the same exception when I tried to call the UME from a standalone Java application. To resolve this I used the stateless session bean approach, following the steps below (a minimal sketch of such a bean method is shown after the tutorial link):
    1. Create a J2EE EJB module project.
    2. Create a stateless session bean and define the required methods; implement the local and remote interface methods in your bean class.
    3. Reference the UME APIs in the EJB project's build path.
    4. Call the UME in the bean class methods, then build the EJB project and the archive.
    5. Create an Enterprise Application project and include the EJB module project in it by right-clicking on the EA project; build the EA project and the EAR file.
    6. Deploy the EAR file on your server.
    7. From your standalone Java application, include these two projects in the Project tab of your application.
    8. Look up the stateless session bean in your Java class by its default JNDI name, obtain the remote or local home interface, and execute the bean method that deals with the UME.
    You can also refer to this tutorial; in some parts of it UME APIs (like IUser and UMFactory) are used in servlets and EJBs. You can download the tutorial source from SDN for your reference.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/297f35cf-0201-0010-00b2-fe2f3e23d360
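    A minimal sketch of such a stateless session bean business method could look like the one below (the bean class skeleton, interface wiring, and the logon ID are assumptions; only the UME calls matter here):
    import com.sap.security.api.IUserFactory;
    import com.sap.security.api.IUserMaint;
    import com.sap.security.api.UMException;
    import com.sap.security.api.UMFactory;

    public class UserCreationBean /* implements javax.ejb.SessionBean in the real project */ {

        // Business method exposed through the bean's local/remote interface.
        public String createUser(String logonId, String email) throws UMException {
            // Guard against the UMRuntimeException from the original post: the UME is only
            // initialized inside the running J2EE engine, never in a standalone VM.
            if (!UMFactory.isInitialized()) {
                throw new IllegalStateException("UME not initialized - deploy and run this inside the engine");
            }
            IUserFactory userFactory = UMFactory.getUserFactory();
            IUserMaint newUser = userFactory.newUser(logonId);   // assumes the logon ID is still free
            newUser.setEmail(email);
            newUser.save();
            newUser.commit();
            return newUser.getUniqueID();
        }
    }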
    Siddharth

  • Replication of a BP in CRM as a FI Vendor in ECC for Grants Management

    Hi,
    We are implementing SAP CRM 7 with SAP ECC for Grants Management, integrated with FI-AP (we're not using PSCD).
    For BP replication we followed the steps below; however, something seems to be incorrect, because my BDoc still shows errors.
    The middleware settings have been completed between the CRM and the ECC system.
    - Site, subscription, and replication from CRM to SAP ECC are in place.
       - The following replication objects are activated:
        -All Business Partners (MESG)   (BUPA_MAIN)
        -All Busines Partner Relationships (MESG) (BUPA_REL)
        -All Business Transactions (MESG)
        -Grantor Program Management
    We also implemented the following steps:
    1) Define the number ranges for BP groupings in CRM. This number range is internal in CRM and external in ECC.
    CRM (IMG) -> Customer Relationship Management -> Cross-Application Components -> SAP Business Partner -> Basic Settings ->
    Number Ranges and Groupings
    2) Since the BP will be replicated as a BP in ECC, we define the same number ranges in ECC too:
    ERP (IMG) -> Customer Relationship Management -> Cross-Application Components -> SAP Business Partner -> Basic Settings ->
    Define Groupings and Assign Number Ranges
    3) Activate the post-processing framework: (Business processes CVI_02 and CVI_04 in Component AP-MD)
    ERP (IMG) -> Cross-Application Components -> General Application Functions ->Postprocessing Office -> Business Processes->
    Activate Creation of Postprocessing Orders
    4) Activate PPO Requests for Platform Objects in the Dialog:
    ERP (IMG) -> Cross-Application Components -> Master Data Synchronization -> Synchronization Control -> Synchronization
    Control -> Activate PPO Requests for Platform Objects in the Dialog

    For CRM I had the following function modules (FMs) activated:
    BPOUT     BUPA     100000     CRM_BUPA_OUTB_RENTED_ADDRESS     X
    BPOUT     BUPA     200000     BUPA_MWX_BDOC_CREATE_MAIN     X
    BPOUT     BUPA     300000     CRM_BUPA_OUTB_MARKETING_ATTR     X
    BPOUT     BUPA     400000     VEND_MWX_CREATE_MAIN_BDOC     X
    BPOUT     BUPA     1000000     BUPA_OUTBOUND_MAIN     X
    BPOUT     BUPR     100000     BUPA_MWX_BDOC_CREATE_REL     X
    BPOUT     BUPX     1000000     MDS_BUPA_OUTBOUND     X
    CLEAR     BUPA     1000000     BUPA_OUTBOUND_CLEAR_FLAGS     X
    CRMIN     BUAG     100000     CRM_BUAG_MWX_PROCESS_EXT_STRUC     X
    CRMIN     BUPA     90100     CRM_BUPA_INBOUND_SET_BUAG_FLAG     X
    CRMIN     BUPA     1000000     BUPA_INBOUND_MAIN_CENTRAL     X
    CRMIN     BUPA     1100000     CRM_BUPA_INBOUND_MAIN_MD     X
    CRMIN     BUPA     1200000     CRM_BUPA_BDOC_MAP_MAIN     X
    CRMIN     BUPA     1400000     CRM_BUPA_KOREA_INBOUND_MAP     X
    CRMIN     BUPA     2000000     ABA_FSBP_INBOUND_MAIN     X
    CRMIN     BUPR     1000000     BUPA_INBOUND_REL_CENTRAL     X
    CRMIN     BUPR     1100000     CRM_BUPA_INBOUND_REL_MD     X
    CRMIN     BUPR     1200000     CRM_BUPA_BDOC_MAP_REL     X
    CRMOU     BUAG     100000     CRM_BUAG_MWX_FILL_EXT_FROM_MEM     X
    CRMOU     BUPA     1000000     BUPA_OUTBOUND_BPS_FILL_CENTRAL     X
    CRMOU     BUPA     1200000     CRM_BUPA_OUTB_BPS_FILL_MD     X
    CRMOU     BUPR     1000000     BUPA_OUTBOUND_BPR_FILL_CENTRAL     X
    CRMOU     BUPR     1200000     CRM_BUPA_OUTB_BPR_FILL_MD     X
    CRMOU     BUPR     1300000     CRM_BUPA_BDOC_BPR_FILL_DATA     X
    EXTR     BUAG     100000     CRM_BUAG_MAIN_GET_ID_LIST     X
    MERGE     BUPA     1000000     MERGE_BUPA_CENTRAL     X
    MERGE     BUPA     2000000     MERGE_BUPA_FINSERV     X
    MERGE     BUPR     1000000     MERGE_BUPR_CENTRAL     X
    PXYIN     BUPA     1000000     BUPA_INBOUND     X
    R3AOU     BUPA     100000     BUPA_MWX_BDOC_UP_CURRSTATE_SET     X
    XIIN     BUPA     1000000     ABA_BUPA_MAP_PROXY_TO_DDIC     X
    XIIN     BUPA     2000000     ABA_FSBP_MAP_PROXY_TO_DDIC     X
    XIIN     BUPA     2100000     ABA_FSBP_MAP_PROXY_TO_DDIC_1     X
    XIIN     BUPR     1000000     ABA_BUPR_MAP_PROXY_TO_DDIC     X
    XIOUT     BUPA     1000000     ABA_BUPA_MAP_DDIC_TO_PROXY     X
    XIOUT     BUPR     1000000     ABA_BUPR_MAP_DDIC_TO_PROXY     X

  • ISE 1.2 CWA with Multiple PSNs - SessionID Replication / Session Expired

    Hi all.
    I have two Policy Services Nodes (PSNs) in an ISE 1.2 deployment running patch 1. We are using wireless MAB and CWA on 5760 Wireless LAN Controllers running v3.3.3.
    We are hitting an issue wherein a client first passes MAB and then gets redirected to a CWA custom portal. The client then receives a Session Expired message. This seems to be related to the fact that CWA is technically a 2-stage authentication (MAB by the WLC and then CWA by the client). Specifically, it seems to happen when the WLC makes its MAB RADIUS access-request to PSN-1 and then the client comes in to PSN-2 to complete the CWA. This issue does not happen when only one PSN is in use and all authentication traffic (both MAB RADIUS and CWA) is directed at a single PSN.
    Clients resolve the FQDN in the redirect URL using public DNS and a public DNS zone file (call it cwa-portal.example.com). cwa-portal.example.com has two A records for the two PSN nodes. DNS is responding to queries using DNS round-robin.
    I have the PSNs configured in a Node Group for session information replication between PSNs, but this doesn't seem to make a difference in behavior.
    So I ask:
    What is the recommended architecture for CWA when using more than one PSN? It seems that you would need to keep the two authentication flows pinned together so that they both hit the same PSN when using more than one PSN in a deployment. A load balancer balancing on the SessionID string comes to mind (both the RADIUS MAB request and the CWA URL contain this unique per-client SessionID), but that seems terribly overbuilt for a seemingly simple problem. On the other hand, it also seems like using a Node Group setup should easily be able to replicate client SessionIDs to all nodes in the deployment so that this isn't an issue. I.e., if the WLC authenticates MAB on PSN-1, then PSN-1 should tell the Node Group about it such that when the client CWA's on PSN-2, PSN-2 doesn't respond with a Session Expired message.
    Is there any Cisco documentation that talks about this?
    Possibly related:
    https://supportforums.cisco.com/discussion/12131531/ise-12-guest-access-session-expired
    Justin

    Tim,
    Thanks for your reply and confirming my suspicion. Hopefully a future version of ISE will provide automated SessionID synchronization among PSNs so that front-end finagling in a multi-PSN environment won't be necessary.
    For anyone else with this issue who for whatever reason can't implement a load balancer(s), I built an automated EEM applet running on a "watchdog" switch (3750 running 12.2(55)SEE9) using IP SLA tracking that senses when PSN1 is down and then:
    - modifies an ASA to change its client-facing NAT statement for PSN1 to PSN2
    - modifies the primary and HA wireless LAN controllers to change their MAB RADIUS AAA server group to use PSN2
    - reverts the ASA and WLCs to using PSN1 when PSN1 is detected up and running again
    The applet ensures the SessionID authentications stay "glued" together so that both WLCs and the client hit the same PSN for both stages of authentication. It's failover only, not a load balancing solution, but it meets our current project's need for an automated HA environment.
    PM me if you want the code. I have a little too much going on ATM to sanitize and post it. :)
    Justin

  • MySQL 5.5 to Oracle 11gR1 replication (one-way) - cannot replicate

    I have set up GoldenGate replication between MySQL 5.5 and Oracle 11gR1. While trying to set up the initial load, I don't see any data pushed to Oracle from MySQL.
    What could be wrong in my setup? Has anyone tried this kind of setup? I see the report for the replicat and it says data is not replicated, while the extract shows that 4 rows from the table were taken for insert.

    Hi Stev
    I am trying to do the initial load process.
    On the source (MySQL 5.5):
    database ggtest with table test
    table test is defined as
    TEST(COL1 INT)
    The manager is running on the default port 7809.
    The manager parameter file is defined with (PORT 7809).
    There is an initial load extract (EINI01):
    ADD EXTRACT EINI01
    param file
    EXTRACT EINI01
    SOURCEDB [email protected], USERID ggsdev, PASSWORD ggsdev
    RMTHOST 192.168.75.116, MGRPORT 7809
    RMPTTASK REPLICAT, GROUP RINI01
    TABLE ggtest.TEST
    On the target, Oracle 11gR1 (11.1.0.6):
    The database is in archivelog mode, with minimal supplemental logging.
    Replicat process:
    ADD REPLICAT RINI01
    Parameter file for replicat
    REPLICAT RINI01
    USERID ggs_owner, PASSWORD ggs_owner ( added in target database)
    ASSUMTARGETDEFS
    SETENV (NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252")
    MAP ggtest.TEST , TARGET GGTEST.TEST;
    -- to start the initial load ( mysql side has 4 rows, oracle has no row )
    on source, i ran the command
    ggsci> start extract eini01
    I can see in the report that the extract has picked up 4 rows for insert, but on the replicat side no replication is done. No error is reported in ggserr.log, and there was communication between the source-side extract and the target-side manager; subsequently the replicat on the target was started by the manager and stopped normally, but no rows were replicated.
    Between Oracle and Oracle there is no problem, but my actual project is to set up one-way replication from MySQL to Oracle.
    Let me know if you need any other info.
    Thanks
    rafey

  • Snapshot replication slow during purge of master table

    I have basic snapshot/materialized view replication of a big table (around 6 million rows).
    The problem I run into is that when I run a purge of the master table at the master site (delete DML), the snapshot refresh becomes slower. After the purge, the snapshot refresh time goes back to its normal interval.
    I had thought that the snapshot does a simple select, so any exclusive lock on the table should not hinder the performance.
    Has anyone seen this problem before, and if so, what was the workaround?
    The master site and the snapshot site are both 8.1.7.4, both on Tru64 UNIX.
    I don't know if this has any relevance, but the master database uses the rule-based optimizer while the snapshot site uses the cost-based optimizer.
    thanks in advance

    Hello Alan,
    Your problem is to know, inside a table trigger, whether the current DML was caused
    by replication or by a normal local DML.
    One way to solve this in Oracle 8.1.7 (which I use in practice) is the following:
    You can use the functions DBMS_SNAPSHOT.I_AM_A_REFRESH(),
    DBMS_REPUTIL.REPLICATION_IS_ON() and DBMS_REPUTIL.FROM_REMOTE() in the trigger code
    (for details, see the Oracle documentation library).
    For example, a trigger (before insert of each row) at the master side
    on a table which is an updatable snapshot:
    DECLARE
      site_x          VARCHAR2(128) := DBMS_REPUTIL.GLOBAL_NAME;
      timestamp_x     DATE;
      value_time_diff NUMBER;
    BEGIN
      IF (NOT (DBMS_SNAPSHOT.I_AM_A_REFRESH) AND DBMS_REPUTIL.REPLICATION_IS_ON) THEN
        IF NOT DBMS_REPUTIL.FROM_REMOTE THEN
          IF inserting THEN
            :new.info_text := 'Hello table; this entry was caused by local DML';
          END IF;
        END IF;
      END IF;
    END;
    By the way: I have nearly the same configuration here at work, in production for a year now.
    Kind regards
    Steffen Rvckel

  • Reg: Replication of org from client A To client B

    Hi
    I have created one organization (after download from R/3) in a client, let us call it client A.
    I want to see the same organization, which I created in client A, in client B.
    What should I do for this, and what is the replication procedure?
    Can anybody clarify this step by step?
    Thanks & Regards
    Shankar

    Hi Gouri,
    To get SAP Note 327908,
    go to http://service.sap.com/notes
    and search for the note there.
    Thanx
    Saurabh

  • Help needed for replication environment. At least get me good URL links.

    Hi friends,
    Our database has nearly 150 tables and 200 views, 10 database procedures, 5 database triggers, 5 DBMS_JOB jobs, and 15 users. It is Oracle 8i on a Windows 2000 environment. 5000 new records are inserted into the database daily.
    Now I want to have a standby database - a complete read-write replica of the master database - in the following scenario:
    All the clients connect using the connect string "terminal". If anything happens to the "terminal" computer, the users should be able to connect using "terminal1". Let us say this "terminal1" is the standby database, and it should be as functional as the original.
    Can anybody give documentation, easy methods, or a step-by-step guide? We also have access to Metalink, but failed to find an easy method to achieve this.
    thanks in advance

    This is a very complicated thing you want to do: it is what Oracle calls a standby database, or Data Guard - it's part of the Enterprise Edition, but I don't know whether Oracle charges extra for it. If you are not an experienced Oracle DBA (and, forgive me, but some of your other posts suggest you are not) you do not want to try doing this yourself. Pay Oracle's wergild and get their solution for this.
    Replication is really not suitable for this sort of process, as it doesn't function well in real time; this means you cannot guarantee keeping the two databases usably up to date. That causes a problem later on when you need to switch to your backup database, and again when your original database is back online. Plus, replication is flaky and has performance costs (voice of experience).
    Cheers, APC

  • Message filtering in propagation process (stream replication environment)

    Hi!
    We have configured Streams replication in a star topology:
    ORCL2 <=> ORCL1 <=> ORCL3
    where ORCL1 is the "headquarters" and there is no message flow between ORCL2 and ORCL3.
    For some reason we want to filter messages in the propagation processes, e.g. DML captured on ORCL1 should be replicated only to ORCL2 or only to ORCL3. There is one propagation process for each "satellite" database.
    To solve this problem I have written a function:
    FUNCTION Replicate_Lcr (
    p_lcr IN SYS.lcr$_row_record)
    RETURN VARCHAR2 IS
    which decides whether to pass the message (return 'Y') or not (return 'N').
    But there is a problem: the rule is evaluated and the function is executed (there is an insert into the 'stream_log_lcr' table), but the value of the expression seems to be FALSE and the message (LCR) is not being sent to ORCL2 (or to ORCL3).
    When I remove the function 'Replicate_Lcr' from the propagation rule condition, every message captured by the capture process on ORCL1 reaches the destination database (ORCL2 or ORCL3).
    The second observation is that if I run the same code on ORCL2 or ORCL3, everything seems to be OK: there is an insert into the 'stream_log_lcr' table and DML captured on ORCL2 (or ORCL3) appears in ORCL1 (the "headquarters").
    I suppose this could be a problem with the different database version at the "headquarters" (ORCL1), or a configuration issue.
    I will appreciate every suggestion.
    Databases:
    ORCL1: 64-bit Windows, ver. 10.2.0.4.0, Windows 2008 server
    ORCL2: 32-bit Windows, ver. 10.2.0.1.0, Windows XP
    ORCL3: 32-bit Windows, ver. 10.2.0.1.0, Windows XP
    SQL code run on ORCL1:
    CREATE TABLE stream_log_lcr (
      data DATE NOT NULL,
      msg  SYS.lcr$_row_record NOT NULL
    );
    -- simplified, always return 'Y'
    CREATE OR REPLACE FUNCTION Replicate_Lcr (
      p_lcr IN SYS.lcr$_row_record)
    RETURN VARCHAR2 IS
      PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
      IF p_lcr IS NOT NULL THEN
        INSERT INTO stream_log_lcr
        VALUES (SYSDATE, p_lcr);
        COMMIT;
      END IF;
      RETURN 'Y';
    END;
    /
    -- create the propagation process with the above function in the rule condition
    BEGIN
      DBMS_STREAMS_ADM.add_schema_propagation_rules (
        schema_name            => 'data_schema',
        streams_name           => 'primary_to_secondary2',
        source_queue_name      => 'strmadmin.capture_primary',
        destination_queue_name => 'strmadmin.from_primary@ORCL2',
        include_dml            => TRUE,
        include_ddl            => TRUE,
        source_database        => 'ORCL1',
        and_condition          => ' strmadmin.Replicate_Lcr(:dml) = ''Y'' ',
        inclusion_rule         => TRUE,
        queue_to_queue         => TRUE);
    END;
    /
    -- check whether the function 'Replicate_Lcr' was invoked:
    SELECT * FROM stream_log_lcr ORDER BY data;

    Hi porzer,
    In the propagation process (source) there are also 0 errors. But in the apply process (destination), under statistics, the server status is shown as IDLE and the coordinator status is APPLYING. In the capture process (source) there is no error. In the apply process (destination) there is no error. What else can I do, please?
    Here is what I have done so far:
    I had two databases, each with the same table. On one database I changed the mode to ARCHIVELOG; the other database is in NOARCHIVELOG mode only. On the first database I ran the Streams setup, and I ran two .sql files and one .bat file manually, like this:
    SQL>@e:\oracle\product\10.2.0\client_2\sysman\report\OTEST_ADMIN_NON_OMS_SETUP.sql
    SQL>host e:\oracle\product\10.2.0\client_2\sysman\report\OTEST_ADMIN_NON_OMS_exportimport.bat
    SQL>@e:\oracle\product\10.2.0\client_2\sysman\report\OTEST_ADMIN_NON_OMS_startup.sql
    Is there anything else I can do? I don't have a Metalink registration. I hope I am not boring you.
    Thanks in advance.

  • Com.sap.security.core.ume.service failed. J2EE Engine cannot be started

    Hi,
    I have configured SNC on a NetWeaver 7.0 (ABAP+Java) system on Windows Server 2003 with an MS SQL Server 2005 database.
    After the SNC configuration I restarted the server, but the Java server process goes down with exit code -11113. The SNC configuration is working fine, but Java is not running. SDM and the dispatcher are green, but the server process goes gray.
    I have checked the log files under C:\usr\sap\SID\DVEBMGS00\j2ee\cluster\server0\log
    The following is part of the log file.
    #1.5#005056BA6C3F001D0000000F000008D8000489ACAFC86070#1277274683393#com.sap.engine.core.service630.container.ServiceRunner##com.sap.engine.core.service630.container.ServiceRunner#######SAPEngine_System_Thread[impl:5]_71##0#0#Error#1#/System/Server#Java###Core service com.sap.security.core.ume.service failed. J2EE Engine cannot be started.
    [EXCEPTION]
    #1#com.sap.engine.frame.ServiceException: <Localization failed: ResourceBundle='com.sap.engine.frame.KernelResourceBundle', ID='UME initialization failed.', Arguments: []> : Can't find resource for bundle java.util.PropertyResourceBundle, key UME initialization failed.
         at com.sap.security.core.server.ume.service.UMEServiceFrame.start(UMEServiceFrame.java:372)
         at com.sap.engine.frame.ApplicationFrameAdaptor.start(ApplicationFrameAdaptor.java:31)
         at com.sap.engine.core.service630.container.ServiceRunner.startApplicationServiceFrame(ServiceRunner.java:214)
         at com.sap.engine.core.service630.container.ServiceRunner.run(ServiceRunner.java:144)
         at com.sap.engine.frame.core.thread.Task.run(Task.java:64)
         at com.sap.engine.core.thread.impl5.SingleThread.execute(SingleThread.java:79)
         at com.sap.engine.core.thread.impl5.SingleThread.run(SingleThread.java:105)
    Caused by: com.sap.security.core.persistence.datasource.PersistenceException: SNC required for this connection
         at com.sap.security.core.persistence.datasource.imp.R3PersistenceBase.newPersistenceException(R3PersistenceBase.java:178)
         at com.sap.security.core.persistence.datasource.imp.R3PersistenceBase.init(R3PersistenceBase.java:446)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactoryInstance.<init>(PrincipalDatabagFactoryInstance.java:356)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactory.newInstance(PrincipalDatabagFactory.java:156)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactory.getInstance(PrincipalDatabagFactory.java:109)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactory.getInstance(PrincipalDatabagFactory.java:56)
         at com.sap.security.core.InternalUMFactory.initializeUME(InternalUMFactory.java:266)
         at com.sap.security.core.server.ume.service.UMEServiceFrame.start(UMEServiceFrame.java:279)
         ... 6 more
    #1.5#005056BA6C3F001D00000011000008D8000489ACAFC8628E#1277274683393#com.sap.engine.core.Framework##com.sap.engine.core.Framework#######SAPEngine_System_Thread[impl:5]_71##0#0#Fatal#1#/System/Server#Plain###Critical shutdown was invoked. Reason is: Core service com.sap.security.core.ume.service failed. J2EE Engine cannot be started.#
    Please help me to solve the issue.
    Thanks,
    Ajay.

    Hi Tim,
    I have configured it using the 32-bit Kerberos library on NetWeaver 7.0 with MS SQL Server 2005 on Windows Server 2003. I didn't change anything on the Java side. I configured it as per the Kerberos configuration steps in the URL below:
    http://help.sap.com/saphelp_nw70ehp2/helpdata/en/44/0ebf6c9b2b0d1ae10000000a114a6b/frameset.htm
    The configuration was successful and I am able to log in without being asked for a password. But after the configuration, when I restarted, everything on the ABAP side works well, but the Java server process goes down with exit code -11113. One of the log files contains the following error message.
    com.sap.engine.frame.ServiceException: <Localization failed: ResourceBundle='com.sap.engine.frame.KernelResourceBundle', ID='UME initialization failed.', Arguments: []> : Can't find resource for bundle java.util.PropertyResourceBundle, key UME initialization failed.
         at com.sap.security.core.server.ume.service.UMEServiceFrame.start(UMEServiceFrame.java:372)
         at com.sap.engine.frame.ApplicationFrameAdaptor.start(ApplicationFrameAdaptor.java:31)
         at com.sap.engine.core.service630.container.ServiceRunner.startApplicationServiceFrame(ServiceRunner.java:214)
         at com.sap.engine.core.service630.container.ServiceRunner.run(ServiceRunner.java:144)
         at com.sap.engine.frame.core.thread.Task.run(Task.java:64)
         at com.sap.engine.core.thread.impl5.SingleThread.execute(SingleThread.java:79)
         at com.sap.engine.core.thread.impl5.SingleThread.run(SingleThread.java:105)
    Caused by: com.sap.security.core.persistence.datasource.PersistenceException: SNC required for this connection
         at com.sap.security.core.persistence.datasource.imp.R3PersistenceBase.newPersistenceException(R3PersistenceBase.java:178)
         at com.sap.security.core.persistence.datasource.imp.R3PersistenceBase.init(R3PersistenceBase.java:446)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactoryInstance.<init>(PrincipalDatabagFactoryInstance.java:356)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactory.newInstance(PrincipalDatabagFactory.java:156)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactory.getInstance(PrincipalDatabagFactory.java:109)
         at com.sap.security.core.persistence.imp.PrincipalDatabagFactory.getInstance(PrincipalDatabagFactory.java:56)
         at com.sap.security.core.InternalUMFactory.initializeUME(InternalUMFactory.java:266)
         at com.sap.security.core.server.ume.service.UMEServiceFrame.start(UMEServiceFrame.java:279)
         ... 6 more
    [Framework -> criticalShutdown] Core service com.sap.security.core.ume.service failed. J2EE Engine cannot be started.
    Jun 25, 2010 3:05:24 AM             com.sap.engine.core.Framework [SAPEngine_System_Thread[impl:5]_69] Fatal: Critical shutdown was invoked. Reason is: Core service com.sap.security.core.ume.service failed. J2EE Engine cannot be started.
    One of the lines says "SNC required for this connection". What does this mean? What else needs to be done for Java to communicate with ABAP?
    Thanks,
    Ajay.
