MDM Exception: Key mapping value must be unique. You cannot overwrite key

"ServerException: Key mapping value must be unique. You cannot overwrite key mapping values."
I receive this exception when my code tries to manipulate the key mappings of a record.
The error did not happen on MDM 5.5, but on 7.1 it sometimes appears.
What is the probable reason for this?
How can it be resolved?
The code is approximately as follows:
     String[] keys = keyMapping.getKeys();
     if (recordKeyMapping.containsKeyMapping(remoteSystem)) {
          recordKeyMapping.replace(remoteSystem, keys);
     } else {
          recordKeyMapping.addKeyMapping(keyMapping);
     }
     // Persist
     targetRecord.update();

     // where the methods are:
     public void replace(RemoteSystem remoteSystem, String[] keys) {
          KeyMapping keyMapping = getKeyMapping(remoteSystem);
          if (keyMapping == null) {
               throw new IllegalArgumentException("Can't update keys: key mapping for the system '"
                         + remoteSystem + "' is not found");
          }
          keyMapping.setKeys(keys);
     }

     public void addKeyMapping(KeyMapping keyMapping) {
          RemoteSystem remoteSystem = keyMapping.getRemoteSystem();
          for (int i = 0; i < keyMapping.size(); i++) {
               addKey(remoteSystem, (String) keyMapping.get(i));
          }
     }
Edited by: Vladimir Grigoryev on Oct 5, 2010 11:26 AM

Hello -
I am not sure about that part of the code, but are you trying to retrieve the key mapping from the memory accelerator? This information always needs to be read from the database.
I assume you have also maintained the required key mapping settings properly, i.e. the Key Mapping property for the table in the Console is set to "Yes", along with the other relevant settings.
Also go through the links below for more insight from the tools perspective.
http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/30843106-5539-2b10-75a9-da483911b0d9
http://help.sap.com/javadocs/mdm/sp06/com/sap/mdm/data/KeyMapping.html
It may help
Rgds
Deep
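
For what it's worth, a defensive variant of the snippet from the question sometimes helps here: skip the write entirely when the incoming keys are identical to the ones already stored, so MDM 7.1 is never asked to overwrite an existing key mapping value with itself. This is only a sketch built from the wrapper methods already shown in the question (plus java.util.Arrays), assuming getKeyMapping(remoteSystem) is accessible on recordKeyMapping as the replace() method suggests; it is not a verified fix.

     // Sketch: only touch the key mapping when the key set actually changes.
     String[] keys = keyMapping.getKeys();
     RemoteSystem remoteSystem = keyMapping.getRemoteSystem();

     if (recordKeyMapping.containsKeyMapping(remoteSystem)) {
          String[] existing = recordKeyMapping.getKeyMapping(remoteSystem).getKeys();
          if (!java.util.Arrays.equals(existing, keys)) {
               recordKeyMapping.replace(remoteSystem, keys);
               targetRecord.update();   // persist only when something changed
          }
     } else {
          recordKeyMapping.addKeyMapping(keyMapping);
          targetRecord.update();
     }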

Similar Messages

  • Syndicating Key mapping value from lookup table

    Hi Experts,
    I want to syndicate the remote key value from a lookup table per remote system.
    In the Syndicator, when I map a destination field to the remote key of the lookup table, I get a blank value.

    Hi Mrinmoy,
    Kindly check in the Data Manager whether you have maintained remote keys for the lookup table. If yes, then choose the specified remote system in the "Remote key override" field under Map properties in the Syndicator.
    In case you cannot find, in the "Remote key override" field, the remote system for which the remote key is assigned in the Data Manager, you need to check the type (Outbound) of the remote system in the Console Admin node, because only remote systems whose type is set to Outbound appear under "Remote key override" in the Syndicator.
    After choosing the remote system, map the destination field to the Remote Key value.
    Regards
    Rahul

  • REP-1401: 'no_daysformula': Fatal PL/SQL error occurred. ORA-06503: PL/SQL: Function returned without value. REP-0619: You cannot run without a layout.

    Hi everyone.
    Can anyone tell me what is wrong in this code below?
    Code:
    function NO_DAYSFormula return Number is
    begin
      IF TO_CHAR(TO_DATE(:P_FR_DT, 'DD-MM-RRRR'), 'RRRR') =TO_CHAR(TO_DATE(:ACCT_OPN_DT, 'DD-MM-RRRR'), 'RRRR')
      AND :P_TO_DT<:MATURITY_DATE
      AND :ACCT_OPN_DT>:P_FR_DT
      THEN RETURN (:P_TO_DT-:ACCT_OPN_DT+1);
      ELSIF TO_CHAR(TO_DATE(:P_FR_DT, 'DD-MM-RRRR'), 'RRRR') =TO_CHAR(TO_DATE(:ACCT_OPN_DT, 'DD-MM-RRRR'), 'RRRR')
      AND :P_TO_DT<:MATURITY_DATE
      AND :ACCT_OPN_DT<:P_FR_DT
      THEN RETURN (:P_FR_DT-:P_TO_DT+1);
      ELSIF TO_CHAR(TO_DATE(:P_FR_DT, 'DD-MM-RRRR'), 'RRRR') =TO_CHAR(TO_DATE(:ACCT_OPN_DT, 'DD-MM-RRRR'), 'RRRR')
       AND :P_TO_DT>:MATURITY_DATE
       AND :ACCT_OPN_DT<:P_FR_DT
      THEN RETURN (:P_FR_DT-:MATURITY_DATE+1);
      END IF;
    END;
    It compiles successfully, but when I run the report I get two errors.
    Error 1:
    REP-1401: 'no_daysformula':Fatal PL/SQL error occurred.
    ora-06503: PL/SQL : Function returned without value.
    Error 2:
    REP-0619: You cannot run without a layout.
    Should I use only one RETURN statement?
    Can I use as many RETURN statements as I want?
    What is the exact mistake? Please let me know.
    Thank you.

    Let me clarify the first thing:
    if you get any fatal error while running the report (e.g. a function returning without a value), the report will also show
    REP-0619: You cannot run without a layout.
    So you just need to correct the function 'no_daysformula'.
    First of all, run the report without that formula column.
    If it works fine, then check the return type of your formula column (formula column properties --> Return Value; given the function declaration it should be Number).
    Since a function must always return a single value on every path, check that your formula 'no_daysformula' does so.
    Declare a return variable, for example:
    DECLARE
      V_DAYS NUMBER;
    BEGIN
      -- your code, assigning the result to V_DAYS
      RETURN V_DAYS;
    END;
    Last but not least, use an ELSE condition that returns NULL (or some value) in your code, so that every branch returns something, and check again.
    If any problem persists, let me know.
    Regards,
    Soofi.

  • SEM BSC - Exception while mapping value fields to datasource

    Hi All,
    I am facing an exception while trying to map the value field to a data source by clicking "Refresh Data Sources" on the Value Field tab of the measure definition in the Balanced Scorecard.
    The exception is:
    An exception with the type CX_SY_DYN_CALL_ILLEGAL_TYPE occurred, but was neither handled locally, nor declared in a RAISING clause.
    The SEM used is on NetWeaver 2004s.
    This exception is not allowing me to restrict values for particular characteristic values.

    Hi Marta,
    Your answer gives some idea about the exception.
    My SEM-BW release is 600 with no support package, whereas the BW is 700 with SP 14.
    In SEM the Balanced Scorecard runs fine on SEM-BW release 400, but while dual-coding the same in the upgraded system (SEM-BW 600) this exception occurs.
    It does not allow me to map the value field to the data source, which is a query checked to act as a data source.
    Please help me out.

  • MDM Key Mapping

    Hello Experts,
    Please help me to understand the Key Mapping concept in context to SAP MDM.
    How do the key mapping values get created while importing data through the Import Manager, and what role does key mapping play when importing reference or lookup data for the first time?
    Thanks.

    Hello Rajan,
      As we all know, one of the main functions of MDM is to eliminate duplicates, OK?
      Now consider that you have two offices, one in the US and another in India. Let's say you use Citibank both in India and in the US for your transactions, and of course there are also some other banks that you deal with.
    As you know, any transaction you do with Citibank (US) is not the same as one with Citibank (IND). Even though the bank is the same, these are two different entities.
    Now let's build a table of the banks you transact with in both the US and India:
      BANKS-US
       bank num - 1     bank name - Barclays
                - 2                 South American Bank
                - 3                 Millennium Bank
                - 4                 Citibank
                - 5                 American Express
      BANKS-IND
       bank num - 1     bank name - SBI
                - 2                 ING Vysya
                - 3                 Citibank
                - 4                 Punjab National Bank
    Now when we import all this data into MDM, Citibank will always look like a duplicate record. BUT IT IS NOT ACTUALLY A DUPLICATE, since one Citibank record belongs to the US and the other belongs to India.
      So in order to overcome this, I create remote systems (in the Remote Systems table under the Admin node of the Console), naming the data coming from the US as BANK-US and the data coming from India as BANK-IND.
    Now when we import data into MDM we log in with the respective remote system; for example, to import data from the US I specify my remote system as BANK-US, and similarly for India. When you then look at the data in the Data Manager you find only one record for Citibank.
      If you want to see which Citibank it is, right-click the record and choose "Edit Key Mappings"; there you can see the Citibank keys from both systems.
    The same concept applies to objects as well, i.e. when a similar record belongs to two different objects.

  • Remote System and Remote Key Mapping at a glance

    Hi,
    I want to discuss the concepts of Remote System and remote key mapping.
    A Remote System is a logical system which is defined in the MDM Console for an MDM repository.
    Key mapping can be enabled at each table level.
    The key mapping is used to distinguish records in the Data Manager after running a data import.
    One record can have one remote system with two different keys, but two different records cannot have the same remote system with the same remote key. So the remote key is a unique identifier of a record for a given remote system.
    Whenever we import data from a remote system, the remote system and remote key are mapped for each individual record. Usually all records have different remote keys.
    When syndicating the record back, the default remote key is the one sent to the remote system in the XML file.
    If the same record is updated twice from the same remote system, the remote keys will be different, and the latest version of the record carries the highest remote key.
    Now I have to look at data syndication and remote keys.
    I have not done data syndication, but my understanding is that if there is a duplicate record with the same remote system but different remote keys, both will be syndicated back; but if the same record has two remote keys for the same remote system, only the default remote key is syndicated back.
    Regards
    Kaushik Banerjee

    You are right Kaushik,
    I have not done Data Syndication but my concept tell if there is duplicate record with same remote system but different remote keys both will be syndicated back.
    Yes, but if they are duplicates, they need to be merged.
    But if same record have two remote keys for same remote system then only the default remote key is syndicated back.
    This is after merging. So whichever remote key has the tick mark in the key mapping option (the default), that one will be syndicated back.
    Please refer to these links for a better understanding.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/80eb6ea5-2a2f-2b10-f68e-bf735a45705f
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/7051c376-f939-2b10-7da1-c4f8f9eecc8c%0c
    Hope this helps,
    + An

  • Automatic update of Key Mapping

    Hello MDM Experts,
    We have a legacy system which is the master source of information. If my repository has 10 fields for a material and 6 of them come from the legacy system, the remaining 4 fields come from the R/3 systems, so the data has to be aligned for each material in SAP MDM.
    In the legacy system the key for a material is the material number plus the RState (revision state), whereas for the R/3 systems only the material number is the key. There is also no global-ID concept in this case.
    We need to investigate the possibility of deleting key mapping values from MDM records automatically. I was not able to find such functionality in the standard Import Manager. The scenario:
    On the R/3 side there is a record that is dynamic and always keeps the latest RState (revision state). So when we import that record into MDM we have material number and RState as the key mapping value (ROJ786501/1;R1A).
    When the record in R/3 changes state (the RState changes), it comes to MDM as a new record with a new key mapping (ROJ786501/1;R1B), and the record which had the key mapping (ROJ786501/1;R1A) no longer exists in R/3, so we need to delete the old key mapping value from the old record in MDM. Is it possible to do this in an automated way?
    Points will be awarded for helpful solutions.
    Best Regards,
    CM

    Hello Michael,
    The material number and the RState in the legacy system are two different fields, and RState is a lookup field as of now. But the combination of both is the key in the legacy system.
    The issue is on the R/3 side: there are some materials which do not have the RState in the RState field, but the RState is mentioned in the basic data text of the material, and there are a few materials where the RState is not mentioned at all. On the R/3 side the material number is the key. But we can customize the IDocs by adding one more segment for RState and populate it by pulling the data from the basic data text. When there is no RState for a particular material it would be NULL, and in PI we can replace it with a '/' or something else.
    Now suppose I have a material ROK12456/1 with RSTATE Z1A and have imported it into MDM. The RSTATE for
    ROK12456/1 then changes to Z2A, and R/3 always keeps only the latest RState (revision state). So when we import that record into MDM we have material number and RState as the key mapping value (ROK12456/1;Z1A). When the record in R/3 changes state it comes to MDM as a new record with a new key mapping (ROK12456/1;Z2A), and the record which had the old key mapping (ROK12456/1;Z1A) no longer exists in R/3, so we need to delete the old key mapping value from the old record in MDM. Is it possible to have this type of functionality, and if yes, can it be done in an automated way?
    Best Regards,
    CM
    Edited by: chinmoy mohanty on Feb 7, 2008 10:50 AM
    Edited by: chinmoy mohanty on Feb 7, 2008 11:18 AM

  • Create key mapping using import manager for lookup table FROM EXCEL file

    Hello,
    I would like to create key mappings while importing values via an Excel file.
    The source file contains the key, but how do I map it to the lookup table?
    The table properties enable the creation of key mappings, but during mapping in the Import Manager I can't find any way to map the key.
    E.g.
    the lookup table contains:
    Material Group
    Code
    the Excel file contains:
    MatGroup1  Code   System
    Thanks!
    Shanti

    Hi Shanti,
    Assuming you have already done the points listed below:
    1) Set Key Mapping to "Yes" for your lookup table in the MDM Console
    2) Created a new remote system in the MDM Console
    3) Given your account the proper rights for updating remote key values in the Data Manager through the Import Manager
    Your sample file can contain Material Group and Code alone, which can be exported from the Data Manager via File -> Export To -> Excel if you already have data in the Data Manager.
    Open your sample file in the Import Manager, selecting the remote system for which you want to import the key mapping (do not select MDM as the remote system, as it does not allow you to maintain key mapping values), and the file type Excel.
    Now select your source and destination tables; under the destination fields you will see a new field called [Remote Key].
    Map your source and destination fields accordingly, then clone your source field Code (right-click on Code in the source hierarchy) and map the clone to [Remote Key] if you want the code to be stored as the remote key value.
    In the matching criteria, select the destination field Code as the matching field, and change the default import action to "Update NULL fields" or "Update mapped fields" as required.
    After a successful import you can check the remote key values in the Data Manager.
    Hope this helps
    Thanks
    Sowseel

  • Caching problem w/ primary-foreign key mapping

    I have seen this a couple of times now. It is not consistent enough to
    create a simple reproducible test case, so I will have to describe it to you
    with an example and hope you can track it down. It only occurs when caching
    is enabled.
    Here are the classes:
    class C1 { int id; C2 c2; }
    class C2 { int id; C1 c1; }
    Each class uses application identity using static nested Id classes: C1.Id
    and C2.Id. What is unusual is that the same value is used for both
    instances:
    int id = nextId();
    C1 c1 = new C1(id);
    C2 c2 = new C2(id);
    c1.c2 = c2;
    c2.c1 = c1;
    This all works fine using optimistic transactions with caching disabled.
    Although the integer values are the same, the oids are unique because each
    class defines its own unique oid class.
    Here is the schema and mapping (this works with caching disabled but fails
    with caching enabled):
    table t1: column id integer, column revision integer, primary key (id)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Because the ids are known to be the same, the primary key values are also
    used as foreign key values. Accessing C2.c1 is always non-null when caching
    is disabled. With caching enabled, C2.c1 is usually non-null but sometimes
    null. When it is null we get warnings about dangling references to deleted
    instances with id values of 0 and other similar warnings.
    The workaround is to add a redundant column with the same value. For some
    reason this works around the caching problem (this is unnecessary with
    caching disabled):
    table t1: column id integer, column id2 integer, column revision integer,
    primary key (id), unique index (id2)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id2"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id2"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Needless to say, the extra column adds a lot of overhead, including the
    addition of a second unique index, for no value other than working around
    the caching defect.

    Tom-
    The first thing that I think of whenever I see a problem like this is
    that the equals() and hashCode() methods of your application identity
    classes are not correct. Can you check them to ensure that they are
    written in accordance with the guidelines at:
    http://docs.solarmetric.com/manual.html#jdo_overview_pc_identity_application
    If that doesn't help address the problem, can you post the code for your
    application identity classes so we can double-check, and we will try to
    determine what might be causing the problem.
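    For reference, here is a minimal sketch of what a well-formed application-identity class for C1 could look like (the field and class names follow the example above; this is illustrative, not the poster's actual code):
    // Hedged sketch of an application-identity class for C1. equals() and
    // hashCode() are based solely on the primary-key field, and equals()
    // rejects other identity classes, so a C1.Id and a C2.Id built from the
    // same int value never compare equal.
    public class C1 {
        private int id;
        private C2 c2;

        public static class Id implements java.io.Serializable {
            public int id;

            public Id() {
            }

            public Id(String str) {
                this.id = Integer.parseInt(str);
            }

            public boolean equals(Object other) {
                if (this == other) return true;
                if (other == null || other.getClass() != getClass()) return false;
                return id == ((Id) other).id;
            }

            public int hashCode() {
                return id;
            }

            public String toString() {
                return String.valueOf(id);
            }
        }
    }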
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Key Field Value in FCC

    Hi Experts,
    I have a scenario in PI, where I have 1 Header, n Data Records and 1 Trailer in the source file. This data is coming in CSV format.
    I am using FCC to convert CSV into XML.
    In the FCC, I have used keyFieldValue parameter. For the Header Record, the Key Field Value is constant "H"; for Trailer Record the key field value is constant "T".
    However, for the data records the key field value is not constant. The first character of the data record's key field will always be "D", but the rest of the characters can change.
    Sample File:
    "H","3.04",22/10/2009,16:31:12
    "D2S",21/10/2009,20:00:26,"20044",00666,"S",1
    "D2S",22/10/2009,14:26:20,"20044",00668,"S",1
    "D0S",22/10/2009,08:33:34,"00044",04165,"S",1
    "D0S",22/10/2009,11:59:59,"00044",04166,"S",1
    "T",1393.27,1393.27,8
    Here, the first line is the header line (Key Field Value "H"), the last line is the trailer line (Key Field Value "T"), and all lines in between are data records (Key Field Value starting with "D"). I need to convert this file into XML.
    I have no clue whether this can be done through FCC.
    Any help will be highly appreciated.
    Regards,
    Varun

    Write a simple adapter module. The module will replace the Dxx key fields with a plain D (you can use a simple regex for this).
    After the module, use the MessageTransformBean to do the FCC for you.
    The module might sound complex, but trust me, it is simple logic to implement, and you can then easily do the FCC with the MessageTransformBean.
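    As a rough illustration of the normalisation step only (the class and method names below are made up, and the adapter-module plumbing and the MessageTransformBean configuration are omitted), the core logic is just a per-line regex that collapses every "Dxx" key into "D":
    import java.util.regex.Pattern;

    // Sketch: rewrite the first field of every data record from "Dxx" to "D"
    // so that a constant keyFieldValue of D works in the FCC that follows.
    // Header ("H",...) and trailer ("T",...) lines are left untouched.
    public class KeyFieldNormalizer {
        private static final Pattern DATA_KEY = Pattern.compile("^\"D[^\"]*\"");

        public static String normalize(String payload) {
            StringBuilder out = new StringBuilder();
            for (String line : payload.split("\r?\n")) {
                out.append(DATA_KEY.matcher(line).replaceFirst("\"D\"")).append('\n');
            }
            return out.toString();
        }
    }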

  • Suggested alternative to key mapping?

    Hello!
    I started using Ableton Live for putting together easily accessible virtual instruments for live performance. For example, if my band performs the entire Dark Side of the Moon album, I can create each song in the order of the album with the necessary pianos, synths, and organs, and easily enable/disable them with the use of key mapping (assigning on-screen functions to keys on my laptop's keyboard, just to be thorough). I have an M-Audio Keystation 61, which is a very simple MIDI controller and doesn't have many knobs for on-screen control, so the use of key mapping is important for me.
    I like the layout and simplicity of MainStage and would rather use it for this sort of thing, but without key mapping I have no quick way of (for example) muting and unmuting instruments within each song/performance other than trying to do it with the mouse, which is too sketchy for live performance.
    Also, with Ableton I could assign the volume slider on my MIDI controller to multiple channels/instruments, i.e. control the volume of two or three instruments at once; MainStage only lets me assign one on-screen function per project to a single MIDI hardware control.
    I don't know a ton about MainStage, so my questions are ...
    1. Is there an alternative to key mapping for mainstage?
    2. Is it possible to assign multiple on screen functions to one hardware control?
    Thanks!

    Hi
    Lootinant wrote:
    1. Is there an alternative to key mapping for mainstage?
    If you are 'short' of buttons/switches on your keyboard controller, you could create screen-controls assigned to (say) the lowest notes on your keyboard, and then map them to Next/Previous patch Actions in order to change Patches in MS.
    Lootinant wrote:
    2. Is it possible to assign multiple on screen functions to one hardware control?
    Multiple mappings are possible via the Screen Control Inspector (or the new Assignments and Mappings pane). Select the Screen Control and, in the Inspector, click the '+' button (top right of the Inspector) to add an additional mapping.
    CCT

  • Change Key Mapping for RWMS handheld application

    Dear Experts,
    When we log in to the handheld in RWMS at
    http://hostname:9001/forms/frmservlet?config=rwms1324inst_hh
    we log in with USERID: PAR3214, PASSWD: PAR3214, FACILITY ID: PR.
    Then, if we press CTRL+F4, it logs in.
    My requirement is to change this login key mapping: instead of CTRL+F4 it should be mapped to something else, like SHIFT+F4, particularly for the RWMS handheld applications.
    Regards,
    Ratnesh

    Hi Ratnesh
    RWMS 14.1 Install Guide ( http://docs.oracle.com/cd/E12456_01/rwms/pdf/141/rwms-141-ig-05.pdf ) has good details on this:
    Update fmrweb.res for Keymapping
    The fmrweb.res file is used to specify key-mapping for the radio frequency devices that are set up in the formsweb.cfg file.
    1. Depending on the device, this file may need to be updated.
    2. The installer places a copy in the directory specified in the formsweb.cfg file for each radio frequency URL that is created.
    3. The fmrweb.res file comes with a key mapping of CTRL+<number> for the function keys by default.
    The fmrweb.res file is passed as a parameter in formsweb.cfg via the otherparams variable.
    Thanks
    Amod

  • Certificate [Thumbprint SOME THUMBPRINT] issued to 'CLientMachineName' doesn't have private key or caller doesn't have access to private key.

    Hi,    We are trying to get a client to communicate with the primary Config Manager Site System(MP/DP).
    We have a Config Manager Client Template that was setup using this guide. 
    http://technet.microsoft.com/en-us/library/gg682023.aspx
    We have a Client Cert on the primary site system server (primary config manager server)  based on this template and it meets the requirements specified in this document
    http://technet.microsoft.com/en-us/library/gg699362.aspx
             The Enhanced Key Usage value must contain Client Authentication (1.3.6.1.5.5.7.3.2).
             Client computers must have a unique value in the Subject Name field or in the Subject Alternative Name field.
             SHA-1 and SHA-2 hash algorithms are supported.
             The maximum supported key length is 2048 bits.
    The Cert that we generated for the client meets the same requirements and shows the exact same template id but has a different subject name and alternate name (which is the clients machine name).
    With this setup, we still get the following error
    Certificate [Thumbprint  SOME THUMBPRINT] issued to 'CLientMachineName' doesn't have private key or caller doesn't have access to private key.
    Both the site system and client have the same trusted root cert installed.
    What are we missing or what can we check?    Does the cert check process only need the client certs on both the site system and the client to be from the same template?
    Here is a snippet of the clientidmanagerstartup.log
    <![LOG[HTTPS is enforced for Client. The current state is 63.]LOG]!><time="15:02:32.057+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1" thread="716" file="ccmutillib.cpp:395">
    <![LOG[Begin searching client certificates based on Certificate Issuers]LOG]!><time="15:02:32.058+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1" thread="716"
    file="ccmcert.cpp:3833">
    <![LOG[Certificate Issuer 1 [CN=THE_NAME_OFTHE_CA; DC=DOMAIN; DC=LOCAL]]LOG]!><time="15:02:32.058+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1" thread="716"
    file="ccmcert.cpp:3849">
    <![LOG[Based on Certificate Issuer 'THE_NAME_OFTHE_CA' found Certificate [Thumbprint SOMETHUMBPRINT_1] issued to 'CLIENTMACHINENAME']LOG]!><time="15:02:32.082+300" date="03-12-2014" component="ClientIDManagerStartup"
    context="" type="1" thread="716" file="ccmcert.cpp:3931">
    <![LOG[Begin validation of Certificate [Thumbprint SOMETHUMBPRINT_1] issued to 'CLIENTMACHINENAME']LOG]!><time="15:02:32.082+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1"
    thread="716" file="ccmcert.cpp:1245">
    <![LOG[Completed validation of Certificate [Thumbprint SOMETHUMBPRINT_1] issued to 'CLIENTMACHINENAME']LOG]!><time="15:02:32.085+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1"
    thread="716" file="ccmcert.cpp:1386">
    <![LOG[Completed searching client certificates based on Certificate Issuers]LOG]!><time="15:02:32.085+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1" thread="716"
    file="ccmcert.cpp:3992">
    <![LOG[Begin to select client certificate]LOG]!><time="15:02:32.085+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1" thread="716" file="ccmcert.cpp:4073">
    <![LOG[Begin validation of Certificate [Thumbprint SOMETHUMBPRINT_1] issued to 'CLIENTMACHINENAME']LOG]!><time="15:02:32.085+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1"
    thread="716" file="ccmcert.cpp:1245">
    <![LOG[Certificate [Thumbprint SOMETHUMBPRINT_1] issued to 'CLIENTMACHINENAME' doesn't have private key or caller doesn't have access to private key.]LOG]!><time="15:02:32.086+300" date="03-12-2014" component="ClientIDManagerStartup"
    context="" type="2" thread="716" file="ccmcert.cpp:1372">
    <![LOG[Completed validation of Certificate [Thumbprint SOMETHUMBPRINT_1] issued to 'CLIENTMACHINENAME']LOG]!><time="15:02:32.086+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1"
    thread="716" file="ccmcert.cpp:1386">
    <![LOG[Raising event:
    instance of CCM_ServiceHost_CertRetrieval_Status
        ClientID = "GUID:GUID";
        DateTime = "20140312200232.090000+000";
        HRESULT = "0x87d00283";
        ProcessID = 6380;
        ThreadID = 716;
    ]LOG]!><time="15:02:32.090+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1" thread="716" file="event.cpp:706">
    <![LOG[Failed to submit event to the Status Agent. Attempting to create pending event.]LOG]!><time="15:02:32.092+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="2" thread="716"
    file="event.cpp:728">
    <![LOG[Raising pending event:
    instance of CCM_ServiceHost_CertRetrieval_Status
        ClientID = "GUID:GUID";
        DateTime = "20140312200232.090000+000";
        HRESULT = "0x87d00283";
        ProcessID = 6380;
        ThreadID = 716;
    ]LOG]!><time="15:02:32.092+300" date="03-12-2014" component="ClientIDManagerStartup" context="" type="1" thread="716" file="event.cpp:761">
    <![LOG[Unable to find PKI Certificate matching SCCM certificate selection criteria. 0x87d00283]
    Thanks Lance

    Hi,
    It seems that there is something wrong with your PKI system.
    Here are some steps for your reference.
    SCCM 2012: Part II – Certificate Configuration
    http://gabrielbeaver.me/2012/08/sccm-2012-part-ii-certificate-configuration/
    Note: Microsoft provides third-party contact information to help you find technical support. This contact information may change without notice. Microsoft does not guarantee the accuracy of this third-party contact information.

  • Mapping unique elements to a Key Value Pair using XSLT in ESB

    Hi Guys,
    I need a solution for mapping some of the response elements to key-value pairs in my target schema. How can I achieve this, and what would the XSL look like? It is very urgent.
    Source
    <Source>
        <Element1>One</Element1>
        <Element2>Two</Element2>
        <Action>Manage</Action>
    </Source>
    Target
    <Target>
        <Action>Manage</Action>
        <AdditionalData>
            <KeyValuePair>
                <key>Element1</key>
                <value>One</value>
            </KeyValuePair>
            <KeyValuePair>
                <key>Element2</key>
                <value>Two</value>
            </KeyValuePair>
        </AdditionalData>
    </Target>
    Edited by: user13156113 on May 25, 2010 7:01 AM

    Below is the solution which I finally worked out by myself. Any other solutions would be welcome.
    <ns10:AdditionalData>
        <xsl:for-each select="//node()">
            <xsl:if test="text()">
                <ns16:KeyValuePair>
                    <ns16:Key>
                        <xsl:value-of select="xp20:upper-case(name(.))"/>
                    </ns16:Key>
                    <ns16:Value>
                        <xsl:value-of select="."/>
                    </ns16:Value>
                </ns16:KeyValuePair>
            </xsl:if>
        </xsl:for-each>
    </ns10:AdditionalData>

  • Exception while doing a value mapping lookup

    Hi Experts,
    I am getting the following error while doing a value mapping lookup into MDM: "Exception: [java.lang.NoClassDefFoundError: com/atlas/mdm/xi/MDMLookup]"
    In the mapping I have written a UDF, and in it I call a class and methods whose JAR file is available under Imported Archives.
    Any helpful answers... Thanks in advance
    Thanks
    Tiger Wood

    Hi,
    When you create your jar or zip file to be imported into the Imported Archive, make sure the package path is present during the jar/zip file creation process. In your case, the path for the MDMLookup class must contain: com/atlas/mdm/xi/
    If this path is missing, the runtime will not find the class.
    Also, in your UDF you must either include "com.atlas.mdm.xi.*" in the import text box of the UDF, or fully qualify the class when using it, e.g. use com.atlas.mdm.xi.MDMLookup instead of just MDMLookup.
    Regards,
    Bill
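    If you want to confirm at runtime whether the class is visible at all, a small diagnostic like the sketch below can be used temporarily (the class name is the one from the error message; everything else is plain Java and not part of any PI or MDM API):
    // Diagnostic sketch: report whether com.atlas.mdm.xi.MDMLookup can be loaded
    // by the current classloader. If Class.forName() fails, the imported archive
    // does not contain the class under the path com/atlas/mdm/xi/.
    public class LookupClassCheck {
        public static String check() {
            try {
                Class.forName("com.atlas.mdm.xi.MDMLookup");
                return "MDMLookup found on the classpath";
            } catch (ClassNotFoundException e) {
                return "MDMLookup missing: " + e.getMessage();
            }
        }

        public static void main(String[] args) {
            System.out.println(check());
        }
    }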
