Loading data: Creation of entries in the Qualified Table

Hi,
I was wondering if anyone has successfully achieved the following...
I'm looking at the standard materials repository:
- Main table: Product
- Qualified table: Location Data. This qualified table's single non-qualifier field is a lookup to a flat lookup table called Plants.
The flat table is populated with the valid list of plants using check-table synchronisation. The main table is empty, and so is the Location Data table.
To load products into this repository using the Import Manager, it seems to be required to "create" entries in the qualified table (Location Data) first, one for each plant code.
Is this true? I would actually expect the SAP MDM Import Manager client to be able to create the records in both the main table and, if needed, the qualified table. As long as my plant code exists in the Plants lookup table, that should be sufficient. That does not seem to be the case...
Thx,
Dirk

The sequence of import:
- Load the flat lookup tables (in your case, Plants).
- Load the qualified tables (in your case, the non-qualifiers of Location Data).
- Load the main table (in your case, Products).
Though MDM allows you to create lookup values on the fly, I would rather go with this approach.
If you are using Import Manager, you can add qualified table records on the fly.
But if you are using Import Server, you will NOT be able to add qualified table records on the fly. I tested this, and the files end up in the ImportX error directory.

Similar Messages

  • How to download data from the application server into an internal table

    Hi friends,
    I have a file on the application server.
    Now I need to move the data in that file into an internal table.
    Is there any function module for this?
    I need to do it without using the OPEN DATASET and CLOSE DATASET keywords.
    Is that possible?
    Regards,
    Ranjith.

    Hi,
    There is no way to upload data from the application server into an internal table without using OPEN DATASET and CLOSE DATASET. Even if you find a function module for this, its internal logic uses these keywords to read the data from the application server.
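    For reference, a minimal sketch of the standard pattern (the file path and the plain string-table target are assumptions for illustration only):
    DATA: lv_file TYPE string,
          lv_line TYPE string,
          lt_data TYPE TABLE OF string.

    " Hypothetical file path, for illustration only.
    lv_file = '/usr/sap/trans/data/testfile.txt'.

    OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc = 0.
      DO.
        READ DATASET lv_file INTO lv_line.
        IF sy-subrc <> 0.
          EXIT.                    " end of file reached
        ENDIF.
        APPEND lv_line TO lt_data.
      ENDDO.
      CLOSE DATASET lv_file.
    ENDIF.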

  • Not able to see data in the qualified table of the main table in Data Manager

    Hi,
    I have an issue where I am not able to see the data of two qualified tables after populating them.
    This is on MDM 5.5 PS4.
    When populating the data the first time, it shows up in those two table slots on the right side of Data Manager.
    Subsequently, however, it does not show up in those slots; only by right-clicking on the table and selecting "View/Edit" does a window pop up in which the data shows up.
    Unlike the other qualified tables, the data does not show up automatically for these two tables.
    I'd appreciate any suggestion or feedback on this.
    regards,
    -reo

    You may have checked the Filter checkbox next to the Qualified Lookup cell in Data Manager when the current table is the main table.
    The Filter checkbox is used to limit the qualified table records to the current search selections.
    Secondly, check whether there are any qualified links to the main table record you are viewing.
    If not, create the qualified links in Data Manager between the main table record and the qualified table record.
    Once this is done, you will see the display fields of the qualified table records for which links exist for the given main table record.

  • How to select alternate entries from the database table

    Hi Experts,
    Can you help me with how to select alternate entries from a database table?
    Thanks

    As there is no concept of sequence (unless there is a field specifically included for the purpose), there is no meaning to "alternate" records.
    What table do you have in mind, what data does it hold, and why do you want alternate records?
    matt
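    For illustration, a hedged sketch of what "alternate" rows could mean once an explicit order is imposed, here by sorting on the key of the demo table SFLIGHT (the table choice is an assumption, not part of the original question):
    DATA: lt_flights TYPE TABLE OF sflight,
          lt_odd     TYPE TABLE OF sflight,
          ls_flight  TYPE sflight,
          lv_rest    TYPE i.

    SELECT * FROM sflight INTO TABLE lt_flights.
    SORT lt_flights BY carrid connid fldate.

    LOOP AT lt_flights INTO ls_flight.
      lv_rest = sy-tabix MOD 2.
      IF lv_rest = 1.              " keep the 1st, 3rd, 5th, ... row
        APPEND ls_flight TO lt_odd.
      ENDIF.
    ENDLOOP.
    Without such an explicit ordering there is simply no stable notion of "every other record" in a database table.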

  • Issues dealing with the Qualified tables

    Hi,
    I am facing the issues below while dealing with qualified tables.
    Issue #1: Trying to get the data from the qualified table through the main table. The qualified table is set as a supporting ResultDefinition. I am able to get the values only for the non-qualifier fields, not for the qualifier fields.
    Exception: com.sap.mdm.commands.CommandException: com.sap.mdm.internal.protocol.manual.ServerException: Qualifier values are not part of a qualified lookup record
    Issue #2: I need to define a search on the fields of a qualified table with the main table in the ResultDefinition. I am able to define the search on the qualifier fields but not on the non-qualifier fields.
    Exception: com.sap.mdm.commands.CommandException: com.sap.mdm.internal.protocol.manual.ServerException: Field not found
    I would like to know the standard ways to address these issues.
    Thanks,
    Surendra

    Hello,
    I was wondering if this issue is solved now. I'm still facing the problem when executing the RetrieveLimitedQualifierValuesCommand:
    com.sap.mdm.commands.CommandException: com.sap.mdm.internal.protocol.manual.ServerException: Field not found
    Are there other ways to manipulate qualifier field values? Workarounds... alternatives...
    Is it related to the MDM Java API version?  If yes, is there a fix?
    Thanks a lot for your input!
    Regards,
    Pedro

  • Data not getting uploaded in the AQ table.

    Hi All,
    We have an issue with the AQ tables.
    The scenario is that we first send data from Oracle to SOA and then on to a third-party tool.
    For this we first populate data in a custom payload table and then in a custom AQ table.
    Now the data is getting updated in the payload table and the XML is getting generated, but it is not getting interfaced to the third-party tool.
    Reason: data is not getting populated in the custom AQ table from which SOA picks the data and interfaces it to the third-party tool.
    The error is a custom one. While enqueuing the data using the statement below:
    l_chr_corrid := p_in_event.getvalueforparameter ('CORRID');
    since the CORRID is null, it gives us the message "Correlation ID has not been set for the payload."
    Part of the code for setting the WF parameters:
    l_chr_corrid_val := l_chr_event_name || '.order';
    wf_event.addparametertolist (p_name          => l_chr_corrid_name,
                                 p_value         => l_chr_corrid_val,
                                 p_parameterlist => l_param_list);
    l_local_event.setcorrelationid (l_chr_corrid_val);
    l_chr_rule_ret := wf_rule.default_rule (p_in_subscription_guid, l_local_event); -- Returns success
    This is quite urgent, so kindly help us find a solution to this issue.
    Thanks in advance
    Trupti

    Looks like you haven't populated the correlation ID. To make sure it is unique, use the conversation ID, a GUID, etc.
    Not sure which version you are on. If 10g, look at this note (obviously change from File to AQ; the concept is the same):
    http://middleware1.wordpress.com/category/adapters/page/2/
    If 11g, you should be able to see the property when you edit the invoke activity; there is a tab called Properties.
    cheers
    James

  • Delete all entries from the following tables - Follow-up Activities (oracle)

    Hello,
    I performed a homogeneous system copy of our development BW system with the database (Oracle 11.2.0.3) from the BW production system.
    I have already started the Oracle database and the SAP system on the target system/server (the development BW system) and I'm doing some follow-up activities. One of these activities (system copy guide, 6.2.3.2 Activities at Database Level) is to delete all entries from the following tables:
    DBSTATHORA, DBSTAIHORA, DBSTATIORA, DBSTATTORA
    I tried to delete them using SQL*Plus:
    sqlplus /nolog
    SQL> connect /as sysdba
    SQL> delete from DBSTATTORA;
    delete from DBSTATTORA
    ERROR at line 1:
    ORA-00942: table or view does not exist
    ... and it shows me that error message.
    This is strange, because when I go to transaction SE14 and check DBSTATTORA I see that the table exists and contains a lot of entries!
    Why does this happen in SQL*Plus? Am I running the correct SQL statement for this type of task, or not?
    How can I delete the entries of those tables? Can I do that using transaction SE14?
    Can you help me please?
    Thank you,
    samid raif

    Hello
    sqlplus /nolog
    SQL> connect /as sysdba
    SQL> delete from DBSTATTORA;
    delete from DBSTATTORA
    ERROR at line 1:
    ORA-00942: table or view does not exist
    It doesn't surprise me, as you are not specifying the schema name here. Instead it should be:
    delete from SAPSR3.DBSTATTORA;
    This assumes the schema owner is SAPSR3; if the owner is different, replace it with the correct one.
    Regards
    RB
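    As an alternative to SQL*Plus, and only as a hedged sketch (not part of the guide step itself): since these statistics tables exist in the ABAP Dictionary, the same cleanup could in principle be run from inside the SAP system with a small ABAP report, which sidesteps the schema-name issue entirely.
    REPORT zdel_dbstat_tables.
    " Hypothetical helper report: clears the four Oracle statistics tables
    " named in the system copy guide from within the SAP system.
    DELETE FROM dbstathora.
    DELETE FROM dbstaihora.
    DELETE FROM dbstatiora.
    DELETE FROM dbstattora.
    COMMIT WORK.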

  • Re: How can we delete the concurrent node entry from the FND_NODES table

    Hi,
    11.5.10.2 on Oracle Solaris on SPARC (64-bit).
    How can we delete the concurrent node entry from the FND_NODES table without running AutoConfig?
    Currently we have a 3-node RAC and we have decided to remove one node from the RAC. All 3 nodes are registered as concurrent nodes with the application, but the concurrent manager is running on only one node.
    A lot of custom configuration was done at the application web tier level. If we run AutoConfig, we will have to redo those changes again, which is what we are trying to avoid.
    Regards.

    > We are trying to avoid running FND_CLONE.setup_clean because it will delete all the node entries from FND_NODES.
    If those entries are invalid, then they should be deleted. Running AutoConfig after purging the table will populate it with the correct entries.
    > In order to populate the node entries again we need to run AutoConfig, and it will change the server id in FND_NODES; then we need to redo the ADI client configuration on the users' PCs and redo all the changes that we made in jserv and the web server.
    For the ADI clients, you should use the correct server ids, which will be populated in the table for you once you run AutoConfig.
    For the jserv configuration, you can refer to (Customizing an AutoConfig Environment [ID 270519.1]) to preserve all your custom setup/configuration after running AutoConfig.
    > If there is any custom script to delete only one node from FND_NODES, so that we don't need to run AutoConfig after that, I would really appreciate it.
    No.
    Thanks,
    Hussein

  • Getting an error while fetching the data and binding it to the tree table

    Hi All,
    I am getting the error "A navigation paths parameter object has to be defined" while fetching the data and binding it to the tree table.
    Please find the code and screenshot below:
    var oModel = new sap.ui.model.odata.ODataModel("../../../XXXX.xsodata/", true);
    var oTable = sap.ui.getCore().byId("table");
    oTable.setModel(oModel);
    oTable.bindRows({
        path: "/Parent",
        parameters: {expand: "Children"}
    });
    Can anyone please give me a suggestion to rectify this?
    Thanks in Advance,
    Aravindh

    Hi All,
    Please see the code below. It works fine for me.
    var oController = sap.ui.controller("member_assignment");
    var oModel = new sap.ui.model.odata.ODataModel("../../../services/XXXX.xsodata/", true);
    var Context = "/PARENT?$expand=ASSIGNEDCHILD&$select=NAME,ID,ASSIGNEDCHILD/NAME,ASSIGNEDCHILD/ID,ASSIGNEDCHILD/PARENT_ID";
    var oTable = sap.ui.getCore().byId("tblProviders");
    oModel.read(Context, null, null, true, onSuccess, onError);
    function onSuccess(oEventdata) {
        var outputJson = {};
        var p = 0;
        var r = {};
        try {
            if (oEventdata.results) {
                r = oEventdata.results;
            }
        } catch (e) {
            // alert('oEventdata.results failed');
        }
        $.each(r, function(i, j) {
            outputJson[p] = {};
            outputJson[p]["NAME"] = j.NAME;
            outputJson[p]["ID"] = j.ID;
            outputJson[p]["PARENT_ID"] = j.ID;
            outputJson[p]["DELETE"] = 0;
            var m = 0;
            if (j.ASSIGNEDCHILD.results.length > 0) {
                $.each(j.ASSIGNEDCHILD.results, function(a, b) {
                    outputJson[p][m] = { NAME: b.NAME,
                                         ID: b.ID,
                                         PARENT_ID: b.PARENT_ID,
                                         DELETE: 1 };
                    m++;
                });
            }
            p++;
        });
        var oPM = new sap.ui.model.json.JSONModel();
        oPM.setData(outputJson);
        oTable.setModel(oPM);
    }
    function onError(oEvent) {
        console.log("Error on Provider Members");
    }
    oTable.bindRows({
        path: "/"
    });
    Regards
    Aravindh

  • Summarizing entries of the Internal table

    Hi All Experts!!
    I have a problem regarding summarizing the entries of an internal table. I want the amount field of the entries to be summed based on 3 fields: rkst, skst and rkstar. That means the summation should be carried out when the set of these 3 fields is the same; if the set is different, a new entry should be created.
    Here I want to summarize itab gt_yrepost2.
      LOOP AT gt_yrepost2 INTO gwa_yrepost2.
        sum_amount = sum_amount + gwa_yrepost2-amount.
    *****amount to be summed for repeating sets of 3 fields*****
          lwa_sum_yrepost2-amount = sum_amount.
          lwa_sum_yrepost2-kokrs = gwa_yrepost2-kokrs.
          lwa_sum_yrepost2-belnr = gwa_yrepost2-belnr.
          lwa_sum_yrepost2-buzei = gwa_yrepost2-buzei.
          lwa_sum_yrepost2-kstar = gwa_yrepost2-kstar.
          lwa_sum_yrepost2-bukrs = gwa_yrepost2-bukrs.
          lwa_sum_yrepost2-pbukrs = gwa_yrepost2-pbukrs.
          lwa_sum_yrepost2-tcurr = gwa_yrepost2-tcurr.
          lwa_sum_yrepost2-skst = gwa_yrepost2-skst.     "field 1
          lwa_sum_yrepost2-rkst = gwa_yrepost2-rkst.     "field 2
          lwa_sum_yrepost2-rkstar = gwa_yrepost2-rkstar. "field 3
          lwa_sum_yrepost2-txt = gwa_yrepost2-txt.
          APPEND lwa_sum_yrepost2 TO gt_sum_yrepost2.
      ENDLOOP.
    It means that if there is a change in the set of key fields, a new entry should be created; otherwise the entries should be summed into a single entry. In this example gt_sum_yrepost2 contains the summed (summarized) entries.
    The positive thing here is that, for one set of these fields, the other field values will be the same. Hence the other fields can take the value of any of the repeating entries (with the repeated set).
    Please help in this regard.
    Thanks in Advance....
    Prabhas.

    Hi Prabhas,
    You could use the COLLECT statement instead of APPEND (see the sketch below).
    Thanks
    Rekha
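    A minimal sketch of the COLLECT approach, assuming AMOUNT is the only purely numeric field in the line type (COLLECT sums the numeric fields of lines whose character-type fields are identical, which covers the skst/rkst/rkstar sets described above because the remaining fields repeat together with them):
    DATA lwa_sum_yrepost2 LIKE LINE OF gt_sum_yrepost2. " same line type as gt_yrepost2 assumed

    REFRESH gt_sum_yrepost2.
    LOOP AT gt_yrepost2 INTO gwa_yrepost2.
      MOVE-CORRESPONDING gwa_yrepost2 TO lwa_sum_yrepost2.
      " COLLECT adds AMOUNT to an existing line whose non-numeric fields
      " (including skst, rkst and rkstar) already match; otherwise it
      " appends a new line.
      COLLECT lwa_sum_yrepost2 INTO gt_sum_yrepost2.
    ENDLOOP.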

  • Number of entries in the internal table for a particular field value

    Hi All,
    Is it possible to get the count of records in an internal table for a particular field value?
    Currently my requirement is to get the entries from the internal table which do not have two records for a particular field value (say a = 123) whose status is active (say b = 'X').
    Also, please suggest whether I should use LOOP, DELETE or DESCRIBE on the internal table to fulfil this requirement.
    Thanks in advance.
    Pradeep

    Try it like this:
    Create another table itab2 with the same structure as itab1 and move the contents of itab1 to itab2:
    ITAB2[] = ITAB1[].
    Then delete entries from itab2:
    DELETE itab2 WHERE a = '123' AND b = 'X'.
    Then use the DESCRIBE statement to get the number of entries:
    DESCRIBE TABLE itab2 LINES v_lines.
    Hope this helps...
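    As a hedged alternative sketch (field names a and b are taken from the example above, and wa is assumed to have the line type of itab1), the rows matching the condition can also be counted directly, without copying the table:
    DATA: wa       LIKE LINE OF itab1,
          lv_count TYPE i.

    " Count the rows of itab1 that match the condition directly.
    LOOP AT itab1 INTO wa WHERE a = '123' AND b = 'X'.
      lv_count = lv_count + 1.
    ENDLOOP.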

  • " No entry in the conversion table for the syntax group" error

    Hi,
    I'm getting an error in the port setup when I try to run an access test on the logical directory:
    Definition of path
    TFTS02\INTF\RD1\OUTB\MM\RFC WINDOWS NT missing
    Message no. SG024
    Diagnosis
    There is no entry in the conversion table for the syntax group WINDOWS NT and the logical path
    TFTS02\INTF\RD1\OUTB\MM\RFC.
    System Response
    The system cannot generate a platform-specific path.
    Procedure
    Make an entry for the syntax group WINDOWS NT and the logical path
    TFTS02\INTF\RD1\OUTB\MM\RFC in the conversion table using transaction FILE.
    Regards,
    Rajiv

    Hi
    The error is simple: the path TFTS02\INTF\RD1\OUTB\MM\RFC you are trying to access does not exist, so create the path in transaction FILE.
    In the FILE transaction, go to New Entries and maintain the path, and also make sure that the path and the file name you are trying to open exist in AL11.
    Thanks & Regards
    Jyo

  • Making entries in the THOST table

    Hi,
    Can you please let me know the exact purpose of making an entry in the THOST table?
    - Is it just to get a symbolic computer name, in case the computer name is too long, and to use this symbolic name in certain places (e.g. TXCOM)?
    - Most importantly, I want to know whether making an entry in the THOST table will improve the response time of an SAP system in any sense.
    Best Regards,
    Samuel

    Hi Ruchit,
    Thanks for the response.
    I had referred to this note earlier, but it does not provide the answer I am looking for.
    I want to know whether making entries in the THOST table will help improve response time in any way.
    I am also not able to find any info at help.sap.com.
    Please let me know if any of you have updates on this.
    Best Regards,
    Samuel

  • Weird entry in the exception table for monitorenter/monitorexit

    I've created a simple test class with one method:
    public class TestClass {
         int a = 0;
         public void test() {
              synchronized (TestClass.class) {
                   a = 1;
              }
         }
    }
    After opening the generated class file with Eclipse I got the following result (I've removed the parts that are irrelevant):
    public void test();
    0 ldc <Class TestClass> [1]
    2 dup
    3 astore_1
    4 monitorenter
    5 aload_0 [this]
    6 iconst_1
    7 putfield TestClass.a : int [12]
    10 aload_1
    11 monitorexit
    12 goto 18
    15 aload_1
    16 monitorexit
    17 athrow
    18 return
    Exception Table:
    [pc: 5, pc: 12] -> 15 when : any
    [pc: 15, pc: 17] -> 15 when : any
    I do understand that after the monitor is entered it is very important that it is left again with the monitorexit instruction. In the normal case, where no exceptions occur, it will call monitorexit at address 11. But if an exception occurs, it has to make sure monitorexit is still called, and therefore the first entry of the exception table says that on any exception the VM jumps to address 15. Up to here everything is understandable, but what is the second entry in the exception table telling me? If an exception occurs between 15 and 17, then jump to 15... doesn't this cause an endless loop?

    Just saw this topic, have you investigated this further? I agree it's weird, and maybe would've gotten more attention in the compiler forum, because it's not really a VM issue...
    The bytecode in your OP also differs from the example given in JVMS chapter "Compiling for the Java Virtual Machine", section 7.14 Synchronization: http://java.sun.com/docs/books/jvms/second_edition/html/Compiling.doc.html#6530
    In particular, the first monitorexit instruction is not covered by the "finally" handler there. In your example (and some produced with javac 1.6.0_01), it is. Now when one of the monitorexit instructions throws, the bytecode emitted by javac will keep retrying monitorexit until it finally succeeds (assuming that this is possible), only to immediately throw an exception that indicates that the monitorexit instruction failed?! I hesitate to call it a paradox for some other reason, but it is definitely very strange. It's also interesting to note that in all the test classes I looked at (and your example, too), a simple decrement of "end_pc" by 1 in the respective exception table entries would be sufficient to achieve somewhat sane behaviour...

  • Creates a new record in the qualified table while importing

    Hi Guys,
    When we send new bank details to MDM, MDM creates a new record in the Bank Detail qualified lookup table. This is OK. But when we send exactly the same bank details to MDM again, MDM creates another new record in the Bank Detail qualified lookup table. I would expect it to first perform the automap and only add a new record if it cannot find a matching value!
    Can you guys help me out with this?
    Best Regards
    Devaraj PK

    Hi Devaraj,
    Please check these configuration options of Import Manager:
    1. Default Qualified Update - the default setting for whether incoming source subrecords are appended to, completely replace, or update the set of existing qualified links of a qualified lookup field when updating existing records.
    2. Default Matching Qualifiers - the default setting for which qualifiers to use as matching qualifiers when matching existing qualified links while updating existing records.
    Any operation on the qualified tables will depend on the above two options.
    Reward if found useful.
    Regards,
    Jitesh Talreja
