Should I implement a lookup table?

I'm running a test that requires me to output an N-bit word and input a 50-bit boolean array which I need to compare to the expected 50-bit boolean array. 
How should I implement storing these "expected boolean arrays" for easy comparison in my 'pass/fail' subvi?  

LennyBogzy wrote:
I'm running a test that requires me to output an N-bit word and input a 50-bit boolean array which I need to compare to the expected 50-bit boolean array. 
How should I implement storing these "expected boolean arrays" for easy comparison in my 'pass/fail' subvi?  
Store it as a U64, then convert it to a Boolean array in the program.
/Y
LabVIEW 8.2 - 2014
"Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
G# - Free award winning reference based OOP for LV
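
For reference, Yamaeda's suggestion amounts to storing each 50-bit expected pattern in one 64-bit integer and expanding it bit by bit at compare time (LabVIEW's Number To Boolean Array primitive does the expansion step). A minimal Java sketch of the same idea, with the 50-bit width taken from the question:

    import java.util.Arrays;

    public class PassFail {
        // Unpack the low `width` bits of a stored word into a Boolean array (bit 0 first).
        static boolean[] toBoolArray(long word, int width) {
            boolean[] bits = new boolean[width];
            for (int i = 0; i < width; i++) {
                bits[i] = ((word >>> i) & 1L) == 1L;
            }
            return bits;
        }

        // Pass if the measured 50-bit array matches the stored expected pattern.
        static boolean passFail(boolean[] measured, long expectedWord) {
            return Arrays.equals(measured, toBoolArray(expectedWord, measured.length));
        }
    }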

Similar Messages

  • Lookup table

    Hi,
    Situation:
    I have a series of data in an arraylist. The user may input several variables and according to these variables, a calculation will be performed on the data in the arraylist. E.g.
    array = [1 2 3 4 5 6] // fixed variables, of which data 1 is represented by x = 10, data 2 by 20, data 3 by 30 and so on.
    The user inputs, e.g., 10 30 50, ...
    The program will wait until x = 10, then a pointer (A) should be placed at 1 in the array. Result (1) is stored in another array (let's say Parray).
    When x = 20, pointer A is placed at 2. Result 2 is stored in Parray.
    When x = 30, pointer (A) is placed at 3 and, since the user input is 30, another pointer (B) is placed at 1. We now have 2 pointers. The content of one is 3 and the other is 1. A multiplication is performed and the result is placed in Parray, which now equals [1 2 3].
    When x = 40 , pointer A is placed at 4 and pointer B is placed at 2, multiplication is performed and result is stored in Parray which is [1 2 3 8].
    When x = 50, pointer A is at 5 and pointer B is at 3. Since user input is 50 another pointer C is placed at 1 in array. A multiplication is now done with the contents of all the pointers and the result is placed in Parray = 1 2 3 8 15. This process continues until all the user inputs have been exhausted.
    In C I can create dynamic pointers. In Java, as far as I know, I can't.
    I tried to use an ArrayList to implement the problem. From a design point of view, I cannot find any suitable and efficient way of overcoming the problem in Java.
    Could you please provide a high level approach on how I could implement a lookup table with a series of pointers moving in an array? If this is not possible, could you advise an alternative and efficient approach to solve the problem in Java?

    Upon reading the description again, I think I understand the problem. I have no idea where pointers (apparently in the C-sense) come into it. As far as I can tell, A and B simply contain array indexes.
    Please allow me to paraphrase the problem to see if I got it right:
    There is a lookup table which contains 6 values:
    int[] lookup = new int[] { 1, 2, 3, 4, 5, 6 };
    There is a user input that consists of a sequence of numbers: 10, 20, 30, 40, 50.
    The numbers affect two values A and B, which represent indexes in the lookup table.
    For input 10, A = 0, B = (not used)
    For input 20, A = 1, B = (not used)
    For input 30, A = 2, B = 0
    For input 40, A = 3, B = 1
    For input 50, A = 4, B = 2
    (the above could be implemented using a "switch" statement)
    The result of the operation is to multiply the values found at index A and B in the lookup table and put them in a result "array" which should grow dynamically based on the number of inputs.
    To the OP: is this an accurate representation of your problem?
    If so, I see only one "dynamic" data structure here: the results "array". Using an ArrayList would indeed solve that problem (ArrayList is basically a dynamically growing array anyway):
    import java.util.ArrayList;
    import java.util.List;

    public class Lookup {
        private int[] lookup = new int[] { 1, 2, 3, 4, 5, 6 };
        private List<Integer> results = new ArrayList<Integer>();

        public static void main(String[] args) {
            new Lookup().processInput(args);
        }

        private void processInput(String[] args) {
            for (String input : args) { // assuming the program is called like "java example.Lookup 10 30 50"
                int result = calculateResult(input);
                results.add(result);
            }
        }

        private int calculateResult(String input) {
            int indexA = -1;
            int indexB = -1;
            if (input.equals("10")) {
                indexA = 0;
            } else if (input.equals("20")) {
                indexA = 1;
            } else if (input.equals("30")) {
                indexA = 2;
                indexB = 0;
            } // ... cases for "40" and "50" follow the same pattern
            return multiplyValuesAt(indexA, indexB);
        }

        private int multiplyValuesAt(int indexA, int indexB) {
            int valueA = lookup[indexA]; // assuming indexA is always set to a legal value
            int valueB = 1;              // multiply by 1 if indexB is not used
            if (indexB >= 0) {
                valueB = lookup[indexB];
            }
            return valueA * valueB;
        }
    }

  • Creating Lookup table in Labview 7.1

    For a particular input voltage, the output is a distance in cm. For example, if the voltage is 6.2 mV the output should be 2.5 cm; for 6 mV the output is 6 cm. Whenever an input voltage between any of these values is given, the output should be interpolated and displayed in cm. I would like to maintain a lookup table for this purpose in LabVIEW 7.1. The graph between input and output is linear. I would like to know how to create a lookup table and configure it in LabVIEW 7.1.

    Well, since it is still AE Week, here is an AE implementation of a LUT.
    Initialize it with an array of raw values along with an array of the translated values.
    Use the Lookup action to translate raw to translated.
    It uses the Threshold and Interpolate VIs to do the translation.
    Ben
    Message Edited by Ben on 04-12-2007 12:45 PM
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
    Attachments:
    LUT.JPG ‏42 KB
    Look-up_Table.vi ‏38 KB
    Actions.ctl ‏7 KB
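
    For readers without the attachments: the threshold-and-interpolate lookup Ben describes is ordinary linear interpolation between breakpoint pairs. A minimal Java sketch of the idea, assuming the raw breakpoints are sorted ascending and clamping out-of-range inputs (names are illustrative):
    public class Lut {
        // raw[] holds the breakpoints (sorted ascending); translated[] holds the matching outputs.
        static double lookup(double[] raw, double[] translated, double x) {
            if (x <= raw[0]) return translated[0];                            // clamp below range
            if (x >= raw[raw.length - 1]) return translated[raw.length - 1];  // clamp above range
            int i = 1;
            while (raw[i] < x) i++;                                           // first breakpoint >= x
            double t = (x - raw[i - 1]) / (raw[i] - raw[i - 1]);              // fractional position
            return translated[i - 1] + t * (translated[i] - translated[i - 1]);
        }
    }
    For the voltages in the question, lookup(new double[]{6.0, 6.2}, new double[]{6.0, 2.5}, 6.1) returns 4.25, halfway between the two calibration points.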

  • PowerPivot - Create a Lookup Table and Calculate Totals via PowerQuery

    Hi,
    I have got a question to powerpivot/powerquery.
    I have got one source file "product-sku.txt" with product data (product number, product size, product quantity etc.).
    In powerpivot I created via this text file 2 powerpivot tables:
    product-sku and
    products.
    The "products" table is a lookup table and was created via powerquery using the columns prodnumber, removing the prodsize and the prodquantity columns and then removing duplicates.
    My question: How could I show/leave a column prodquantity in the lookup table "products" which shows the total of all sizes per prodnumber?
    I need this prodquantity in the lookup table to do a banding analysis via the "products" table (e.g. products with quantity 0-100, 101-200 etc.). 
    I give you an example:
    source file columns (product-sku.txt):
    Source 
    Date
    ProdNumber
    ProdSize
    ProdQuantity
    ProdGroup
    ProdSubGroup
    ProdCostPrice
    ProdSellingPrice
    The powerpivot table "product-sku" contains all columns from the txt file above
    The lookup table "products" created via powerquery has the following columns:
    Source
    Date
    ProdNumber
    ProdQuantity (this column I would wish to add; if a prodnumber 123 had two sizes (36 and 38) with quantities of 5 and 10, the prodquantity should add up in the lookup table to 15. How could this be achieved?)
    I enclose a link to my dropbox with example files: PowerPivot-Example-Files
    Thank you for any help.
    Chiemo

    Chiemo,
    If you would like to consolidate to one table as Olaf has suggested, that would be very easy to do. I have included the modified DAX for the calculated column below. This calculated column would be created in the 'product-sku' table itself.
    You are correct in your assumption that you need an explicitly calculated column to most easily do banding analysis.
    Olaf is correct that avoiding the creation of a separate 'products' table as you have done is a good idea. I was not thinking about modeling best practices when I replied. If the only purpose of your 'products' table was to create this calculated column,
    then I do suggest deleting that table and implementing the calculated column in 'product-sku' with the DAX below.
    Edit: If you need to use 'products' as a dimension table which will have a relationship to a fact table, then it will be necessary to keep it. PowerPivot does not natively handle a many-to-many relationship. Dimension tables must have a unique key. If [ProdNumber]
    is the key, then it will be necessary to have your 'products' table. If you need to implement a many-to-many relationship, please see this
    post as a primer.
    =
    CALCULATE (
        SUM ( 'product-sku'[ProdQuantity] ),
        'product-sku'[ProdNumber] = EARLIER ( 'product-sku'[ProdNumber] )
    )

  • Range based Flex field validation based on a Lookup table

    Hi all,
    I am trying to create a validation on one of the flexfields under the HRMS application. Table: PAY_ELEMENT_ENTRIES_F
    The validation is pretty simple, but I am struggling to implement it.
    Assume there exists two flex fields;
    Field1 contains State Information
    Field2 contains Percentage Information
    The above two values would be entered by the user.
    The validation should be like this:
    Field1 - state code entered by the user
    Field2 - we have a separate lookup where we have set up lots of state-specific information. Assume ATTRIBUTE3 and ATTRIBUTE4 define the min and max range, which would be configured during setup. The user should enter a percentage value between ATTRIBUTE3 and ATTRIBUTE4.
    I have created a table validation with select 1 from dual and the following where clause:
    exists ( select null from fnd_common_lookups l, fnd_sessions sess
    where
    l.lookup_type like 'CUSTOM_US_STATE_RULES'
    and sess.session_id = userenv('sessionid')
    and sess.effective_date between l.start_date_active
    and NVL(l.end_date_active, sess.effective_date)
    and l.attribute1 = '01' -- consider 01 state alone for the time being
    and :$FLEX$.ENTRY_INFORMATION4 between to_number(l.attribute4) and to_number(l.attribute5) )
    When I compile the flexfield, it errors out stating an invalid reference to ENTRY_INFORMATION4.
    ENTRY_INFORMATION4 is the field to which I am going to attach this validation.
    How do I validate a value of the flexfield against the range of values available in another table (in this case a lookup table)?
    Any ideas on how to implement this?
    Edited by: vaibhav468 on Sep 29, 2008 3:05 PM

    Thanks so much for the reply. Apparently the solution you suggested may not work 100%, as data entry also happens via an API, and the doc mentions that the special validations happen only via Forms.
    But I implemented it in a crude way, and it works!
    Based on the value entered in the first field (Field1), the value set I use in the second field (Field2, a table-based value set) generates all possible percentages and displays the valid ones.
    My table is: (select trim(to_char(rownum/100,'990D99')) pct from fnd_columns a where rownum<=10000) a, fnd_common_lookups l
    My Where clause:
    where l.lookup_type='CUSTOM_US_STATE_RULES'
    and l.attribute1=substr(:ENTRY.USER_ENTRY4,1,2)
    and a.pct between l.attribute4 and l.attribute5
    I generated the set of sequence numbers with the help of rownum from a table which definitely contains more than 10000 rows.
    I was glad that FND allowed me to use an inline view in the validation table, not restricting me to the tables available for that application alone.
    Thanks again.

  • Multiple columns (named the same originally) and mapped to the same lookup table are causing a Cube Build issue

    Hey folks, looking for some insight here.
    I've an implementation that contains some custom Enterprise columns mapped to lookup tables.  In the instance I'm working with now, it looks like there was/is an issue with one of those columns.  In this scenario, I have a column named
    ProjectType, created initially with that name, mapped to a lookup table.  This field's name was then changed to
    Project Type.  After that, it looks like another column was created, also called
    ProjectType.  So now, we have what I would have originally thought was two distinct columns, even though the names used are the same.
    Below is the error we're currently getting during the Cube Build Process...
    PWA:http://ps2010/PWA, ServiceApp:Project Web App, User:DOMAIN\user, PSI: SqlException occurred in DAL:  <Error><Class>1</Class><LineNumber>1</LineNumber><Number>4506</Number><Procedure>MSP_EpmProject_OlapView_B8546719-4D4C-473A-84B1-89DEDA2307E0</Procedure> 
    <Message>  System.Data.SqlClient.SqlError: Column names in each view or function must be unique. Column name 'ProjectType' in view or function 'MSP_EpmProject_OlapView_B8546719-4D4C-473A-84B1-89DEDA2307E0' is specified more than once.  </Message> 
    <CallStack>   
     at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)   
     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)   
     at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)   
     at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)   
     at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)   
     at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)   
     at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)   
     at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()   
     at Microsoft.Office.Project.Server.DataAccessLayer.DAL.SubDal.ExecuteStoredProcedureNoResult(String storedProcedureName, SqlParameter[] parameters)  </CallStack>  </Error>
    I've tried deleting the one column, but the build still gives the above error.
    Any thoughts as to how the above could be resolved?
    Thanks! - M
    Michael Mukalian | Jan 2010 - Dec 2010 MVP SharePoint Services | MCTS: MOSS 2007 Configuration | http://www.mukalian.com/blog

    We tried taking it out of the cubes, and it builds fine.  The challenge we're having is in building the cubes with that custom field "ProjectType".  It's as if the cubes still hold some reference to it even when it's deleted.
    Since the OLAP View ('MSP_EpmProject_OlapView_{guid}') is recreated, would it be as simple as deleting that View, and trying to recreate?
    Thanks - M
    Michael Mukalian | Jan 2010 - Dec 2010 MVP SharePoint Services | MCTS: MOSS 2007 Configuration | http://www.mukalian.com/blog

  • Distortion Correction with Lookup Table

    Hello,
    I am trying to run an image through lookup tables in order to correct some distortion.
    I have two 2D lookup tables, "LUTsXnew.raw" and "LUTsYnew.raw", which give the new (x,y) coordinates to which the pixel intensities from the original image "writing_final_D.raw" should be moved in the corrected image.
    However, LabVIEW takes too much time (12 seconds). If you have some experience with arrays and images, can you please have a look at my code and tell me what I am doing wrong?
    I am running the same algorithm in C on the same PC and it takes less than 1 second.
    Thanks,
    CK
    PS: I am attaching the VI plus the required raw images.
    Please ignore the resulting image; I just care about speed for the moment.
    Attachments:
    images1.zip ‏287 KB
    distortion correction_set pixel1.vi ‏81 KB

    CK,
    I cannot run your VI because Vision is not supported on my platform. So my comments are general rather than specific.
    1. Your while loops can be replaced with for loops because you know the number of iterations in advance.
    2. If you transpose the arrays before entering the loops, you can use autoindexing rather than using Index array in the inner loop.
    3. If you update the inner and outer iteration count indicators only every 100 iterations, these displays, which are meaningless to the user, will stop using up processor time. Eliminate them completely for the fastest performance.
    4. Avoid coercion inside the loops. The IMAQ Set Pixel Value function has coercion dots on three inputs. Doing explicit conversions outside the loops may be faster.
    5. The Tick Count functions should be in frames of their own to assure accurate timing.
    6. Local variables are not needed and are generally slower than wires.
    Lynn
    Attachments:
    distortion correction_set pixel1.2.vi ‏69 KB
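
    For reference, the remap itself is a single indexed read per pixel, so a 12-second runtime points at per-pixel overhead (coercion dots, indicator updates, per-pixel Set Pixel calls) rather than the algorithm itself. A hedged Java sketch of the core loop, assuming row-major flat arrays and in-range LUT entries (names are illustrative):
    public class Remap {
        // src, lutX, lutY and the result are width*height arrays in row-major order;
        // lutX/lutY give, for each output pixel, the source coordinates to copy from.
        static int[] remap(int[] src, int[] lutX, int[] lutY, int width, int height) {
            int[] dst = new int[width * height];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int i = y * width + x;
                    dst[i] = src[lutY[i] * width + lutX[i]]; // one lookup per pixel
                }
            }
            return dst;
        }
    }
    In LabVIEW terms this corresponds to Lynn's points 1 and 2: a pair of autoindexing for loops over a plain array, with the image converted to an array once outside the loops.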

  • Creating a single context index on a one-to-many and lookup table

    Hello,
    I've been successfully setting up text indexes on multiple columns on the same table (using MULTI_COLUMN_DATASTORE preferences), but now I have a situation with a one-to-many data collection table (with a FK to a lookup table), and I need to search columns across both of these tables. Sample code below, more of my chattering after the code block:
    CREATE TABLE SUBMISSION
    ( SUBMISSION_ID             NUMBER(10)          NOT NULL,
      SUBMISSION_NAME           VARCHAR2(100)       NOT NULL
    );
    CREATE TABLE ADVISOR_TYPE
    ( ADVISOR_TYPE_ID           NUMBER(10)          NOT NULL,
      ADVISOR_TYPE_NAME         VARCHAR2(50)        NOT NULL
    );
    CREATE TABLE SUBMISSION_ADVISORS
    ( SUBMISSION_ADVISORS_ID    NUMBER(10)          NOT NULL,
      SUBMISSION_ID             NUMBER(10)          NOT NULL,
      ADVISOR_TYPE_ID           NUMBER(10)          NOT NULL,
      FIRST_NAME                VARCHAR(50)         NULL,
      LAST_NAME                 VARCHAR(50)         NULL,
      SUFFIX                    VARCHAR(20)         NULL
    );
    INSERT INTO SUBMISSION (SUBMISSION_ID, SUBMISSION_NAME) VALUES (1, 'Some Research Paper');
    INSERT INTO SUBMISSION (SUBMISSION_ID, SUBMISSION_NAME) VALUES (2, 'Thesis on 17th Century Weather Patterns');
    INSERT INTO SUBMISSION (SUBMISSION_ID, SUBMISSION_NAME) VALUES (3, 'Statistical Analysis on Sunny Days in March');
    INSERT INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME) VALUES (1, 'Department Chair');
    INSERT INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME) VALUES (2, 'Department Co-Chair');
    INSERT INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME) VALUES (3, 'Professor');
    INSERT INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME) VALUES (4, 'Associate Professor');
    INSERT INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME) VALUES (5, 'Scientist');
    INSERT INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX) VALUES (1,1,2,'John', 'Doe', 'PhD');
    INSERT INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX) VALUES (2,1,2,'Jane', 'Doe', 'PhD');
    INSERT INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX) VALUES (3,2,3,'Johan', 'Smith', NULL);
    INSERT INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX) VALUES (4,2,4,'Magnus', 'Jackson', 'MS');
    INSERT INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX) VALUES (5,3,5,'Williard', 'Forsberg', 'AMS');
    COMMIT;
    I want to be able to create a text index to lump these fields together:
    SUBMISSION_ADVISORS.FIRST_NAME
    SUBMISSION_ADVISORS.LAST_NAME
    SUBMISSION_ADVISORS.SUFFIX
    ADVISOR_TYPE.ADVISOR_TYPE_NAME
    I've looked at DETAIL_DATASTORE and USER_DATASTORE, but the examples in the Oracle docs for DETAIL_DATASTORE leave me a little perplexed. It seems like this should be pretty straightforward.
    Ideally, I'm trying to avoid creating new columns and to keep the trigger adjustments to a minimum. But I'm open to any and all suggestions. Thanks for your time and thoughts.
    -Jamie

    I would create a procedure that creates a virtual document with tags, which is what the multi_column_datastore does behind the scenes. Then I would use that procedure in a user_datastore, so the result is the same for multiple tables as what a multi_column_datastore does for one table. I would also use either auto_section_group or some other type of section group, so that you can search using WITHIN as with the multi_column_datastore. Please see the demonstration below.
    SCOTT@orcl_11gR2> -- tables and data that you provided:
    SCOTT@orcl_11gR2> CREATE TABLE SUBMISSION
      2  ( SUBMISSION_ID           NUMBER(10)          NOT NULL,
      3    SUBMISSION_NAME           VARCHAR2(100)          NOT NULL
      4  )
      5  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE ADVISOR_TYPE
      2  ( ADVISOR_TYPE_ID           NUMBER(10)          NOT NULL,
      3    ADVISOR_TYPE_NAME      VARCHAR2(50)          NOT NULL
      4  )
      5  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE SUBMISSION_ADVISORS
      2  ( SUBMISSION_ADVISORS_ID      NUMBER(10)          NOT NULL,
      3    SUBMISSION_ID           NUMBER(10)          NOT NULL,
      4    ADVISOR_TYPE_ID           NUMBER(10)          NOT NULL,
      5    FIRST_NAME           VARCHAR(50)          NULL,
      6    LAST_NAME           VARCHAR(50)          NULL,
      7    SUFFIX                VARCHAR(20)          NULL
      8  )
      9  /
    Table created.
    SCOTT@orcl_11gR2> INSERT ALL
      2  INTO SUBMISSION (SUBMISSION_ID, SUBMISSION_NAME)
      3    VALUES (1, 'Some Research Paper')
      4  INTO SUBMISSION (SUBMISSION_ID, SUBMISSION_NAME)
      5    VALUES (2, 'Thesis on 17th Century Weather Patterns')
      6  INTO SUBMISSION (SUBMISSION_ID, SUBMISSION_NAME)
      7    VALUES (3, 'Statistical Analysis on Sunny Days in March')
      8  SELECT * FROM DUAL
      9  /
    3 rows created.
    SCOTT@orcl_11gR2> INSERT ALL
      2  INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME)
      3    VALUES (1, 'Department Chair')
      4  INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME)
      5    VALUES (2, 'Department Co-Chair')
      6  INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME)
      7    VALUES (3, 'Professor')
      8  INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME)
      9    VALUES (4, 'Associate Professor')
    10  INTO ADVISOR_TYPE (ADVISOR_TYPE_ID, ADVISOR_TYPE_NAME)
    11    VALUES (5, 'Scientist')
    12  SELECT * FROM DUAL
    13  /
    5 rows created.
    SCOTT@orcl_11gR2> INSERT ALL
      2  INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX)
      3    VALUES (1,1,2,'John', 'Doe', 'PhD')
      4  INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX)
      5    VALUES (2,1,2,'Jane', 'Doe', 'PhD')
      6  INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX)
      7    VALUES (3,2,3,'Johan', 'Smith', NULL)
      8  INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX)
      9    VALUES (4,2,4,'Magnus', 'Jackson', 'MS')
    10  INTO SUBMISSION_ADVISORS (SUBMISSION_ADVISORS_ID, SUBMISSION_ID, ADVISOR_TYPE_ID, FIRST_NAME, LAST_NAME, SUFFIX)
    11    VALUES (5,3,5,'Williard', 'Forsberg', 'AMS')
    12  SELECT * FROM DUAL
    13  /
    5 rows created.
    SCOTT@orcl_11gR2> -- constraints presumed based on your description:
    SCOTT@orcl_11gR2> ALTER TABLE submission ADD CONSTRAINT submission_id_pk
      2    PRIMARY KEY (submission_id)
      3  /
    Table altered.
    SCOTT@orcl_11gR2> ALTER TABLE advisor_type ADD CONSTRAINT advisor_type_id_pk
      2    PRIMARY KEY (advisor_type_id)
      3  /
    Table altered.
    SCOTT@orcl_11gR2> ALTER TABLE submission_advisors ADD CONSTRAINT submission_advisors_id_pk
      2    PRIMARY KEY (submission_advisors_id)
      3  /
    Table altered.
    SCOTT@orcl_11gR2> ALTER TABLE submission_advisors ADD CONSTRAINT submission_id_fk
      2    FOREIGN KEY (submission_id) REFERENCES submission (submission_id)
      3  /
    Table altered.
    SCOTT@orcl_11gR2> ALTER TABLE submission_advisors ADD CONSTRAINT advisor_type_id_fk
      2    FOREIGN KEY (advisor_type_id) REFERENCES advisor_type (advisor_type_id)
      3  /
    Table altered.
    SCOTT@orcl_11gR2> -- resulting data:
    SCOTT@orcl_11gR2> COLUMN submission_name FORMAT A45
    SCOTT@orcl_11gR2> COLUMN advisor      FORMAT A40
    SCOTT@orcl_11gR2> SELECT s.submission_name,
      2           a.advisor_type_name || ' ' ||
      3           sa.first_name || ' ' ||
      4           sa.last_name || ' ' ||
      5           sa.suffix AS advisor
      6  FROM   submission_advisors sa,
      7           submission s,
      8           advisor_type a
      9  WHERE  sa.advisor_type_id = a.advisor_type_id
    10  AND    sa.submission_id = s.submission_id
    11  /
    SUBMISSION_NAME                               ADVISOR
    Some Research Paper                           Department Co-Chair John Doe PhD
    Some Research Paper                           Department Co-Chair Jane Doe PhD
    Thesis on 17th Century Weather Patterns       Professor Johan Smith
    Thesis on 17th Century Weather Patterns       Associate Professor Magnus Jackson MS
    Statistical Analysis on Sunny Days in March   Scientist Williard Forsberg AMS
    5 rows selected.
    SCOTT@orcl_11gR2> -- procedure to create virtual documents:
    SCOTT@orcl_11gR2> CREATE OR REPLACE PROCEDURE submission_advisors_proc
      2    (p_rowid IN           ROWID,
      3       p_clob     IN OUT NOCOPY CLOB)
      4  AS
      5  BEGIN
      6    FOR r1 IN
      7        (SELECT *
      8         FROM      submission_advisors
      9         WHERE  ROWID = p_rowid)
    10    LOOP
    11        IF r1.first_name IS NOT NULL THEN
    12          DBMS_LOB.WRITEAPPEND (p_clob, 12, '<first_name>');
    13          DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (r1.first_name), r1.first_name);
    14          DBMS_LOB.WRITEAPPEND (p_clob, 13, '</first_name>');
    15        END IF;
    16        IF r1.last_name IS NOT NULL THEN
    17          DBMS_LOB.WRITEAPPEND (p_clob, 11, '<last_name>');
    18          DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (r1.last_name), r1.last_name);
    19          DBMS_LOB.WRITEAPPEND (p_clob, 12, '</last_name>');
    20        END IF;
    21        IF r1.suffix IS NOT NULL THEN
    22          DBMS_LOB.WRITEAPPEND (p_clob, 8, '<suffix>');
    23          DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (r1.suffix), r1.suffix);
    24          DBMS_LOB.WRITEAPPEND (p_clob, 9, '</suffix>');
    25        END IF;
    26        FOR r2 IN
    27          (SELECT *
    28           FROM   submission
    29           WHERE  submission_id = r1.submission_id)
    30        LOOP
    31          DBMS_LOB.WRITEAPPEND (p_clob, 17, '<submission_name>');
    32          DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (r2.submission_name), r2.submission_name);
    33          DBMS_LOB.WRITEAPPEND (p_clob, 18, '</submission_name>');
    34        END LOOP;
    35        FOR r3 IN
    36          (SELECT *
    37           FROM   advisor_type
    38           WHERE  advisor_type_id = r1.advisor_type_id)
    39        LOOP
    40          DBMS_LOB.WRITEAPPEND (p_clob, 19, '<advisor_type_name>');
    41          DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (r3.advisor_type_name), r3.advisor_type_name);
    42          DBMS_LOB.WRITEAPPEND (p_clob, 20, '</advisor_type_name>');
    43        END LOOP;
    44    END LOOP;
    45  END submission_advisors_proc;
    46  /
    Procedure created.
    SCOTT@orcl_11gR2> SHOW ERRORS
    No errors.
    SCOTT@orcl_11gR2> -- examples of virtual documents that procedure creates:
    SCOTT@orcl_11gR2> DECLARE
      2    v_clob  CLOB := EMPTY_CLOB();
      3  BEGIN
      4    FOR r IN
      5        (SELECT ROWID rid FROM submission_advisors)
      6    LOOP
      7        DBMS_LOB.CREATETEMPORARY (v_clob, TRUE);
      8        submission_advisors_proc (r.rid, v_clob);
      9        DBMS_OUTPUT.PUT_LINE (v_clob);
    10        DBMS_LOB.FREETEMPORARY (v_clob);
    11    END LOOP;
    12  END;
    13  /
    <first_name>John</first_name><last_name>Doe</last_name><suffix>PhD</suffix><submission_name>Some
    Research Paper</submission_name><advisor_type_name>Department Co-Chair</advisor_type_name>
    <first_name>Jane</first_name><last_name>Doe</last_name><suffix>PhD</suffix><submission_name>Some
    Research Paper</submission_name><advisor_type_name>Department Co-Chair</advisor_type_name>
    <first_name>Johan</first_name><last_name>Smith</last_name><submission_name>Thesis on 17th Century
    Weather Patterns</submission_name><advisor_type_name>Professor</advisor_type_name>
    <first_name>Magnus</first_name><last_name>Jackson</last_name><suffix>MS</suffix><submission_name>The
    sis on 17th Century Weather Patterns</submission_name><advisor_type_name>Associate
    Professor</advisor_type_name>
    <first_name>Williard</first_name><last_name>Forsberg</last_name><suffix>AMS</suffix><submission_name
    Statistical Analysis on Sunny Days inMarch</submission_name><advisor_type_name>Scientist</advisor_type_name>
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- user_datastore that uses procedure:
    SCOTT@orcl_11gR2> BEGIN
      2    CTX_DDL.CREATE_PREFERENCE ('sa_datastore', 'USER_DATASTORE');
      3    CTX_DDL.SET_ATTRIBUTE ('sa_datastore', 'PROCEDURE', 'submission_advisors_proc');
      4  END;
      5  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- index (on optional extra column) that uses user_datastore and section group:
    SCOTT@orcl_11gR2> ALTER TABLE submission_advisors ADD (any_column VARCHAR2(1))
      2  /
    Table altered.
    SCOTT@orcl_11gR2> CREATE INDEX submission_advisors_idx
      2  ON submission_advisors (any_column)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  PARAMETERS
      5    ('DATASTORE     sa_datastore
      6        SECTION GROUP     CTXSYS.AUTO_SECTION_GROUP')
      7  /
    Index created.
    SCOTT@orcl_11gR2> -- what is tokenized, indexed, and searchable:
    SCOTT@orcl_11gR2> SELECT token_text FROM dr$submission_advisors_idx$i
      2  /
    TOKEN_TEXT
    17TH
    ADVISOR_TYPE_NAME
    AMS
    ANALYSIS
    ASSOCIATE
    CENTURY
    CHAIR
    CO
    DAYS
    DEPARTMENT
    DOE
    FIRST_NAME
    FORSBERG
    JACKSON
    JANE
    JOHAN
    JOHN
    LAST_NAME
    MAGNUS
    MARCH
    PAPER
    PATTERNS
    PHD
    PROFESSOR
    RESEARCH
    SCIENTIST
    SMITH
    STATISTICAL
    SUBMISSION_NAME
    SUFFIX
    SUNNY
    THESIS
    WEATHER
    WILLIARD
    34 rows selected.
    SCOTT@orcl_11gR2> -- sample searches across all data:
    SCOTT@orcl_11gR2> VARIABLE search_string VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search_string := 'professor'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> SELECT s.submission_name,
      2           a.advisor_type_name || ' ' ||
      3           sa.first_name || ' ' ||
      4           sa.last_name || ' ' ||
      5           sa.suffix AS advisor
      6  FROM   submission_advisors sa,
      7           submission s,
      8           advisor_type a
      9  WHERE  CONTAINS (sa.any_column, :search_string) > 0
    10  AND    sa.advisor_type_id = a.advisor_type_id
    11  AND    sa.submission_id = s.submission_id
    12  /
    SUBMISSION_NAME                               ADVISOR
    Thesis on 17th Century Weather Patterns       Professor Johan Smith
    Thesis on 17th Century Weather Patterns       Associate Professor Magnus Jackson MS
    2 rows selected.
    SCOTT@orcl_11gR2> EXEC :search_string := 'doe'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> /
    SUBMISSION_NAME                               ADVISOR
    Some Research Paper                           Department Co-Chair John Doe PhD
    Some Research Paper                           Department Co-Chair Jane Doe PhD
    2 rows selected.
    SCOTT@orcl_11gR2> EXEC :search_string := 'paper'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> /
    SUBMISSION_NAME                               ADVISOR
    Some Research Paper                           Department Co-Chair John Doe PhD
    Some Research Paper                           Department Co-Chair Jane Doe PhD
    2 rows selected.
    SCOTT@orcl_11gR2> -- sample searches within specific columns:
    SCOTT@orcl_11gR2> EXEC :search_string := 'chair'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> SELECT s.submission_name,
      2           a.advisor_type_name || ' ' ||
      3           sa.first_name || ' ' ||
      4           sa.last_name || ' ' ||
      5           sa.suffix AS advisor
      6  FROM   submission_advisors sa,
      7           submission s,
      8           advisor_type a
      9  WHERE  CONTAINS (sa.any_column, :search_string || ' WITHIN advisor_type_name') > 0
    10  AND    sa.advisor_type_id = a.advisor_type_id
    11  AND    sa.submission_id = s.submission_id
    12  /
    SUBMISSION_NAME                               ADVISOR
    Some Research Paper                           Department Co-Chair John Doe PhD
    Some Research Paper                           Department Co-Chair Jane Doe PhD
    2 rows selected.
    SCOTT@orcl_11gR2> EXEC :search_string := 'phd'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> SELECT s.submission_name,
      2           a.advisor_type_name || ' ' ||
      3           sa.first_name || ' ' ||
      4           sa.last_name || ' ' ||
      5           sa.suffix AS advisor
      6  FROM   submission_advisors sa,
      7           submission s,
      8           advisor_type a
      9  WHERE  CONTAINS (sa.any_column, :search_string || ' WITHIN suffix') > 0
    10  AND    sa.advisor_type_id = a.advisor_type_id
    11  AND    sa.submission_id = s.submission_id
    12  /
    SUBMISSION_NAME                               ADVISOR
    Some Research Paper                           Department Co-Chair John Doe PhD
    Some Research Paper                           Department Co-Chair Jane Doe PhD
    2 rows selected.
    SCOTT@orcl_11gR2> EXEC :search_string := 'weather'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> SELECT s.submission_name,
      2           a.advisor_type_name || ' ' ||
      3           sa.first_name || ' ' ||
      4           sa.last_name || ' ' ||
      5           sa.suffix AS advisor
      6  FROM   submission_advisors sa,
      7           submission s,
      8           advisor_type a
      9  WHERE  CONTAINS (sa.any_column, :search_string || ' WITHIN submission_name') > 0
    10  AND    sa.advisor_type_id = a.advisor_type_id
    11  AND    sa.submission_id = s.submission_id
    12  /
    SUBMISSION_NAME                               ADVISOR
    Thesis on 17th Century Weather Patterns       Professor Johan Smith
    Thesis on 17th Century Weather Patterns       Associate Professor Magnus Jackson MS
    2 rows selected.

  • Remote key for lookup tables

    Hi,
    I need some advice on remote keys for lookup tables.
    We have loaded lookup data from several client systems into the MDM repository. Each of the client systems can have differences in the lookup values. What we need to do is enable the key mappings so that the Syndicator knows which value belongs to which system.
    The tricky part is: we haven't managed to send out the values based on the remote keys. We do not want to send the lookup tables themselves, but the actual main table records. All lookup data should be checked at the point of syndication, and only the used lookup values that originally came from one system should be sent to that particular system. Otherwise the tag should be blank.
    Is this the right approach to handle this demand, or is there a different way to take care of it? What would be the right settings in the Syndicator?
    Help will be rewarded.
    Thank you very much
    best regards
    Nicolas

    Hi Andreas,
    that is correct. Let's take two examples:
    1) regions
    2) Sales Area data (qualified lookup data)
    Both tables are filled and loaded directly from the R/3 systems, so you would already know which value belongs to which system.
    The problem I have is that we will not map the remote key from the main table, because it will be blank for newly created master data (centralization scenario). Therefore we cannot map the remote key from the attached lookup tables, can we?
    The remote key will only work for lookup tables if the remote key of the actual master data is mapped. Since we don't have the remote key (the local customer ID from R/3) in MDM, and since we do not create it at the point of syndication... what would the SAP standard scenario look like for that?
    This is nothing extraordinary, it's just a standard centralization scenario.
    Please advice.
    Thanks alot
    best regards
    Nicolas

  • Limiting the values in a lookup table

    Hello everyone.
    I was wondering if it is possible to limit the selectable values in a lookup table based on certain criteria, foremost the content of a separate field.
    Example:
    A product has a measurement key that determines which sizes are valid for a given product.
    Is it possible for MDM to read this key and filter the values of the table holding all the values for all keys?
    I hope it's somewhat clear what I'm trying to do.
    Best regards,
    Anders

    Hello Anders:
    I believe that what you should do is place all the product types into Categories. There, you can give different attributes to each product and therefore limit what the user can choose as its values.
    For instance, you have two products A and B. Each one would have a separate "Measure" field.
    Create Categories:
    Cat_A
    Cat_B
    with separate attributes:
    Cat_A
      |____ Measure_A
    Cat_B
      |____ Measure_B
    And in each Measure Attribute (not field) you can specify the correct values for each category (i.e., product type).
    When the user chooses Cat_A as the product type, Measure_A will appear with its values. The same will happen for Cat_B.
    I hope that helps
    Regards
    Alejandro
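
    Outside of MDM, the filtering Alejandro describes boils down to keying the permissible values by the measurement key and offering only that subset. A generic Java sketch, with hypothetical keys and sizes:
    import java.util.List;
    import java.util.Map;

    public class SizeFilter {
        // Valid sizes per measurement key; in MDM this lives in the category-specific attributes.
        static final Map<String, List<String>> SIZES_BY_KEY = Map.of(
                "CAT_A", List.of("36", "38", "40"),
                "CAT_B", List.of("S", "M", "L"));

        // Only the sizes registered for the product's key are selectable.
        static List<String> selectableSizes(String measurementKey) {
            return SIZES_BY_KEY.getOrDefault(measurementKey, List.of());
        }
    }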

  • Proper use of a Lookup table and adaptations for NET

    Hello,
    I need to create a few lookup tables, and I often see the following:
    create table Languages (
        Id int identity not null primary key,
        Code nvarchar(4) not null,
        Description nvarchar(120) not null
    );
    create table Posts (
        Id int identity not null primary key,
        LanguageId int not null,
        Title nvarchar(400) not null
    );
    insert into Languages (Id, Code, Description)
    values (1, 'en', 'English');
    This way I am localizing Posts with a language id ...
    IMHO, this is not the best scheme for the Languages table, because in a lookup table the PK should be meaningful, right?
    So instead I would use the following:
    create table Languages (
        Code nvarchar(4) not null primary key,
        Description nvarchar(120) not null
    );
    create table Posts (
        Id int identity not null primary key,
        LanguageCode nvarchar(4) not null,
        Title nvarchar(400) not null
    );
    insert into Languages (Code, Description)
    values ('en', 'English');
    The .NET applications usually use the language code, so this way I can get a Post in English without using a join.
    And with this approach I am also maintaining the database's data integrity ...
    This could be applied to a Genders table with codes "M" and "F", a Countries table, a transaction types table (should I?), ...
    However, I think it is common to use an int as the PK in lookup tables because it is easier to map to enums.
    And now it is even possible to map to Flags enums and so have a many-to-many relationship in an enum.
    That helps in .NET code but in fact has limitations: a Languages table could never be mapped to a Flags enum ...
    ... A Flags enum can't have more than 64 items (Int64) because the values must be powers of two.
    A SOLUTION
    I decided to find an approach that enforces database data integrity and still makes it possible to use enums, so I tried:
    create table Languages (
        Code nvarchar(4) not null primary key,
        [Key] int not null,
        Description nvarchar(120) not null
    );
    create table Posts (
        Id int identity not null primary key,
        LanguageCode nvarchar(4) not null,
        Title nvarchar(400) not null
    );
    insert into Languages (Code, [Key], Description)
    values ('en', 1, 'English');
    With this approach I have a meaningful language code, I avoid joins, and I can create an enum by parsing the Key:
    public enum LanguageEnum {
        [Code("en")]
        English = 1
    }
    I can even preserve the code in an attribute. Or I can switch the code and description ...
    What about Flags enums? Well, I will not have Flags enums, but I can have List<LanguageEnum> ...
    And when using List<LanguageEnum> I do not have the limitation of 64 items ...
    To me all this makes sense, but would I apply it to a Roles table, or a ProductsCategory table?
    In my opinion I would apply it only to tables that will rarely change over time ... So:
        Languages, Countries, Genders, ... Any other example?
    About the following I am not sure (They are intrinsic to the application):
       PaymentsTypes, UserRoles
    And to these I wouldn't apply (They can be managed by a CMS):
       ProductsCategories, ProductsColors
    What do you think about my approach for Lookup tables?
    Thank You,
    Miguel

    >>IMHO, this is not the best scheme for Languages table because in a Lookup table the PK should be meaningful, right?<<
    Not necessarily. The choice to use, or not to use, a surrogate key in a table is a preference, not a rule. There are pros and cons to either method, but I tend to agree with you. When the values are set as programming terms, I usually use a textual value for the key. But this is nothing to get hung up over.
    Bear in mind, however, that this:
        create table Languages (
            Id int identity not null primary key,
            Code nvarchar(4) not null,
            Description nvarchar(120) not null
        );
    is not equivalent to
        create table Languages (
            Code nvarchar(4) not null primary key,
            Description nvarchar(120) not null
        );
    The first table needs a UNIQUE constraint on Code to make these solutions semantically the same. The first table could have the value 'Klingon' in it 20 times while the second only once.
    >>However I think it is common to use int as PK in lookup tables because it is easier to map to ENUMS.<<
    This was going to be my next point. For that case, I would only change the first table to not have an identity-assigned key value, as it would be easier to manage at the same time and in the same manner as the enum.
    >>A Languages table could never be mapped to a Flags Enum ...<<
    You could, but I would highly suggest avoiding any values encoded in a bitwise pattern in SQL as much as possible. Rule #1 (First Normal Form) is partially to have one value per column. It is how the optimizer thinks, and how it works best.
    My rule of thumb for lookup tables (or, as I prefer, "domain" tables, as really all tables are there to look up values :)) is that all data should be self-explanatory in the database, through data if at all possible. So if you have a color column, and it contains the color "Vermillion", and all you will ever need is the name, and you feel it is good enough to manage in the UI, then great. But bear in mind, the beauty of a table that is there for domain purposes is that you can then store the R, G, and B attributes of the vermillion color (254, 73, 2 respectively, based on http://www.colorcombos.com/colors/FE4902) and use them in coding. Alternate names for the color could be introduced, etc. And if UserRoles are 1, 2, 3, and 42 (I have seen worse), then definitely add columns. I think you are basically on the right track.
    Louis
    Without good requirements, my advice is only guesses. Please don't hold it against me if my answer answers my interpretation of your questions.
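
    For what it's worth, the key-to-enum mapping discussed above looks much the same in Java (the thread is .NET, but the shape carries over); a hypothetical sketch with the lookup code carried as plain data rather than a bit pattern:
    public enum Language {
        ENGLISH(1, "en"),
        FRENCH(2, "fr");

        private final int key;
        private final String code;

        Language(int key, String code) {
            this.key = key;
            this.code = code;
        }

        // Resolve the enum from the integer Key stored in the lookup table.
        static Language fromKey(int key) {
            for (Language l : values()) {
                if (l.key == key) return l;
            }
            throw new IllegalArgumentException("Unknown language key: " + key);
        }

        String code() { return code; }
    }
    A List<Language> then plays the role of a Flags enum without the 64-item ceiling, as Miguel notes.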

  • How do I handle values in source that are not in "lookup" table?

    hi there,
    I have 3 tables (all Oracle technology):
    1) Source table: CALLS with columns MSISDN, TRANS_DATE, TYPE, COST, DURATION
    2) Lookup table: SUBSCRIBERS with columns SUBSCRIBERID, MSISDN, IMSI
    3) Target table: FACT_CALLS with columns SUBSCRIBERID (not null), CALLDATE, CALLTYPE, CALLDURATION, CALLCHARGE.
    Join between source and lookup table:
    NVL(CALLS.MSISDN, 0) = SUBSCRIBERS.MSISDN
    Mappings on target:
    FACT_CALLS.SUBSCRIBERID --> SUBSCRIBERS.SUBSCRIBERID
    FACT_CALLS.CALLDATE --> CALLS.TRANS_DATE
    FACT_CALLS.CALLTYPE --> CALLS.TYPE
    FACT_CALLS.CALLDURATION --> CALLS.DURATION
    FACT_CALLS.CALLCHARGE --> CALLS.CHARGE
    I have a dummy record in SUBSCRIBERS with MSISDN = 0, SUBSCRIBERID = 0 and IMSI = 0, to be used if MSISDN in the source table is null or does not exist in the lookup table.
    The NVL in the join takes care of the case when the source MSISDN is null, and this works fine, i.e. it returns 0 for SUBSCRIBERID.
    The problem occurs when the source MSISDN does have a value but that value does not exist in the lookup table; such records are rejected.
    How do I implement a solution for this?

    hi Guru,
    Yes I have 2 source tables and a target.
    1) I created a join by dragging MSISDN on CALLS to MSISDN on SUBSCRIBERS, then added the NVL part to get NVL(CALLS.MSISDN, 0) = SUBSCRIBERS.MSISDN
    2) the target does not have MSISDN. Using the join the target SUBSCRIBERID column gets populated with the correct value from the lookup table.
    i.e. FACT_CALLS.SUBSCRIBERID = (select SUBSCRIBERID from SUBSCRIBERS where SUBSCRIBERS.MSISDN = CALLS.MSISDN)
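
    Conceptually, the fix being asked for is to make the lookup total: every source row resolves to some SUBSCRIBERID, with the dummy row as the fallback for unmatched values. In SQL terms that is typically an outer join with an NVL on the looked-up key; as a procedural Java sketch (types and names are illustrative):
    import java.util.Map;

    public class SubscriberLookup {
        static final long DUMMY_SUBSCRIBER_ID = 0L;

        // subscribersByMsisdn is the SUBSCRIBERS table loaded as msisdn -> subscriberId.
        static long resolve(Map<Long, Long> subscribersByMsisdn, Long msisdn) {
            if (msisdn == null) {
                return DUMMY_SUBSCRIBER_ID; // NULL source MSISDN, as the NVL already handles
            }
            // Unmatched MSISDNs fall back to the dummy subscriber instead of being rejected.
            return subscribersByMsisdn.getOrDefault(msisdn, DUMMY_SUBSCRIBER_ID);
        }
    }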

  • Tab key and lookup table

    Hi,
    How do I use the 'Tab' key to open a lookup table, e.g. BP Code, Name?
    Alternatively, how do I use the 'Tab' key to call a formatted search? I hope to follow the SBO standard and do away with Shift-F2.
    I am using VB and 2004B. Your input will be appreciated.

    I think you can mimic every user action with the UI API.
    Therefore, e.g. in the Sales Order form, you activate the BPCode item (.Click) and hit Tab (.SendKeys(vbTab)).
    You could also try clicking the icon item ("67") next to BPCode.
    If you want to assign a Choose From List (CFL) to an item of your choice using the UI API, then you should wait for 2005.
    Shift-F2 is the standard in the SBO UI for launching FSs; why would you want to disable it?
    If you want tabbing to kick off an existing FS, then you should capture the tabbing event (et_KEY_DOWN) in your item and then use SendKeys to send Shift-F2 to the keyboard.
    HTH
    Juha

  • LOOKUP TABLES

    Should both the source and lookup tables be in the same schema?
    I chose the source table from one schema and the lookup table from another schema, so when I try to save it, I get an error saying
    "In order to create a lookup, you must be sure that both tables are on the same connection and the source technology has the ability to use a lookup", and I am not able to save the lookup.
    Edited by: 851305 on May 26, 2011 12:21 AM

    Hi,
    Please execute the lookup join condition in the staging area.
    Thanks

  • Qualified lookup table

    Hi Guys,
    Can you tell me the steps to be followed for importing and syndicating qualified lookup table .
    Thanks,
    MS Reddy

    Hi,
    1. Import all the lookup tables used in the qualified table.
    2. Import qualified table data along with main table data only. For more than one qualified table record, you should have a repeated main table record.
    E.g.
    Qualified Table - Price Information
    Lower Bound
    Purchasing Info Record ID
    Purchasing Organization
    Amount
    Currency
    The first three fields are the non-qualifiers, amongst which Purchasing Organization is the lookup field. So you need to populate this field first.
    Now your main table record is:
    Supplier   Supplier Part Number   Short Description   Product ID   Service Item   Lower Bound   Purchasing Info Record ID   Purchasing Organization   Currency       Amount
    AGENCY01   1000                   metal processing    IBG-100      False          1             5300000018                  Mumbai Purchase Org       Indian Rupee   100.00
    1. Map all the main table fields
    2. Identify the display fields of your qualified table. If your qualified table has more than one display field, map each one to the field marked as D in the destination structure; e.g., in the table above all three non-qualifiers are display fields, so map Lower Bound to the D Lower Bound field, and similarly map the other display fields.
    3. Select all the display fields on the source side, right-click and select Create Compound Field. This will combine your input fields and automatically map them to the qualified field.
    4. Map the qualifiers.
    5. Select the unique fields in the Matching Fields tab and execute the import.
    When you syndicate data containing qualified links, the number of output records created depends on the number of qualified links.
    Regards,
    Jitesh Talreja

Maybe you are looking for

  • Submitting concurrent request from back end.

    Hi guys, I am submitting the concurrent request using fnd_request.submit_request, passing the start time as SYSDATE. When I submit the program in the scheduling section of the concurrent program, the requested start date defaults to the current date but hours and min

  • How to partition on string value in 8i

    Hi all, I am on 8.1.7 and want to use partitioning. I have a column holding country_id (like US, GB, JP etc.). I want to partition the table on the basis of country ids. How can I do that? I know, in 9i I could have done list partitioning. Any expe

  • Calendar bookings are 1 hr later than we sent

    Hello, we have a problem: Exchange 2010, Outlook 2010. I got the complaint below from users. Please help me with this. Some people are saying the times on their calendar bookings are 1 hr later than I sent out and also what's booked on my calendar

  • Wrong computer name

    Help!!! I accidentally typed in the wrong name I wanted to give my Mac, so it is called drafterkif's MacBook, not drafterkid's MacBook. How do you change it????

  • No CD drive!

    So I recently took my G72 laptop in for some repairs, as some devices, such as a 3G internet device and my iPod, would install but would then not show up as being connected to the computer. I was having driver issues, since having to install a new h