Answers - Columns and aggregation of measures

Can I bring in data at a line-item (Order_ID) level in Answers, do some calculations around a metric returned, and then hide/remove this column in a table so that the metrics are aggregated up to the other level - say, if I brought in Region? I have some bucketing to do around an Order metric but don't want to present this granularity in the table.
Thanks

Yvan.
In the end I settled for aggregating tax-year data (for fifteen years), as this seems to be the main reason for being retrospective in my accounting. I had to reference this manually in the ledger.
I then used a formula similar to this in the crosstab:
=SUMIFS(Register :: $gbpe,Register :: $Field,"=Income",Register :: $Type,"=Salary",Register :: $Tax,$A11), where $A11 refers to the crosstab year row in question.
These formulae create a noticeable computing load, but I am pleased to be able to do the analysis, as I was reliant on pivot tables in Excel and wondered how I would cope in Numbers.
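(For cross-checking the logic, the same aggregation expressed in SQL - a hedged sketch only; the table and column names below are assumptions inferred from the formula, not actual Numbers objects:)
-- Hedged sketch of what the SUMIFS above computes, per tax year.
-- REGISTER, AMOUNT_GBP, FIELD, INCOME_TYPE and TAX_YEAR are assumed names.
SELECT tax_year,
       SUM(amount_gbp) AS salary_income
FROM   register
WHERE  field = 'Income'
AND    income_type = 'Salary'
GROUP  BY tax_year;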
Thanks for your help

Similar Messages

  • Cube operator and aggregation of measures on existing records

    Hi there,
    I'm using a cube operator with loading type LOAD in order to perform a merge on the fact table.
    We have the following situation:
    A record already exists on the fact table for the customer and product with qty 8.
    The incoming record has qty 2, so I tried using the cube operator, hoping that because the aggregation specified on the qty measure was SUM and the cube was set to solve, the new qty would be 10.
    However, looking at the SQL generated, the record simply gets updated with the new qty rather than adding the new qty to the existing qty.
    I can achieve our aim by simply reading any existing record and adding the new record's qty to the existing record's qty, but I was hoping the cube operator would do this for me.
    Has anybody achieved anything similar using simply the cube operator?
    Many Thanks

    Do you mean that you want to load the data into the AW using the AVG function instead of SUM? If this is true, are you planning to use AVG as the aggregation operator in the AW as well? Will this give the answer you want? The code currently defaults to SUM for the load even if you aggregate the cube using AVG, since AVG of AVG is not usually what people want. If you want to do it anyway, it is possible if you hand-edit the XML to add an attribute named AggregationMethod to the CubeMap. E.g.
        <CubeMap
          Name="MAP1"
          Query="SALES_FACT"
          AggregationMethod="AVG">
    But the simpler way to do it is to define a SQL view that aggregates to the load level using AVG and then map the cube to the view.
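    A minimal sketch of that view-based approach (SALES_FACT comes from the CubeMap above; the view name and the CUSTOMER_ID/PRODUCT_ID/QTY columns are assumptions, not from the original post):
        -- Hedged sketch: pre-aggregate to the cube's load level with AVG,
        -- then map the cube to this view instead of SALES_FACT directly.
        CREATE OR REPLACE VIEW sales_fact_avg_v AS
        SELECT customer_id,              -- assumed load-level dimension keys
               product_id,
               AVG(qty) AS qty           -- assumed measure column
        FROM   sales_fact
        GROUP  BY customer_id, product_id;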

  • Report with non aggregated and aggregated columns from different facts.

    Hi,
    Our requirement is as follows:
    1) We have two dimension tables and two fact tables (Fact1 and Fact2) in the physical layer.
    2) In the BMM we have made hierarchies for both dimensions, and the dimensions are joined to both logical fact tables.
    3) In Fact1 we have three measures, of which two have the aggregation rule SUM and one is non-aggregated (it contains character data).
    4) Fact2 has two measures, both with aggregation SUM.
    5) Now here the problem arises: we want to build a report with some columns from the dimensions, the non-aggregated column from Fact1, and an aggregated column from Fact2.
    How can we resolve this?
    Regards,
    Ankit

    As suggested, you really want to move your non-aggregated fact attributes to a logical dimension (using the same physical table as the logical fact). Map this in the BMM layer as a snowflake and place a hierarchy on this dimension with (at minimum) Total -> Detail levels. Then, on the other fact table you want to include in the report, set the content level of your other fact measures to the 'Total' level of your new logical dimension, and it will allow them to be present in the same report.

  • How to identify the source column and source table for a measure

    Does anyone have a query that I can use to positively identify the source column and source table for a cube measure in an SSAS cube?  Visual Studio shows ID, Name, and Source, but it is nearly worthless in a large cube and database.
    Also - the same for a dimension would be great.
    If no query exists for this, can someone please explain how to find the source column/table for a measure and for a dimension?
    Thanks.

    DMVs don’t expose the DataSourceView content. AMO is much better suited for object model operations like
    this than the DMVs. PowerShell is also sometimes an option, but in this case C# code would be much easier because analyzing the contents of the DataSourceView is much easier using the .Net DataSet class.
    Hope this helps.
    Reeves
    Denver, CO

  • OEE Report: KPI formulas and aggregation rules?

    Hello,
    is there documentation about the formulas and aggregation rules used in the OEE Report? Customers will only trust the KPIs in this report if they can verify them.
    Formulas & Aggregation
    How are the KPIs calculated?
    How are the KPIs aggregated, if a work center or resource type is selected instead of a single resource?
    Data & Variables
    Which data is used? Where do the variables used in the formulas come from? E.g. which tables and columns or which SQL SELECT statements are used?
    There is just very little information about the OEE Report in the online help or in the "SAP ME for Discrete Industries" document. And the MII "OEE Report Action", where obviously all the magic happens, is a black box.
    Best Regards,
    Martin

    Hi Daniele,
    Is Saichand correct in assuming that you have SUM setup in the BMM layer? If so, follow Saichand's recommendation.
    If you are specifying the aggregation in the Answer Request itself, then try this.
    Let's say this is your situation:
    SELECT DIMENSION, MEASURE_1
    FROM CRITERIA1
    UNION ALL
    SELECT DIMENSION, MEASURE_2
    FROM CRITERIA2
    UNION ALL
    SELECT DIMENSION, MEASURE_3
    FROM CRITERIA3
    UNION ALL
    SELECT DIMENSION, MEASURE_4
    FROM CRITERIA4
    Change it to this:
    SELECT 'CRITERIA1' CRITERIA, DIMENSION, MEASURE_1 MEASURE
    FROM CRITERIA1
    UNION ALL
    SELECT 'CRITERIA2' CRITERIA, DIMENSION, MEASURE_2 MEASURE
    FROM CRITERIA2
    UNION ALL
    SELECT 'CRITERIA3' CRITERIA, DIMENSION, MEASURE_3 MEASURE
    FROM CRITERIA3
    UNION ALL
    SELECT 'CRITERIA4' CRITERIA, DIMENSION, MEASURE_4 MEASURE
    FROM CRITERIA4
    Then in the combined criteria, add two new measures:
    new_measure1 = SUM(CASE WHEN CRITERIA IN ('CRITERIA1','CRITERIA2') THEN MEASURE END)
    new_measure2 = AVG(CASE WHEN CRITERIA IN ('CRITERIA3','CRITERIA4') THEN MEASURE END)
    NOTE: The syntax for the new_measure* columns may need to reference the saw naming convention, i.e. saw0 = CRITERIA, saw1 = DIMENSION, saw2 = MEASURE.
    That should work out for you.
    Good luck and let me know if this helps.
    -Joe

  • How to put the proper header at each column in write to measurement file (.lvm) ?

    Hi,
    I would like to know one thing about Write to Measurement File. Can I put a proper header on each column in Write to Measurement File (.lvm), and how can I do it? Could you show me a way to do it?
    I am looking forward to your kind reply.
    Thank you so much.
    best regard,
    roxy

    Hi Mike,
    Thank you very much for your help. It works.
    By the way, I would like to ask you about setting the time interval. In the picture, the X_Value column is the time column.
    When I use Write to Measurement File, there is an X column that shows the time inside the Excel file. In this column, I would like to set the time interval to every 210 ms. The data may come in every millisecond, but I just want to record the data every 210 ms, for example 210 ms, 420 ms, 630 ms, etc. How can I set it?
    One more thing: can I do it in a graph as well?
    I am looking forward to your kind reply.
    Thank you for your time.
    Best Regard,
    Roxy

  • Want to Change the Column Indent Color of Measure Label in a Pivot View

    Hello,
    I have used the measure labels in the "ROWS" of a pivot table, and there is a column header or indent on top of these measure labels which is BLUE by default. My requirement is to change that to a different color and, if possible, put some text there.
    (To keep it simple: I see a .PTIndent cell on top of the measure label which I want to edit from the dashboard or reporting end, without making any changes to the CSS entry files, because such a change would affect all the tables already in the environment.)
    Thanks
    Raghu
    Edited by: user13136841 on Nov 22, 2010 11:56 AM

    Let's try this step by step:
    1) In your report add a Static Text view.
    2) Add the formula I described:
    <style>.PTIndentCell {background-color: #000000;}</style>
    3) Check the "Contains HTML Markup" checkbox.
    4) View the Compound Layout or the Dashboard (viewing the Pivot Table view alone will NOT show the change).
    If you can't see anything, the next thing to check is whether you can use HTML. Go to a CHAR column and click on the Data Format tab. What choices do you see if you attempt to override the format? If you see HTML, then okay. If you only see "Plain Text," then HTML is not enabled. The default for OBIEE is to not allow HTML. You will need to change your instanceconfig.xml file and set HardenXSS to false. See the link below:
    http://download.oracle.com/docs/cd/E12096_01/books/AnyWebAdm/AnyWebAdm_CustAnyWeb23.html

  • Lock columns and rows

    Hi, I am trying to create a form/spreadsheet for shift bosses at work. I want them to be able to input certain data, but I also want to lock certain columns and rows, mostly so the formulas aren't tampered with. Is this possible?
    Also, is it possible to save to a database?
    Any help would be much appreciated.

    dogg,
    Create separate Input and Calculation tables. Lock the calculation table (Arrange > Lock), leave the input table accessible. If you don't want the calculations to be viewed during input, you can change the colors of all lines, fills and text to White before locking, and put them on a different Sheet for good measure.
    Another technique that plays on Numbers graphics advantage is to cover areas of your sheet that you wish to protect with a shape. Create a Shape and place it to cover your sensitive equations. Then set the Shape's Fill Color to 0% Opacity and lock the shape. Now you have what I call the salad bar sneeze guard effect for spreadsheets. You can defeat it if you try hard, but it works most of the time.
    Jerry
    Message was edited by: Jerrold Green1

  • The key columns of the country measure group attribute do not match in either number or data types to the key columns of the source attribute

    I have a country dimension which is used in 5 cubes. Now I want to change the 'key column' property of the country attribute. Whenever I do this, I get an error: "The key columns of the country measure group attribute do not match in either number or data types to the key columns of the source attribute". I don't understand what this error is about.
    Can someone please help? Thanks in advance.
    -Regards,
    Raj Patil

    Sounds like you need to check the Dimension Usage tab to verify the relationships between the dimensions and facts on the measure group.
    Hi Talktorajpatil,
    As Jon said, you can verify the relationships between the dimensions and facts on the measure group on the Dimension Usage tab. Use this section to define how you "join" your measure groups to your dimensions. There may be a Dimension-Measure Group relationship that is defined using the wrong attributes, and you'll need to select the correct attributes to link the dimensions to the measure groups. Here are some similar threads for your reference.
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/74203b66-8a71-4681-8e47-8f99cce87b3d/error-on-the-measure-group-which-do-not-match-the-data-type-of-the-key-column?forum=sqlanalysisservices
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/2421058d-fd4a-44b8-8c7c-b0b349bbef2d/measure-group-attribute-key-column-does-not-match-source-attribute?forum=sqlanalysisservices
    Hope this helps.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Need query to list columns and data

    Hi all,
    I have a requirement to compare two rows with the same empno and list the columns and data for the columns that differ.
    SQL> select * from emp1 where empno=7369;
         EMPNO ENAME      JOB              MGR HIREDATE           SAL       COMM     DEPTNO
          7369 SMITH      CLERK           7902 17-DEC-1980       2000                    20
          7369 SMITH      PRO             7788 17-DEC-1980       2100                    20
    If we look at the above data, the columns MGR, SAL and JOB have different values but the same employee number.
    Now I want to display the columns which have different values for a particular employee that has duplicate rows for its employee number.
    Note: my actual table has around 300 columns; the above is just a sample for understanding.
    thanks
    krishna.

    Hi,
    If you want a variable number of columns, depending on the data found, then you'll have to use dynamic SQL.
    I suggest you don't do that. I recommend string aggregation, that is, concatenating the variable number of items into one big string column, formatted so that it looks like different columns. That is, you might get output like this:
    EMPNO MISMATCHED_DATA
          JOB        MGR        SAL
     7369 CLERK      7902       2000
     7369 PRO        7788       2100
    Notice that the output consists of 3 rows and 2 columns.
    The 1st row displayed serves as a header. (The actual header has the actual, generic column name, MISMATCHED_DATA.)
    Here's one way to get output like that:
    VARIABLE  data_width     NUMBER
    EXEC      :data_width := 10;
    WITH unpivoted_data AS
    (
         SELECT  e.empno
         ,       DENSE_RANK () OVER ( PARTITION BY  e.empno
                                      ORDER BY      e.ROWID
                                    )                         AS r_num
         ,       c.column_name
         ,       c.column_id
         ,       RPAD ( COALESCE ( CASE  c.column_name
                                       WHEN  'ENAME'     THEN  e.ename
                                       WHEN  'JOB'       THEN  e.job
                                       WHEN  'MGR'       THEN  TO_CHAR (e.mgr)
                                       WHEN  'HIREDATE'  THEN  TO_CHAR (e.hiredate, 'DD/MM/YYYY HH:MI:SS AM')
                                       -- ... more columns in your real problem
                                   END
                                 , CASE  c.column_name
                                       WHEN  'SAL'       THEN  TO_CHAR (e.sal)
                                       WHEN  'COMM'      THEN  TO_CHAR (e.comm)
                                       WHEN  'DEPTNO'    THEN  TO_CHAR (e.deptno)
                                   END
                                 )
                      , :data_width       -- maximum column width
                      )                                       AS d
         FROM    emp1              e
         JOIN    user_tab_columns  c  ON  c.table_name = 'EMP1'
         WHERE   e.empno  IN (7369)       -- any 1 or more empnos
    )
    ,    got_val_cnt AS
    (
         SELECT  u.*
         ,       COUNT (DISTINCT d) OVER ( PARTITION BY  empno
                                           ,             column_name
                                         )                    AS val_cnt
         FROM    unpivoted_data  u
    )
    ,    discrepancies AS
    (
         SELECT  v.*
         FROM    got_val_cnt  v
         WHERE   val_cnt > 1
    )
    ,    relevant_columns AS
    (
         SELECT DISTINCT  column_name
         ,                DENSE_RANK () OVER (ORDER BY column_id)  AS c_num
         FROM    discrepancies
    )
    SELECT  d.empno
    ,       REPLACE ( SYS_CONNECT_BY_PATH ( NVL ( d.d
                                                , RPAD (' ', :data_width)
                                                )
                                          , '~'
                                          )
                    , '~'
                    )                                         AS mismatched_data
    FROM              discrepancies     d  PARTITION BY ( d.empno
                                                        , d.r_num
                                                        )
    RIGHT OUTER JOIN  relevant_columns  r  ON  r.column_name = d.column_name
    WHERE   CONNECT_BY_ISLEAF = 1
    START WITH    r.c_num = 1
    CONNECT BY    r.c_num = PRIOR r.c_num + 1
          AND     d.empno = PRIOR d.empno
          AND     d.r_num = PRIOR d.r_num
        UNION ALL
    SELECT  NULL                                              AS empno
    ,       REPLACE ( SYS_CONNECT_BY_PATH ( RPAD (column_name, :data_width)
                                          , '~'
                                          )
                    , '~'
                    )                                         AS mismatched_data
    FROM    relevant_columns
    WHERE   CONNECT_BY_ISLEAF = 1
    START WITH    c_num = 1
    CONNECT BY    c_num = PRIOR c_num + 1
    ORDER BY  empno  NULLS FIRST
    ;
    You can specify any number of empnos to be included in the report.
    You really want a CASE expression that has a WHEN clause for every column in your table, but CASE expressions have a maximum of (I believe) 128 WHEN clauses. If you really have 300 columns, you'll have to break that down into smaller groups. In the query above, I used 2 CASE expressions in a COALESCE function, where each of the CASE expressions had no more than 4 WHEN clauses. You may have to use 3 CASE expressions, each with 100 WHEN clauses.
    This solution does not assume there are exactly 2 rows per empno; there can be any number.
    If all the rows for an empno are completely identical, that empno will not be included in the output. This includes the situation where a given empno is unique.
    Again, you can get separate columns for each mismatched item, using dynamic SQL, but it's even more convoluted than the query above.

  • ROWS to COLUMNS and COLUMNS to ROWS (Earlier Docs/Threads Didn't help)

    Hi Guys,
    I have gone through the earlier threads and documents on this. But didn't help.
    In a lot of cases they have used the DECODE statement. However, that works only if you know the number of columns and rows... what if we are not sure of the number of records? How does it work then?
    Please direct me to any thread or any other documents.
    -Sandeep

    Pivoting in SQL
    a. 10g Model Clause
    SQL> create table test(id varchar2(2), des varchar2(4), t number);
    Table created
    SQL> INSERT INTO test values('A','a1',12);
    1 row inserted
    SQL> INSERT INTO test values('A','a2',3);
    1 row inserted
    SQL> INSERT INTO test values('A','a3',1);
    1 row inserted
    SQL> INSERT INTO test values('B','a1',10);
    1 row inserted
    SQL> INSERT INTO test values('B','a2',23);
    1 row inserted
    SQL> INSERT INTO test values('C','a3',45);
    1 row inserted
    SQL> commit;
    Commit complete
    SQL> SELECT * FROM test;
    ID DES T
    A a1 12
    A a2 3
    A a3 1
    B a1 10
    B a2 23
    C a3 45
    6 rows selected
    SQL> select distinct i, A1, A2, A3
    2 from test c
    3 model
    4 ignore nav
    5 dimension by(c.id i,c.des d)
    6 measures(c.t t, 0 A1, 0 A2, 0 A3)
    7 rules(
    8 A1[any,any] = t[cv(i),d = 'a1'],
    9 A2[any,any] = t[cv(i),d = 'a2'],
    10 A3[any,any] = t[cv(i),d = 'a3']
    11 );
    I A1 A2 A3
    C 0 0 45
    B 10 23 0
    A 12 3 1
    SQL> select distinct d, A, B, C
    2 from test c
    3 model
    4 ignore nav
    5 dimension by(c.id i,c.des d)
    6 measures(c.t t, 0 A, 0 B, 0 C)
    7 rules(
    8 A[any,any] = t[i = 'A', cv(d)],
    9 B[any,any] = t[i = 'B', cv(d)],
    10 C[any,any] = t[i = 'C', cv(d)]
    11 );
    D A B C
    a1 12 10 0
    a3 1 0 45
    a2 3 23 0
    b. Pivoting INSERT example;
    INSERT ALL
    INTO sales_info VALUES (employee_id,week_id,sales_MON)
    INTO sales_info VALUES (employee_id,week_id,sales_TUE)
    INTO sales_info VALUES (employee_id,week_id,sales_WED)
    INTO sales_info VALUES (employee_id,week_id,sales_THUR)
    INTO sales_info VALUES (employee_id,week_id, sales_FRI)
    SELECT EMPLOYEE_ID, week_id, sales_MON, sales_TUE,sales_WED, sales_THUR,sales_FRI
    FROM sales_source_data;
    c. 11g SQL PIVOT keyword (see the short sketch after this list)
    http://oraclebizint.wordpress.com/2007/09/05/oracle-11g-pivot-and-unpivot/
    d. reference for others :)
    http://laurentschneider.blogspot.com/2005/08/pivot-table.html
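    For reference, a minimal sketch of the 11g PIVOT syntax against the same TEST table created above (requires 11g or later; this sketch is not from the original post):
    -- Hedged sketch: pivot DES values into fixed columns, summing T.
    SELECT *
    FROM   test
    PIVOT  ( SUM(t)
             FOR des IN ('a1' AS a1, 'a2' AS a2, 'a3' AS a3)
           );
    This returns one row per ID with columns A1, A2 and A3, like the first MODEL example, except that missing combinations come back as NULL rather than 0.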

  • Export file - fixed columns and remove dimensions

    Hello Experts
    I want to use the standard export package and get the dimensions fixed in specific columns, and also remove some dimensions.
    The problem is that I always get the dimensions in random columns, and when I am able to remove dimensions, the dimensions are removed randomly. Please see the *MAPPING section and the result below. Does anyone know how to do this, or have an example? I have used the standard example files but they have not helped....
    *OPTIONS
    FORMAT = DELIMITED
    HEADER = YES
    DELIMITER=
    VALIDATERECORDS=NO
    ROUNDAMOUNT = 7
    OUTPUTHEADER=
    OUTPUTDELIMITER=
    SPECIFICMAPPING=YES
    *MAPPING
    ENTITY=*COL(1)
    TIME=*COL(2)
    ACCOUNT=*COL(3)
    RPTCURRENCY=*COL(4)
    AMOUNT=*COL(5)
    ACCOUNT,ENTITY,RPTCURRENCY,TIME,AMOUNT
    NON_FLOW,ADT5_E,ACTUAL,ANA_TONS,TOTALADJ
    NON_FLOW,568U_E,ACTUAL,ANA_TONS,TOTALADJ
    Best regards
    Jonas

    Given the nature of OLAP and fact tables, I do not believe that you are able to disassociate a dimension from the export process. So I don't think that you can choose which dimensions to export, and their order when written to the file may simply be alphabetical. I would export the complete details and then manipulate the details during an import process. The only other alternative that I can think of is to write a custom SSIS SQL script to allow for fact member aggregation if you choose to remove a dimension.
    But I would need to test further. Hope this helps.
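    As a rough illustration of the "aggregate after dropping a dimension" idea, a post-processing query might look like the following (a hedged sketch only; EXPORT_STAGE and its columns are hypothetical names for a table loaded from the exported file, not part of the standard export package):
    -- Hedged sketch: re-aggregate exported fact rows after removing one
    -- dimension column (RPTCURRENCY here, purely for illustration).
    SELECT account,
           entity,
           time_id,
           SUM(amount) AS amount
    FROM   export_stage
    GROUP  BY account, entity, time_id;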

  • How can we get Dynamic columns and data with RTF Templates in BI Publisher

    How can we get dynamic columns and data with RTF templates?
    My requirement is:
    create table xxinv_item_pei_taginfo(item_id number,
    Organization_id number,
    item varchar2(4000),
    record_type varchar2(4000),
    record_value CLOB,
    State varchar2(4000));
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'USES','fever','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'HOW TO USE','one tablet daily','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'SIDE EFFECTS','XYZ','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'DRUG INTERACTION','ABC','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'OVERDOSE','Go and see doctor','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'NOTES','Take after meal','TX');
    select * from xxinv_item_pei_taginfo;
    Item id Org Id Item Record_type Record_value State
    493991     224     1265-D30     USES     fever     TX
    493991     224     1265-D30     HOW TO USE     one tablet daily     TX
    493991     224     1265-D30     SIDE EFFECTS     XYZ     TX
    493991     224     1265-D30     DRUG INTERACTION     ABC     TX
    493991     224     1265-D30     OVERDOSE      Go and see doctor     TX
    493991     224     1265-D30     NOTES     Take after meal     TX
    Above is my data
    I have to fetch the record_type from a lookup, where I can have any of the record types; sometimes USES, HOW TO USE, SIDE EFFECTS, and sometimes some other set of record types.
    In my report I have to get these record types as field names dynamically, whichever are available in that lookup, with the record values against them.
    It's a BI Publisher report.
    Please suggest.

    If you have the data in the database then you can create XML with the needed structure, and from that you can create the BIP report.
    Do you have errors, or ... ?
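    One hedged way to shape that XML directly in the database (a sketch only, not from the original reply; EVALNAME turns each RECORD_TYPE value into an element name, and the underscore replacement is an assumption to keep those names valid):
    -- Hedged sketch: one XML fragment per item, with an element per record type,
    -- so the RTF template can loop over whatever elements are present.
    SELECT XMLELEMENT("ITEM",
             XMLATTRIBUTES(item AS "NAME", state AS "STATE"),
             XMLAGG(
               XMLELEMENT(EVALNAME REPLACE(record_type, ' ', '_'), record_value)
             )
           ).getClobVal() AS item_xml
    FROM   xxinv_item_pei_taginfo
    GROUP  BY item, state;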

  • Oracle BPEL - Does not cater for not null columns and use of "default".

    BPEL fails with message:
    ORA-01400: cannot insert NULL into ("EDDB"."SEISMIC_LINES"."COORD_SYSTEM_ID")
    But SQL*PLUS command works:
    INSERT into EDDB.SEISMIC_LINES
    (etc)
    regards
    Allan Ford
    Analyst / Programmer - IT Application Services, IT Services, Shared Business Services
    Santos Ltd
    Level 4, 91 King William Street, Adelaide SA 5000
    Phone: 08 8224 7944 Fax: 08 8218 5320
    Email: [email protected]

    Note: BPEL keeps its own "offline" copy of table and database items. A column that is marked NOT NULL in the database can be marked as nullable in this area (if you know that a trigger is going to cater for this).
    One workaround is to use a trigger to provide the value rather than relying on the column default.
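    A minimal sketch of that trigger workaround (the trigger name and the fallback value are assumptions; use whatever default the application actually expects):
    -- Hedged sketch: supply COORD_SYSTEM_ID when the caller (e.g. the BPEL DB adapter)
    -- inserts NULL, instead of relying on the column DEFAULT clause.
    CREATE OR REPLACE TRIGGER eddb.seismic_lines_coord_bi   -- hypothetical trigger name
    BEFORE INSERT ON eddb.seismic_lines
    FOR EACH ROW
    WHEN (NEW.coord_system_id IS NULL)
    BEGIN
      :NEW.coord_system_id := 1;   -- assumed fallback value; replace with the real default
    END;
    /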

  • [Forum FAQ] SharePoint 2013: Extracting values from a multi-value enabled lookup column and merge values to a multi-value enabled column

    For some business requirements, users want to extract values from a multi-value enabled lookup column and add items to another list based on each separate value. In contrast, others want to find duplicate values in the list, merge the associated values into a multi-value enabled column, and then add items to another list based on the merged value. All of these can be achieved using SharePoint Designer 2013 Workflow.
    How to extract values from a multi-value enabled lookup column and add items to another list based on each separate value using SharePoint Designer 2013.
    Important actions: Loop Shape; Utility Actions
    Three scenarios
    Things to note
    Steps to create Workflow
    How to merge values to a multi-value enabled column and add items to another list based on the merged value using SharePoint Designer 2013.
    Important actions: Call HTTP Web Service; Build Dictionary
    Things to note
    Steps to create Workflow
    How to extract values from a multi-value enabled lookup column and add items to another list based on each separate value using SharePoint Designer 2013.
    For example, they have three lists as below. They want to extract values from the Destinations column in Lookup2 and add items to Lookup3 based on each country, and set Title to Current Item: ID.
    Lookup1:
    Title (Single line of text)
    Lookup2:
    Title (Single line of text), Destinations (Lookup; Get information from: Lookup1 in Title column).
    Lookup3:
    Title (Single line of text), Country (Single line of text).
    Important actions
    1. Loop Shape: SharePoint Designer 2013 supports two types of loops: loop n times and loop with condition.
    Loops must also conform to the following rules:
    Loops must be within a stage, and stages cannot be within a loop.
    Steps may be within a loop.
    Loops may have only one entry and one exit point.
    2. Utility Actions: It contains many actions, such as ‘Extract Substring from Index of String’ and ‘Find substring in String’.
    Three scenarios
    We need to loop through the string returned from the lookup column and look for commas. There are three scenarios:
    1.  No comma, but the string is non-empty, so there is only one country.
    2.  At least one comma, so there are two or more countries to loop through.
    3.  In the loop we have consumed all the commas, so we have found the last country.
    Things to note
    There are two things to note:
    1. "Find string in string (output to Variable: index)" will return -1 if it doesn't find the searched-for string.
    2. In the opening statement "Set Variable: Countries to Current Item:Destinations", set the return field to "Lookup Values, Comma Delimited".
    Steps to create Workflow
    Create a custom list named Lookup1.
    Create a custom list named Lookup2, add column: Destinations (Lookup; Get information from: Lookup1 in Title column).
    Create a custom list named Lookup3, add column: Country (Single line of text).
    Create a workflow associated to Lookup2.
    Add conditions and actions:
    Start the workflow automatically when an item is created.
    Add item to Lookup2, then workflow will be started automatically and create multiple items to lookup3.
    See the below in workflow History List:
    How to merge values to a multi-value enabled column and add an item to another list based on the merged value using SharePoint Designer 2013.
    For example, they have three lists as below. They want to find duplicate values in the Title column in Lookup3, merge the Country column into a multi-value enabled column, and then add an item to Lookup2 and set the Title to Current Item: Title.
    Lookup1:
    Title (Single line of text)
    Lookup3:
    Title (Single line of text), Country (Single line of text).
    Lookup2:
    Title (Single line of text), Test (Single line of text).
    Important actions
    "Call HTTP Web Service"
    action: In SharePoint 2013 workflows, we can call a web service using a new action introduced in SharePoint 2013 named Call HTTP Web Service. This action
    is flexible and allows you to make simple calls to a web service easily, or, if needed, you can create more complex calls using HTTP verbs as well as allowing you to add HTTP headers.
    “Build Dictionary"
    action:
    The Dictionary variable type is a new variable type in the SharePoint 2013 Workflow.
    The following are the three actions specifically designed for the Dictionary variable type: Build Dictionary, Count Items in a Dictionary and Get an Item from a Dictionary.
    The "Call HTTP Web Service" workflow action would be useless without the new "Dictionary" workflow action.
    Things to note
    The HTTP URI is set to https://sitename/_api/web/lists/GetByTitle('listname')/items?$orderby=Id%20desc and the HTTP method is set to "GET". The list will then be sorted by Id in descending order.
    Use "Get d/results(0)/Id from Variable: ResponseContent (Output to Variable: maxid)" to get the Max ID.
    Use "Set Variable: minid to Current List:ID" to get the Min ID.
    Use "Copy from Variable: destianation, starting at 1 (Output to Variable: destianation)" to remove the space.
    Steps to create Workflow
    Create a custom list named Lookup1.
    Create a custom list named Lookup2, add column: Test (Single line of text).
    Create a custom list named Lookup3, add column: Country (Single line of text).
    Create a workflow associated to Lookup3.
    Add a new "Build Dictionary" action
    to define the http request header:
    Add a Call HTTP Web Serviceaction, click on
    this and paste your http request.
    To associate the
    RequestHeader variable, select the Call action property,
    set the
    RequestHeaders property to
    RequestHeader:
    In the Call action, click on
    response and associate the response to a new
    variable: ResponseContent (of type Dictionary).
    After the Call action add Get item from Dictionary action to get the Max ID.
    Add Set Workflow Variable action to get the Min ID.
    Add Loop Shape (Loop with Condition) to get all the duplicate titles and integrate them to a string.
    Create item in Lookup2.
    The final Stage should look like this:
    Start the workflow automatically when an item is created.
    Add item to Lookup3, then workflow will be started automatically and create item to lookup2.
    See the below in workflow History List:
    References
    SharePoint Designer 2013 - Extracting values from a multi-value enabled lookup column into a dictionary as separate items:
    http://social.technet.microsoft.com/Forums/en-US/97d34468-1b53-4741-88b0-958472f8ca9a/sharepoint-designer-2013-extracting-values-from-a-multivalue-enabled-lookup-column-into-a
    Workflow actions quick reference (SharePoint 2013 Workflow platform):
    http://msdn.microsoft.com/en-us/library/jj164026.aspx
    Understanding Dictionary actions in SharePoint Designer 2013:
    http://msdn.microsoft.com/en-us/library/office/jj554504.aspx
    Working with Web Services in SharePoint 2013 Workflows using SharePoint Designer 2013:
    http://msdn.microsoft.com/en-us/library/office/dn567558.aspx
    Calling the SharePoint 2013 Rest API from a SharePoint Designer Workflow:
    http://sergeluca.wordpress.com/2013/04/09/calling-the-sharepoint-2013-rest-api-from-a-sharepoint-designer-workflow/

    GREAT info, but it may be helpful to note that when replacing a portion of the variable "Countries" with a whitespace character, you may cause the workflow to fail in a few specific cases (certain lookup fields will not accept this and will automatically cancel). I only found this out when recreating your workflow on a similar, but much more complex, list set.
    To resolve this issue, I used another utility action (Extract Substring from Index of List) to clear out the whitespace. I configured it as "Copy from Variable: Countries, starting at 1 (Output to Variable: Countries)", which takes care of this issue in those few cases.
    Otherwise, WOW!  AWESOME JOB!  Thanks!  :)

Maybe you are looking for

  • Opening multiple databases in a single file

         Hello everybody, I need create 1000 small databases(tables) in a single physical file, how to solve? ( I have read chap 3 Access Method Operations in BDB programmer reference,  so I know it's need create a shared database environment, but I don'

  • Vendor consignment warehouse

    Hi All, I wonder if you can give me some tips on how to handle the following requirement. My company wants to put in place a consignment warehouse for one vendor. This vendor is now delivering components directly to 5 different plants (1 SAP & 4 lega

  • NiScope EX Multi Record

    Hi: I have a question on niScope EX Multi Record. I tried to put 10ohm in the BNC cable. See my attached graph. Then, I press the "Bark" button on my test fixture. It will generate the waveform using the niScopeEX Multi Record vi. But, if I use 100oh

  • Workflow for OWB. Abort inactive process

    Hello to All! I have some problem with stopping of inactive processflow. I usually use procedure WORKFLOW.WF_ENGINE.ABORTPROCESS and sometime i have bad results of working it procedure. Please help me kill inactive processflow (any guide, idea?). Tha

  • Muse vs Wordpress for Real Estate site?

    I need to make a fairly complex site for a real estate developer who has several subdivisions and offers extensive services.  I would love to use Muse for the design flexibility, but I can't find how I can make a searchable database of properties tha