Sort Sequence for Multi-Dimension Tables in SAP Lumira

Hi Experts,
I noticed an interesting behavior in SAP Lumira with multi-dimension tables.
For example, we have a table like the one below:
Country  City   GSV
A1           B        100
               C        150
               D        130
A2           E        200
A3           F        100
               G        150
If you turn on the subtotal for GSV in a Lumira table, the result will be like below:
Country  City   GSV
A1           B        100
               C        150
               D        130
               Total   380
A2           E        200
               Total   200
A3           F        100
               G        150
               Total   250
Sorting by GSV descending, the result will look like this:
Country  City   GSV
A1           C        150
               D        130
               B        100
               Total   380
A2           E        200
               Total   200
A3           G        150
               F        100
               Total   250
Here the sort sequence for City under each country is correct.
But the sequence for Country is incorrect; I think the correct sequence for Country would be:
Country  City   GSV
A1           C        150
               D        130
               B        100
               Total   380
A3           G        150
                F        100
               Total   250
A3           E        200
               Total   200
It looks like this issue is not specific to Lumira; tools from other vendors behave the same way.
Would you kindly let me know whether the sort sequence should be applied from the first (highest) level down to the lowest level?
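The behavior I expect can be sketched outside Lumira (a minimal Python illustration, not Lumira code): sort the countries by their subtotal descending, then the cities by GSV descending within each country.

```python
rows = [
    ("A1", "B", 100), ("A1", "C", 150), ("A1", "D", 130),
    ("A2", "E", 200),
    ("A3", "F", 100), ("A3", "G", 150),
]

# Group the rows by country so each country's subtotal can be computed
groups = {}
for country, city, gsv in rows:
    groups.setdefault(country, []).append((city, gsv))

# Countries ordered by subtotal descending; cities by GSV descending within each
for country in sorted(groups, key=lambda c: -sum(g for _, g in groups[c])):
    for city, gsv in sorted(groups[country], key=lambda r: -r[1]):
        print(country, city, gsv)
    print(country, "Total", sum(g for _, g in groups[country]))
```

With this data the countries come out in the order A1 (380), A3 (250), A2 (200), which is the sequence I would expect.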
Best regards
Alex Yang

Hi,
I think you've got a typo in the last block.
With regard to next steps: if you feel you have identified a defect, please log a support ticket via the normal routes.
If this is considered an enhancement, please submit a request at the Ideas Place for Lumira.
Many thanks,
H

Similar Messages

  • Sort Sequence for Multiple TO

    Hi,
    I need to configure the sort sequence for multiple TOs with different materials (fixed bin picking). I believe the bin coordinate needs to be configured for this. How do I assign a site to the warehouse number? Or please explain the configuration steps for the sort sequence.
    Appreciate your help on this.
    Thanks,
    Raja

    Hi Raja,
    In SPRO: Logistics Execution --> Warehouse Mgt --> Activities --> Define Print Control.
    Define the sorting fields under the button "Sort Profile/coll. proc." for the corresponding warehouse number.
    This is what is responsible for sorting transfer orders.
    Along with this you will also have to define:
    1. Spool Code
    2. Print Code
    3. Printer Labels
    Then make assignments to Warehouse Number, Storage Type, and the rest as per your requirement.
    Enjoy!
    Hiren Panchal

  • One sequence for multiple lookup tables?

    I am sorry if this is off-topic, but can anybody advise which is better: to use one sequence for all (lookup) tables, or to create a unique sequence for each table?
    Thanks
    DanielD

    Daniel,
    After you read this (http://asktom.oracle.com/pls/ask/f?p=4950:8:::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:2985886242221,), you may be sorry you asked.
    Scott

  • ADF FACES: how to preserve the sort criteria for an af:table

    How can I preserve the sort criteria on an af:table across page invocations? I've searched all through the forum and I don't see anything on this topic.
    I simply want the sort criteria (from when the user clicks on a column header) to be remembered across multiple uses of the page. I know that the control handles this itself for multiple invocations of the same page (like when you page through the table). But I need to preserve the sort order so I can install it again when someone leaves the page and then returns to it.
    I've tried various attempts using a SortListener to record the sort criteria, but I can't figure out how to reinstall the criteria without generating exceptions from the table control.
    Any pointers on how to do this would be greatly appreciated.
    Thanks.
    Larry.

    Ok, I've solved the problems with the odd behavior by always creating a new model when the table data changes and copying the sort criteria into the new model, like this:
            // Construct our own CollectionModel from this result set
            if (_model == null) {
                // Construct the initial data model and set the starting sort criteria
                ListDataModel m = new ListDataModel(results);
                _model = new SortableModel(m);
                // Set the sort criteria to last name
                ArrayList criteria = new ArrayList();
                criteria.add(new SortCriterion("lastName", true));
                _model.setSortCriteria(criteria);
            } else {
                // Construct a new model so the table "sees" the change
                ListDataModel m = new ListDataModel(results);
                SortableModel sm = new SortableModel(m);
                sm.setSortCriteria(_model.getSortCriteria());
                _model = sm;
            }
    But I end up with one final thing that doesn't work. In the "then" clause above, I try to set the initial sort criteria for the table, but it has no effect. When the table is rendered, it is not sorted in any way.
    How can I specify an initial sort order for the table? Why is it ignoring the sort criteria on the model?
    Thanks.

  • Loading the different result sets in the same sequence for the target table

    Dear all,
    I have 5 source tables, say A, B, C, D, etc., and I created 3 joins P, Q, R. The result sets of these 3 joins are loaded into a target table X through 3 different target operators with the same table name.
    Since my target table has a primary key, I created one sequence, say Y, and mapped it to the three different targets for the same target table.
    After deploying and executing successfully, I am able to load the data from the three join result sets, but with one continuous run of sequence numbers.
    I am looking to load the data like this: if the first result set P has 10 records, the second result set Q has 20, and the third has 30, then each target should be numbered starting from 1 (1..10 for the first, 1..20 for the second, 1..30 for the third), rather than the sequence continuing across the result sets.
    How can we achieve this in OWB?
    Any solution for this will be appreciated.
    thank you
    kumar

    My design is like the following:
    SRC1
    ----> Join1 -------------------------> Target1 (Table X) <----- Seq1
    SRC2
    SRC3
    ----> Join2 -----------> Target2 (Table X) <---- Seq1
    SRC4
    -----> Join3 -------> Target3 (Table X) <----- Seq1
    SRC5
    Here the 3 targets are for the same Table X, and the sequence is the same, i.e. Seq1.
    If the first join has 10 rows, Seq1 generates the sequence 1 to 10 while loading Target1.
    But while loading the second target, the same Seq1 continues generating new values from 11; I am looking to load Target2 and Target3 starting from sequence value 1, not from 11 or so.
    As per your comments:
    "You want to load 3 sources to one target with the same sequence numbers?"
    Yes.
    "Are you doing a match from the other two sources on the first source by the id provided by the sequence (since this is the primary key of the table)?"
    No.
    Can you please tell me how to approach this?
    Thank You
    Kumar
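    The per-result-set numbering I am after can be sketched in Python (illustration only, not OWB; Seq1 in OWB is a database sequence and does not restart by itself):

    ```python
    def number_per_result_set(result_sets):
        """Number rows 1..N independently within each result set,
        instead of continuing one shared sequence across all of them."""
        numbered = []
        for rows in result_sets:
            for i, row in enumerate(rows, start=1):
                numbered.append((i, row))
        return numbered

    p = ["p1", "p2", "p3"]  # pretend join P produced 3 rows
    q = ["q1", "q2"]        # pretend join Q produced 2 rows
    print(number_per_result_set([p, q]))
    ```

    One caveat: if the sequence column is the primary key of Table X, values that restart at 1 for every result set will collide, so the key would also have to include something identifying the source.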

  • Sort Sequence for Product Catalogue ISA B2B

    Hi there
    I'm trying to find another way of sorting items within a catalogue area. In the standard system the products are simply sorted by product number (ascending).
    The requirement is to be able to rearrange the sequence by alternate means, such as product description, so that the specific sort sequence is reflected when browsing the product catalogue in the web shop.
    We are using CRM 4.0 E-Selling B2B.
    Thanks in advance

    Thanks for the response.
    No, we use both manual and automatic too.
    In this program, can you re-import, or is it only used for first-time uploads?
    Secondly, does the sequence of products within a catalogue area remain as it is in your spreadsheet after the upload, or does it revert to the ascending order of the product numbers?
    thanks

  • AFS Sort - Sequence  (table J_3ASORT)

    We are in the process of upgrading from SAP 4.5B (AFS 2.5B) to ECC 5 (AFS 5).  Table J_3ASORT has been marked as 'obsolete' by SAP and no new/substitute table is mentioned (in the SAP documentation nor notes - that I could find) that will contain data that I can use in its place.
    J_3ASORT is used in many of our customer applications to control the sequence of our sizes / dimensions as they are presented on the screen or report.
    If I let the screen display in the sequence based on the dimension then we will get (for example):
    Dimension value Sort sequence
    1X              00001350    
    28              00000100    
    2X              00001400    
    30              00000200    
    32              00000300    
    34              00000400    
    36              00000500    
    38              00000600    
    3X              00001500    
    40              00000700    
    42              00000800    
    44              00000900    
    46              00001000    
    48              00001100    
    4X              00001600    
    50              00001200    
    52              00001300   
    The dimensions are not in the correct 'size' sequence; rather, they are in ASCII sort sequence.
    When I use the sort sequence, I get the more readable version of:
    Dimension value Sort sequence
    28              00000100    
    30              00000200    
    32              00000300    
    34              00000400    
    36              00000500    
    38              00000600    
    40              00000700    
    42              00000800    
    44              00000900    
    46              00001000    
    48              00001100    
    50              00001200    
    52              00001300    
    1X              00001350    
    2X              00001400    
    3X              00001500    
    4X              00001600   
    Short of creating a new Z table, does somebody know of a table that can be used to the same effect, i.e. to assign a sort sequence to the various dimensions?
    I very much appreciate your time and effort.
    Henry
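    The difference between ASCII order and the maintained sort sequence can be illustrated outside SAP with a small sketch (illustration only; the sort numbers are the ones from the table above):

    ```python
    # Maintained sort sequence: dimension value -> sort number (as in J_3ASORT)
    sort_seq = {"28": 100, "30": 200, "32": 300, "34": 400, "36": 500,
                "38": 600, "40": 700, "42": 800, "44": 900, "46": 1000,
                "48": 1100, "50": 1200, "52": 1300, "1X": 1350,
                "2X": 1400, "3X": 1500, "4X": 1600}

    sizes = ["1X", "28", "2X", "30", "4X", "52"]
    print(sorted(sizes))                    # plain ASCII order: "1X" before "28"
    print(sorted(sizes, key=sort_seq.get))  # maintained order: numerics first, then 1X..4X
    ```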

    Henry,
    Did you try this note: Note 856352 - AFS 5.0 upgrade: Wrong values order for grid characteristics? In AFS 5.0 the characteristics maintenance is now through CT04. The dimension sort sequence for the relevant grids is now maintained in table J_3APGDI. I guess you will have to populate this table using your dimension and grid relationships.
    I hope this piece of information was useful to you.
    Regards,
    Gary

  • How to Maintain Surrogate Key Mapping (cross-reference) for Dimension Tables

    Hi,
    What would be the best approach on ODI to implement the Surrogate Key Mapping Table on the STG layer according to Kimball's technique:
    "Surrogate key mapping tables are designed to map natural keys from the disparate source systems to their master data warehouse surrogate key. Mapping tables are an efficient way to maintain surrogate keys in your data warehouse. These compact tables are designed for high-speed processing. Mapping tables contain only the most current value of a surrogate key— used to populate a dimension—and the natural key from the source system. Since the same dimension can have many sources, a mapping table contains a natural key column for each of its sources.
    Mapping tables can be equally effective if they are stored in a database or on the file system. The advantage of using a database for mapping tables is that you can utilize the database sequence generator to create new surrogate keys. And also, when indexed properly, mapping tables in a database are very efficient during key value lookups."
    We have a requirement to implement cross-reference mapping tables with Natural and Surrogate Keys for each dimension table. These mappings tables will be populated automatically (only inserts) during the E-LT execution, right after inserting into the dimension table.
    Someone have any idea on how to implement this on ODI?
    Thanks,
    Danilo

    Hi,
    First of all, please avoid bolding everything. That said, according to Kimball (if I remember well) this is a 1:1 mapping, so no surrogate key is needed.
    Personally, I would use a lookup table:
    http://www.odigurus.com/2012/02/lookup-transformation-using-odi.html
    or a simple outer join filtering by your "Active_Flag" column (remember that this filter needs to be inside your outer join).
    Let us know
    Francesco
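    For what it's worth, the mapping-table idea from the quoted Kimball passage can be sketched in memory like this (purely illustrative Python; in a real warehouse the map would be a database table and the counter a database sequence):

    ```python
    class SurrogateKeyMapper:
        """Minimal sketch of a surrogate key mapping table:
        natural key -> current surrogate key."""

        def __init__(self):
            self._map = {}
            self._next = 1  # stands in for a database sequence

        def surrogate_for(self, natural_key):
            # Return the existing surrogate, or mint a new one on first sight
            if natural_key not in self._map:
                self._map[natural_key] = self._next
                self._next += 1
            return self._map[natural_key]

    mapper = SurrogateKeyMapper()
    print(mapper.surrogate_for("CUST-001"))  # mints surrogate 1
    print(mapper.surrogate_for("CUST-002"))  # mints surrogate 2
    print(mapper.surrogate_for("CUST-001"))  # returns 1 again
    ```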

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    How would you load an Oracle dimension table with a hierarchy of at least 5 levels, with surrogate keys at each level and a unique dimension key for the dimension table?
    With OWB, using surrogate keys at every level of a hierarchy is an integrated feature. You don't have to care about the parent-child relation: the load process of the mapping generates the right keys and handles the relation between parent and child inside the dimension key.
    I tried to use one interface per level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces into one big interface with a union data set per level, and added lookups for the right parent-child relation.
    I think making the interface like that is a bit too complicated.
    I would be more than happy to hear any suggestions. Thank you in advance!
    negib
    Edited by: nmarhoul on Jun 14, 2012 2:26 AM

    Hi,
    I do like the level keys feature of OWB; it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf in the built-in knowledge modules of ODI. It doesn't support creating dimension objects in the database by default, but there is nothing stopping you from coding up your own knowledge module (maybe use flex fields on the datastore to tag column attributes as needed).
    Your approach is what I would have done; possibly use a view (if you don't mind having it external to ODI) to make the interface simpler.

  • Creating sequences for all tables in the database at a time

    Hi ,
    I need to create sequences for all the tables in my database.
    I can create them individually, using TOAD and SQL*Plus.
    Can anyone give me code for creating the sequences dynamically, for all the tables at once?
    It is urgent.
    Regards.

    You wrote: "I need to create sequences for the majority of the tables that have an ID column."
    "The majority" is not the same as all. So you probably want to drive your generation script off the ALL_TAB_COLUMNS view, filtering where column_name = 'ID'.
    You need to think about this carefully. You might want different CACHE sizes or different INCREMENT BY clauses for certain tables. You might even (whisper it) want a sequence to be shared by more than one table.
    Code generation is a useful technique, but it is a rare application where one case fits all.
    Cheers, APC
    Blog : http://radiofreetooting.blogspot.com/
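    APC's suggestion of a generation script driven off ALL_TAB_COLUMNS can be sketched like this (illustrative Python; the naming convention and defaults are my assumptions, and per-table CACHE/INCREMENT tailoring is exactly the part that needs thought):

    ```python
    def sequence_ddl(tables, cache=20, increment=1):
        """Generate one CREATE SEQUENCE statement per table with an ID column.
        In practice the table list would come from:
          SELECT table_name FROM all_tab_columns WHERE column_name = 'ID'"""
        return [
            f"CREATE SEQUENCE {t}_SEQ START WITH 1 "
            f"INCREMENT BY {increment} CACHE {cache};"
            for t in tables
        ]

    for stmt in sequence_ddl(["CUSTOMERS", "ORDERS"]):
        print(stmt)
    ```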

  • Incremental load into the Dimension table

    Hi,
    I have a problem doing an incremental load of the dimension table. Before loading into the dimension table, I would like to check the data already in the dimension table.
    In my dimension table I have one NOT NULL surrogate key and other nullable dimension attributes. I populate the NOT NULL surrogate key with the sequence generator.
    To do the incremental load I did the following:
    I made a lookup into the dimension table and looked for a key. I passed the key from the lookup to an expression operator, where I created one field and hard-coded a flag based on the key from the lookup table. I passed this flag to a filter operator, along with the rest of the fields from the source.
    By doing this I am not able to pass the new records to the dimension table.
    Can you please help me.
    I have another question also.
    How do i update one not null key in the fact table.
    Thanks
    Vinay

    Hi Mark,
    Thanks for your help in solving my problem. I thought I'd share more information by giving the SQL.
    Below are the 2 SQL statements I would like to achieve through OWB.
    Both of the following tasks need to be accomplished after loading the fact table.
    task1:
    UPDATE fact_table c
    SET c.dimension_table_key =
    (SELECT nvl(dimension_table.dimension_table_key,0)
    FROM src_dimension_table t,
    dimension_table dimension_table
    WHERE c.ssn = t.ssn(+)
    AND c.date_src_key = to_number(t.date_src(+), '99999999')
    AND c.time_src_key = to_number(substr(t.time_src(+), 1, 4), '99999999')
    AND c.wk_src = to_number(concat(t.wk_src_year(+), concat(t.wk_src_month(+), t.wk_src_day(+))), '99999999')
    AND nvl(t.field1, 'Y') = nvl(dimension_table.field1, 'Y')
    AND nvl(t.field2, 'Y') = nvl(dimension_table.field2, 'Y')
    AND nvl(t.field3, 'Y') = nvl(dimension_table.field3, 'Y')
    AND nvl(t.field4, 'Y') = nvl(dimension_table.field4, 'Y')
    AND nvl(t.field5, 'Y') = nvl(dimension_table.field5, 'Y')
    AND nvl(t.field6, 'Y') = nvl(dimension_table.field6, 'Y')
    AND nvl(t.field7, 'Y') = nvl(dimension_table.field7, 'Y')
    AND nvl(t.field8, 'Y') = nvl(dimension_table.field8, 'Y')
    AND nvl(t.field9, 'Y') = nvl(dimension_table.field9, 'Y')
    WHERE c.dimension_table_key = 0;
    The fact table in the above SQL is fact_table,
    the dimension table is dimension_table,
    the source table for the dimension table is src_dimension_table,
    and dimension_table_key is a NOT NULL key in the fact table.
    task2:
    update fact_table cf
    set cf.key_1 =
    (select nvl(max(p.key_1),0) from dimension_table p
         where p.field1 = cf.field1
    and p.source='YY')
    where cf.key_1 = 0;
    The fact table in the above SQL is fact_table, the dimension table is dimension_table, and key_1 is a NOT NULL key in the fact table.
    Is it possible to achieve the above tasks through Oracle Warehouse Builder (OWB)? I created the mappings for loading the dimension table and the fact table and they are working fine, but I am not able to achieve the above two queries through OWB. I would be thankful if you could help me out.
    Thanks
    Vinay

  • How to read sort sequence and other properties of a specification in Cg02 .

    hello ,
    I need to update the specification properties in CG02 based on the sort sequence for hazards and marine pollutant of a specification.
    Do we have any standard BAPI or FM to read the properties of a specification?
    thanks for your help
    Ananta

    Well,
    specifications are rather complicated: there are many tables linked by internal keys. As most of them (and all in the following overview) begin with 'EST', I only use the subsequent letters, so table ESTRH is written as RH below.
    Here is an (incomplete) overview:
    Header: RH
    2nd level:
    - RR References
    - RI Identifiers
    - MJ Material Joins
    - VH Valuation Headers (containing field ESTCAT for valuation type)
    3rd level:
    - VA - Valuation (instance, containing the sequence number in field ORD, which is non-unique!)
    4th level - data of the valuation:
    - PR Data stored in class system (such as regulation, DG-Class)
    - DU Usage
    - VP Components
    - 0F Transport Classification (containing Hazard Inducers)
    In general you can determine the related table by looking at the technical info in CG02. The structure name will be like "RCGxxIOT", where xx stands for the last two letters of the table ESTxx. The structures of the interfaces in function group C1F5 are defined with a similar pattern: ESPRH_APIxx_TAB_TYPE.
    To read the data, you may use FM C1F5_SPECIFICATIONS_READ, which has sufficient documentation. You may refer to the documentation of the obsolete FM C1F2_SUBSTANCES_READ as well.
    A good alternative is BAPI_BUS1077_GETDETAIL; as this is a BAPI function it should keep working after an upgrade, while the FMs of C1F5 might need some adaptation.
    However, it will take some days to understand both the data model of specifications and the related FMs. I would personally recommend a developer who has been familiar with this area for a year or more before asking for implementations that go beyond reading data; such developers are hard to find, though...
    have fun!
    hp

  • Dimension table sharing

    Hi, Gurus,
    If two (or more) cubes have some identical dimensions, how do they share the dimension tables as SAP says? I think they can't, because they have different dimension table names even if they contain the same InfoObjects.
    Am I right? This is a performance question.

    Hi Kunlun,
    As far as I know, cubes cannot share dimension tables.
    For example, even if you have two cubes with the same structure, they will each have their own dimension tables and fact tables.
    When you first activate a cube named CUBENAME, the system automatically generates the following tables:
    Dimension table:
    /BIC/DCUBENAMEXX
    Fact table:
    /BIC/FCUBENAME
    So for different cubes, the dimension tables' names are different.
    From a performance perspective, sharing dimensions would save disk space but would not benefit query performance. You can imagine that if the system shared a dimension between one big cube and one small cube, query execution time on the small cube would be longer.
    Regards,
    Frank

  • Should my dimension table be this big

    I'm in the process of building my first product dimension for a star schema and am not sure if I'm doing this correctly. A brief explanation of our setup:
    We have products (dresses) made by designers for specific collections, each of which has a range of colors, and each color can come in many sizes. For our UK market this equates to some 1.9 million product variants. Flattening the structure out to product+designer+collection gives about 33,000 records, but when you add all the colors and then all the colors' sizes it pumps that figure up to 1.9 million. My "rolled my own" incremental ETL load runs OK just now, but we are expanding into the US market and our projections indicate that our product variants could multiply 10-fold. I'm a bit worried about the performance of the ETL now and in the future.
    Is 1.9m records not an awful lot to incrementally load (and analyse) nightly for a dimension table, never mind what that figure may grow to when we go to the US?
    I thought of somehow reducing this by using a snowflake, but would that not just reduce the number of columns in the dimensions and not the row count?
    I then thought of separating the colors and sizes into their own dimensions, but this doesn't seem right as they are attributes of products, and I would also lose the relationship between product, size & color, i.e. I would have to go through the fact table (which I've read is not a good choice) for any analysis.
    Am I correct in thinking these are big numbers for a dimension table? Is it even possible to reduce the number somehow?
    Still learning, so I welcome any help.
    Thanks

    Hi Plip71,
    In my opinion, it is always good to reduce the dimension volume as much as possible for better performance.
    Is there any hierarchy in your product dimension? Going for a snowflake for this problem is a bad idea.
    Solution 1:
    From the details given, it is good to split color and size into separate dimensions. This will reduce the volume of the dimension and increase the column count in the fact table (a separate WID has to be maintained there), but it will improve the performance of the cube. Before doing this, please check the layout requirement with the business.
    Solution 2:
    Check the distinct count of item variants used in the fact table. If it is very low, then try creating a linear product dimension, i.e. create a view for the product dimension doing an inner join with the fact table, so that only the used dimension members are loaded into the cube's product dimension. Hence volume is reduced, with an improvement in performance and stability of the cube.
    Thanks in advance; sorry for the delayed reply ;)
    Anand
    Please vote as helpful or mark as answer, if it helps Regards, Anand

  • "Select" Physical table as LTS for a Fact table

    Hi,
    I am very new to OBIEE, still in the learning phase.
    Scenario 1:
    I have a "Select" Physical table which is joined (inner join) to a Fact table in the Physical layer. I have other dimensions joined to this fact table.
    In BMM, I created a logical table for the fact table with 2 Logical Table Sources (the fact table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the fact table and the select table, I don't see any data for the select table column.
    Scenario 2:
    In this scenario, I created an inner join between "Select" physical table and a Dimension table instead of the Fact table.
    In BMM, I created a logical table for the dimension table with 2 Logical Table Sources (the dimension table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the dimension table and the select table, I see data for all the columns.
    What am I missing here? Why is it not working in the first scenario?
    Any help is greatly appreciated.
    Thanks,
    SP

    Hi,
    If I understand your description correctly, your materialized view skips some dimensions (the infrequent ones). However, when you reference these skipped dimensions in filters, the queries hit the materialized view and fail, as those values do not exist there. In this case, you could resolve it as follows:
    1. Create dimensional hierarchies for all dimensions.
    2. In the fact table's logical sources set the content tabs properly. (Yes, I think this is it).
    When you skipped some dimensions, the grain of the new fact source (the materialized view in this case) is changed. For example:
    Say a fact is available with the keys for Product, Customer, Promotion dimensions. The grain for this is Product * Customer * Promotion
    Say another fact is available with the keys for Product, Customer. The grain for this is Product * Customer (In fact, I would say it is Product * Customer * Promotion Total).
    So in the second case, the grain of the table is changed. So setting appropriate content levels for these sources would automatically switch the sources.
    So, I request you to try these settings and let me know if it works.
    Thank you,
    Dhar
