How to aggregate data?

Hi,
I found that an interface can only map columns to columns. If I want to aggregate the source data before loading it to the target, how do I do it?
Thanks.

Hi Cheney,
You can add your aggregate function in your target column mapping.
For example:
in trg_col1 mapping ->
Max(src_col1)
If you include an aggregate function in a target column mapping, your KM will generate a query with a GROUP BY clause.
Regards,
Rathish AM
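
For illustration only, the generated query typically takes roughly this shape (a sketch; SRC_TABLE, src_col2 and trg_col2 are assumed names, not taken from the thread):

SELECT MAX(src_col1) AS trg_col1,
       src_col2      AS trg_col2
FROM   SRC_TABLE
GROUP BY src_col2;

Every target column mapped without an aggregate function ends up in the GROUP BY clause.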

Similar Messages

  • How to aggregate data in SQL Query

    Hi,
    I have Table1 with field1 and field2. The combination of these fields forms the key of this table.
    Next I have Table2 with field3 and field4. field1 is the unique key for this table.
    My query is:
    select T2.field4||','||T1.field2 from T1 inner join T2 on T1.field1 = T2.field3;
    In the result I want to aggregate the data by T2.field4
    How do I do that? Please help.
    Thanks in advance,
    Raja

    By using aggregate functions and GROUP BY:
    SQL> select object_type, count(*), sum(decode(status,'VALID',0,1)) inv_obs
      2  from all_objects
      3  group by object_type;
    OBJECT_TYPE                     COUNT(*)              INV_OBS
    CONSUMER GROUP                         2                    0
    INDEX PARTITION                      970                    0
    TABLE SUBPARTITION                    14                    0
    SEQUENCE                             226                    0
    SCHEDULE                               1                    0
    TABLE PARTITION                      349                    0
    PROCEDURE                             21                    0
    OPERATOR                              57                    0
    WINDOW                                 2                    0
    PACKAGE                              313                    0
    PACKAGE BODY                          13                    0
    LIBRARY                               12                    0
    PROGRAM                                9                    0
    INDEX SUBPARTITION                   406                    0
    LOB                                    1                    0
    JAVA RESOURCE                        771                    0
    XML SCHEMA                            24                    0
    JOB CLASS                              1                    0
    TRIGGER                                1                    0
    TABLE                               2880                    0
    INDEX                               4102                    0
    SYNONYM                            20755                  140
    VIEW                                3807                   72
    FUNCTION                             226                    0
    WINDOW GROUP                           1                    0
    JAVA CLASS                         16393                    0
    INDEXTYPE                             10                    0
    CLUSTER                               10                    0
    TYPE                                1246                    0
    EVALUATION CONTEXT                     1                    0
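    Applied to Raja's original query, a possible sketch (assuming Oracle 11g Release 2 or later for LISTAGG, and that the goal is one row per T2.field4 with the matching T1.field2 values collected into a comma-separated list):
    select T2.field4,
           listagg(T1.field2, ',') within group (order by T1.field2) as field2_list
    from   T1
    inner join T2 on T1.field1 = T2.field3
    group by T2.field4;
    If a count or sum per T2.field4 is wanted instead, replace LISTAGG with the appropriate aggregate function.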

  • How to aggregate data with BI Publisher using an OBIEE analysis

    Hello,
    I'm new to BI Publisher, and I have a concern about BIP's ability to aggregate data.
    I am creating a data source from an OBIEE analysis containing the columns "Year", "Month", and "Revenue".
    I want to use this source for one report containing several pages. One of the pages has a simple table displaying only the Year and the Revenue.
    But I get as many rows as there are months in the year, and I cannot find any way to have the data aggregated inside my BIP table.
    Can someone help me find a solution?
    Many thanks in advance

    Hi,
    Unfortunately BIP doesn't aggregate anything the way the BI Server does; it always shows data at the lowest level.
    If you use a query on the BI Server, let the BI Server do the aggregation and simply remove the "Month" column. If you don't want to remove it from your OBIEE analysis, copy the logical SQL from that analysis into BIP and select OBIEE as the data source.
    Then remove Month from your query.
    Regards, Machiel
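    As a sketch of Machiel's suggestion (the subject area and column names below are invented placeholders, not taken from the thread): once "Month" is removed from the logical SQL pasted into the BIP data set, the BI Server aggregates Revenue to the Year level before BIP receives the rows:
    SELECT
       "Time"."Year"     s_1,
       "Facts"."Revenue" s_2
    FROM "Sales - Subject Area"
    ORDER BY 1
    Because the Revenue column's aggregation rule is defined in the repository, no explicit GROUP BY is needed in the logical SQL.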

  • How to aggregate data in SNP aggregated planning?

    Dear Expert,
    Now, I want to aggregate demand for products A123, A124 and A224 at location K410 from two locations: 6610 and 6710.
    I have created a location hierarchy with root K410 and two leaves: 6610 and 6710.
    Now how can I aggregate the demand for A123, A124 and A224 in K410 from 6610 and 6710?
    thanks

    Hello,
    If the hierarchy master data is correctly created, activated and assigned to the correct model, you can try aggregated planning in the standard SNP aggregated planning book 9ASNPAGGR/SNPAGGR(1). Just load the data and use the 'Location Aggregation' function button.
    If you're new to SNP aggregated planning, please review the online documentation below for more detailed information. It is very important that you have the correct master data settings and planning book settings.
    http://help.sap.com/saphelp_scm70/helpdata/EN/2c/c557e9e330cc46b8e440fb3999ca51/frameset.htm
    Best Regards,
    Ada

  • How to refine data in Aggregate tables  in Oracle BI

    Hello!
    How do you refine the data in aggregate tables (created by the "create aggregates ..." statement) after the data in the corresponding database tables has been updated?
    We cannot simply use the "delete aggregates" statement and then "create aggregates" again, because "delete aggregates" eliminates all aggregates in a model, and there are several aggregates in our model that are intended to be refreshed at different times and over different periods.
    Thanks

    903830 wrote:
    Hi folks,
    suppose we have table emp01 with 10 records, and we create another table emp02 as
    create table emp02 as select * from emp01;
    now both the table have identical data.
    How to find "data" in two tables are identical?Why wouldn't they be identical? You've just created one table as a copy of the other. They are identical in data terms.

  • How to aggregate a column based on a date column (one row per week)?

    Hi,
    Consider the statement below for a daily basis, which works fine:
    SELECT ID, DATE, SUM(AMOUNT) FROM TABLE_NAME GROUP BY ID, DATE ORDER BY ID, DATE
    Along the same lines, I want output on a weekly, fortnightly, and monthly basis. How do I do this? Need your help...
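    One common approach (a sketch, assuming an Oracle database and that the date column is DATE-typed, here renamed TXN_DATE to avoid the reserved word) is to group on a truncated date: TRUNC(..., 'IW') gives the ISO week start and TRUNC(..., 'MM') the month start; a fortnight has no built-in format and needs a small expression of its own:
    -- weekly
    SELECT ID, TRUNC(TXN_DATE, 'IW') AS WEEK_START, SUM(AMOUNT)
    FROM   TABLE_NAME
    GROUP BY ID, TRUNC(TXN_DATE, 'IW')
    ORDER BY ID, WEEK_START;
    -- monthly
    SELECT ID, TRUNC(TXN_DATE, 'MM') AS MONTH_START, SUM(AMOUNT)
    FROM   TABLE_NAME
    GROUP BY ID, TRUNC(TXN_DATE, 'MM')
    ORDER BY ID, MONTH_START;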

  • Use of semantic groups to aggregate data

    In a number of threads - e.g. Semantic Groups in DTP it is stated that you cannot use semantic groups to aggregate data. In others, there are statements that this is only for handling the error stack.
    This I find puzzling, as the SAP Help says:
    "Choose Semantic Groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the same key are combined in a single data package.
    This setting is only relevant for DataStore objects with data fields that are overwritten. This setting also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected."
    The experience of one of my clients is that you can indeed use Semantic Groups, and you do not need to define an error DTP. (Change the error handling on the Update tab to, e.g., "1 Valid Records Update, No Reporting" - then the key fields displayed after pressing the Semantic Groups button become available.)
    Any comments? Have I misunderstood the point that the others have been making?
    matt

    Hi Matt,
    A semantic group is used to define the grouping of records by data package. If you select 0PLANT as the semantic group, each data packet will contain all records for a given set of 0PLANT values until the maximum packet size is reached. Example: the package size is set to 50K, and you have 30K records with 0PLANT = 0002 and 22K with 0PLANT = 0003. The first packet will then contain 52K records, covering plants 0002 and 0003. The next packet will start with plant 0004 and will contain all records for 0004; if that doesn't reach 50K records, plant 0005 will also be included in packet 2.
    Regards,
    Dae Jin

  • How to pull data from oracle GL to planning

    Hi all
    Can anyone suggest how to pull data from Oracle GL into Hyperion Planning? (We are using Hyperion Planning 9.3 with a limited licence for Analytic Services.) I guess we can use HAL... anything else?
    Any help is appreciated.
    regards
    M.V

    Hi
    Thanks for your response. Yep, you are exactly right, it is data only, but I do have a problem: these people have a licence for a limited version of Essbase.
    Let me give you the information from their licence agreement.
    The licence for the Hyperion Planning - System 9 programs includes a limited use licence for both the Hyperion Essbase - System 9 and Hyperion Financial Reporting - System 9 programs. Such limited use licence means that the Hyperion Essbase - System 9 and Hyperion Financial Reporting - System 9 programs may only be used to access data from the Hyperion Planning - System 9 program.
    Specifically, the Hyperion Essbase - System 9 program cannot be used to create Essbase cubes that do not contain data used by the Hyperion Planning - System 9 program, and the Aggregate Storage Option component of the Hyperion Essbase - System 9 program may not be used.
    1) What does the licence agreement really mean - can you install Essbase? If not, how can you create dimensions dynamically using Planning?
    2) Is it possible without Essbase? As far as I know it's not possible.
    3) Are we going to face any issues with this licence?
    4) Is it possible to pull data with Planning?
    I would appreciate it if anyone could resolve this issue.
    regards
    M.V

  • Aggregate data

    Hi All
    I loaded data into an aggregate. Now I want to see the data in that aggregate. How do I see the data?
    thanks in advance.
    reddy

    Hi,
    1. Right click on the cube -> select Maintain Aggregates -> in the screen that follows, select your aggregate -> Goto -> Aggregate Data (Shift+F9).
    2. Execute Tcode LISTCUBE -> Enter 'CUBE' and aggregate name.
    Regards,
    Vikrant.

  • Not sure if this is the right place...but is there any aggregate data repository/dictionary out there?

    And what I mean is...Is there any aggregate data dictionary out there that tells you how long and what type a data column should be?
    For example...Medical Provider Name...what is the normal length for something like this? Provider ID? Tax ID? Address Line 1? City? State?...Etc...
    Is something like EDI X12 considered the Bible of data dictionaries? You have to pay for that, don't you? Is there anything else I can reference?
    Thanks for your review and am hopeful for a reply.
    PSULionRP

    Provider ID is well defined:
    "National Provider Identifier Standard (NPI)
    The National Provider Identifier (NPI) is a Health Insurance Portability and Accountability Act (HIPAA) Administrative Simplification Standard. The NPI is a unique identification number for covered health care providers. Covered health care providers and
    all health plans and health care clearinghouses must use the NPIs in the administrative and financial transactions adopted under HIPAA. The NPI is a 10-position, intelligence-free numeric identifier (10-digit number). This means that the numbers do not carry
    other information about healthcare providers, such as the state in which they live or their medical specialty. The NPI must be used in lieu of legacy provider identifiers in the HIPAA standards transactions."
    LINK:
    http://www.cms.gov/Regulations-and-Guidance/HIPAA-Administrative-Simplification/NationalProvIdentStand/index.html?redirect=/NationalProvIdentStand/
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Aggregates Data

    Hi,
    I loaded the data a few days back and rolled it up to the aggregates. Now I need to delete the data from the cube and load fresh data. Do I need to delete my aggregate data as well? If so, how can I proceed? Please advise.
    Regards
    Arunkumar

    This can be done in 2 ways.
    1. Directly delete the request from the Manage tab. This will automatically delete the data from the aggregates as well.
    2. Selective Deletion: This will also automatically delete the data from aggregates.
    So you just need to delete the data from the infocube and automatically the data will be deleted from the aggregates.

  • How to populate data in a table control

    Hi all,
    I enter the matnr no. on screen no. 103, and validation is done on that screen only.
    Now, when I want to modify that record and I enter the matnr no. on screen 103, how do I get all the data for that number into the table control screen?

    Hi Darshan,
    Here is a detailed description of how to update data in a table control.
      Updating data in table control
    The ABAP language provides two mechanisms for loading the table control with data from the internal table and then storing the altered rows of the table control back to the internal table.
    Method 1: Read the internal table into the Table Control in the screen's flow logic. Used when the names of the Table Control fields are based on fields of the internal table.
    Method 2: Read the internal table into the Table Control in the module pool code. Used when the names of the Table Control fields are based on fields of the database table.
    Method 1 (table control fields = itab fields)
    In the flow logic we can read an internal table using the LOOP statement. Define the reference to the relevant table control by specifying WITH CONTROL <ctrl>.
    Determine which table entry is to be read by specifying CURSOR <ctrl>-CURRENT_LINE.
    After the read operation the field contents are placed in the header line of the internal table. If the fields in the table control have the same names as the internal table fields, they will be filled automatically. Otherwise we need to write a module to transfer the internal table fields to the screen fields.
    We must reflect any changes the user makes to the fields of the table control in the internal table; otherwise they will not appear when the screen is redisplayed after PBO processing (e.g. after the user presses Enter or scrolls). However, this processing should be performed only if changes have actually been made to the screen fields of the table control (hence the use of ON REQUEST).
    PROCESS BEFORE OUTPUT.
    LOOP AT ITAB_REG WITH CONTROL TCREG
    CURSOR TCREG-CURRENT_LINE.
    ENDLOOP.
    PROCESS AFTER INPUT.
    LOOP AT ITAB_REG.
    MODULE MODIFY_ITAB_REG.
    ENDLOOP.
    MODULE MODIFY_ITAB_REG INPUT.
    MODIFY ITAB_REG INDEX TCREG-CURRENT_LINE.
    ENDMODULE.
    Method 2 (table control fields = dict. fields)
    If using a LOOP statement without an internal table in the flow logic, we must read the data in a PBO module which is called each time the loop is processed.
    Since, in this case, the system cannot determine the number of internal table entries itself, we must use the EXIT FROM STEP-LOOP statement to ensure that no blank lines are displayed in the table control if there are no more corresponding entries in the internal table.
    PROCESS BEFORE OUTPUT.
    LOOP WITH CONTROL TCREG.
    MODULE READ_ITAB_REG.
    ENDLOOP.
    PROCESS AFTER INPUT.
    LOOP WITH CONTROL TCREG.
    CHAIN.
    FIELD: ITAB_REG-REG,
    ITAB_REG-DESC.
    MODULE MODIFY_ITAB_REG
    ON CHAIN-REQUEST.
    ENDCHAIN.
    ENDLOOP.
    MODULE READ_ITAB_REG OUTPUT.
    READ TABLE ITAB_REG INDEX TCREG-CURRENT_LINE.
    IF SY-SUBRC EQ 0.
    MOVE-CORRESPONDING ITAB_REG TO TCREG.
    ELSE.
    EXIT FROM STEP-LOOP.
    ENDIF.
    ENDMODULE.
    MODULE MODIFY_ITAB_REG INPUT.
    MOVE-CORRESPONDING TCREG TO ITAB_REG.
    MODIFY ITAB_REG INDEX
    TCREG-CURRENT_LINE.
    ENDMODULE.
    Updating the internal table
    Method 1
    PROCESS AFTER INPUT.
    LOOP AT ITAB_REG.
    CHAIN.
    FIELD: ITAB_REG-REG,
    ITAB_REG-DESC.
    MODULE MODIFY_ITAB_REG ON CHAIN-REQUEST.
    ENDCHAIN.
    ENDLOOP.
    MODULE MODIFY_ITAB_REG INPUT.
    ITAB_REG-MARK = 'X'.
    MODIFY ITAB_REG INDEX TCREG-CURRENT_LINE.
    ENDMODULE.
    Method 2
    PROCESS AFTER INPUT.
    LOOP WITH CONTROL TCREG.
    CHAIN.
    FIELD: TCREG-REG,
    TCREG-DESC.
    MODULE MODIFY_ITAB_REG ON CHAIN-REQUEST.
    ENDCHAIN.
    ENDLOOP.
    MODULE MODIFY_ITAB_REG INPUT.
    MOVE-CORRESPONDING TCREG TO ITAB_REG.
    ITAB_REG-MARK = 'X'.
    MODIFY ITAB_REG INDEX TCREG-CURRENT_LINE.
    ENDMODULE.
    Updating the database
    MODULE USER_COMMAND_100.
    CASE OK_CODE.
    WHEN 'SAVE'.
    LOOP AT ITAB_REG.
    CHECK ITAB_REG-MARK = 'X'.
    MOVE-CORRESPONDING ITAB_REG TO TCREG.
    UPDATE TCREG.
    ENDLOOP.
    WHEN ...
    ...
    ENDCASE.
    ENDMODULE.
    Hope this will solve your problem.
    Regards,
    Pavan.

  • How to populate data in PAY_PEOPLE_GROUPS table (People Group Flexfield)

    Hello
    We are migrating data from one Oracle instance to another; both are on the same version of Oracle Applications, 11.5.10.2. As part of the migration, can anybody let me know how to populate data in the People Group key flexfield (PAY_PEOPLE_GROUPS table)? Ideally I will create or update employee records from the source instance in the destination instance, and while creating or updating the employee records I can pass people_group_id when calling the assignment API. But before passing the group id to the API, I need the data populated in the PAY_PEOPLE_GROUPS table so that I can fetch the group id for the combination and pass it to the API. Please suggest...

    Thanks for the information! By any chance, do you have any sample code that creates/updates assignments with the People Group flexfield? When I check "hr_assignment_api.update_emp_asg_criteria", it only has a parameter to pass the people group id, and it has no segment parameters to pass individual segments.
    Also, please let me know if you have links to an HR API guide that will help me develop the interfaces...
    My requirement is that we have two instances: one instance is treated as the source/master for all HR-related activities, and we plan to develop an interface that keeps the other (dummy) instance in sync with the master.
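    As a rough sketch of the lookup described above (heavily hedged: SEGMENT1 and SEGMENT2 are assumed names, and the actual segment columns used depend on how the People Group flexfield structure is defined in each instance):
    SELECT people_group_id
    FROM   pay_people_groups
    WHERE  segment1 = :seg1
    AND    segment2 = :seg2;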

  • Open Hub: How-to doc "How to Extract data with Open Hub to a Logical File"

    Hi all,
    We are using Open Hub to download transaction files from InfoCubes to the application server, and would like to have a filename that is dynamic based on period and year, i.e. the period and year of the transaction data to be downloaded.
    I understand we could use a logical file for this purpose. However, we are not sure how to have the period and year dynamically derived in the filename.
    I have read a number of posted messages on SDN on a similar topic, and many have suggested a 'How-to' paper titled "How to Extract Data with Open Hub to a Logical Filename". However, I could not seem to get the document from the link given.
    Just wondering if anyone has the correct or latest link to the document; I would also appreciate it if you could share the document with everyone on SDN if you have a copy.
    Many thanks and best regards,
    Victoria

    Hi,
    After creating the open hub destination, press F1 in the 'Application server file name' text box. From the help window, click on 'Maintain client independent file names and file paths'; you will be taken to the Implementation Guide screen. Then:
    1. Click on 'Cross-client maintenance of file names' and create a logical file path via 'New Entries'.
    2. After creating the logical file path, go to 'Logical file name definition'. Enter your logical file name, the physical file (your file name followed by month or year, whatever is applicable - press F1 for more info), the data format (ASC), the application area (BW), and the logical path (choose via F4 the path you created in step 1).
    3. Go to 'Assignment of physical paths to logical path' and enter the syntax group; the physical path is the path you gave in the logical file name definition.
    We created a logical path/file name that identifies the file by system date, but your requirement seems to be the dynamic date of the transaction data; you may be able to achieve this by creating a variable. The F1 help will be of much use to you. All the above steps will help you create a dynamic logical file.
    Hope this helps you to some extent.
    Regards

  • How to set data in an RTF document?

    Hi friends,
    I have an RTF document; can anyone suggest how to set data in the cells of an RTF document? Is there any way?
    Thanks in advance..
    Regards ,
    Soumyanil

    Convert the ResultSet from the DB to an Object[][]; let's call it result.
    Then create a JTable (jTable1).
    On the JTable you need to define the headers and the data itself.
    You can get the headers from ResultSetMetaData. Convert these to an array again (headers).
    Now create a model and set it on the JTable:
    TableModel model = new DefaultTableModel(result, headers);
    jTable1.setModel(model);
    That's about the basics.
    If you need more info, see the tutorial on Sun's homepage.
    How to use Tables:
    http://java.sun.com/docs/books/tutorial/uiswing/components/table.html
