Aggregator Transformation in ODI

Hi,
How do I implement aggregation using ODI interfaces,
and transform the data according to aggregate functions like
MIN
MAX
AVG
COUNT
FIRST
LAST
MEDIAN
PERCENTILE
STDDEV
SUM
VARIANCE
Where do I specify the columns to be included in the GROUP BY clause?
For example, in Informatica we have the Aggregator transformation, where we can specify the columns to be considered in the GROUP BY.
Thanks

Hi Ace2,
When I use the aggregate function in the expression editor of the target column, I can see the GROUP BY clause in the SQL in the Operator,
but the GROUP BY has all the other columns.
Say I have 10 columns in my table; when I apply SUM(coln1), it generates the SQL as
select sum(coln1)
from table
group by coln2, coln3, ..., coln10
I would like to specify only some of the columns in the GROUP BY clause.
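For illustration only (my_table and coln1 through coln10 are placeholder names), here is the shape of the statement ODI generates today, with every non-aggregated mapped column in the GROUP BY, followed by the statement I would like to be able to produce:
-- what ODI currently generates
select coln2, coln3, coln4, coln5, coln6, coln7, coln8, coln9, coln10, sum(coln1)
from my_table
group by coln2, coln3, coln4, coln5, coln6, coln7, coln8, coln9, coln10;

-- what I would like: group by only the columns I choose
select coln2, coln3, sum(coln1)
from my_table
group by coln2, coln3;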
Thanks
Is there any option to attach screenshots in this discussion forum?
Edited by: user4315565 on Mar 22, 2010 3:51 PM

Similar Messages

  • What are the different types of transformations in ODI?

    Hi all,
    I'm new to the ODI tool. My source is flat files and my target is Teradata. Please tell me which knowledge modules I have to use.
    What are the different types of transformations in the ODI tool? Thanks in advance.
    Regards
    suresh

    Hi,
    Check the following KMs:
    LKM File to Teradata
    IKM File to Teradata
    ODI has some basic transformations like Joiner, Filter, etc.
    You can refer to the User Guide for details about these transformations.
    Thanks,
    Sutirtha

  • How to handle errors for a file-to-file transform in ODI

    I am doing a lab for a file-to-file transformation where the source is a CSV file and the target is a flat file.
    1) When I change the datatype in the source, two files are created: one containing the rejected data and the other containing the error message. How do I handle the rejected data?
    2) If the target path is changed, the session in ODI shows as completed when it should error out. In this case no files are created as before. How do I handle this type of error?

    Hi,
    I have used the following KMs in my transformation with the following options:
    IKM SQL Incremental Update
    INSERT    <Default>:true
    UPDATE    <Default>:true
    COMMIT    <Default>:true
    SYNC_JRN_DELETE    <Default>:true
    FLOW_CONTROL    <Default>:true
    RECYCLE_ERRORS    <Default>:false
    STATIC_CONTROL    <Default>:false
    TRUNCATE    <Default>:false
    DELETE_ALL    <Default>:false
    CREATE_TARG_TABLE    <Default>:false
    DELETE_TEMPORARY_OBJECTS     <Default>:true
    LKM SQL to SQL
    DELETE_TEMPORARY_OBJECTS    <Default>:true
    CKM Oracle
    DROP_ERROR_TABLE    <Default>:false
    DROP_CHECK_TABLE    <Default>:false
    CREATE_ERROR_INDEX    <Default>:true
    COMPATIBLE    <Default>:9
    VALIDATE    <Default>:false
    ENABLE_EDITION_SUPPORT    <Default>:false
    UPGRADE_ERROR_TABLE    true

  • How to handle errors for a DB-table-to-DB-table transform in ODI

    Hi,
    I have created two tables in two different schemas, source and target, where there is a field (e.g. PLACE) whose datatype is VARCHAR2 and the data inserted is a string.
    In the ODI Designer model I have set the type of PLACE as NUMBER in both source and target and done the mapping accordingly.
    When it is executed it should give an error, but it completes, and no data is inserted into either the target table or the error table in the target schema (E$_TARGET_TEST, which is created automatically).
    Why is the error not raised, and how do I handle this type of error?
    Please help.
    The DDL for the source and target tables is as follows:
    Source table:
    CREATE TABLE "DEF"."SOURCE_TEST"
        "EMP_ID"   NUMBER(9,0),
        "EMP_NAME" VARCHAR2(20 BYTE),
        "SAL"      NUMBER(9,0),
        "PLACE"    VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    Inserted data:
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('1', 'ani', '12000', 'kol');
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('2', 'priya', '15000', 'jad');
    Target table:
    CREATE TABLE "ABC"."TARGET_TEST"
        "EMP_ID"     NUMBER(9,0),
        "EMP_NAME"   VARCHAR2(20 BYTE),
        "YEARLY_SAL" NUMBER(9,0),
        "BONUS"      NUMBER(9,0),
        "PLACE"      VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE

    Hi,
    I have used the following KMs in my transformation with the following options:
    IKM SQL Incremental Update
    INSERT    <Default>:true
    UPDATE    <Default>:true
    COMMIT    <Default>:true
    SYNC_JRN_DELETE    <Default>:true
    FLOW_CONTROL    <Default>:true
    RECYCLE_ERRORS    <Default>:false
    STATIC_CONTROL    <Default>:false
    TRUNCATE    <Default>:false
    DELETE_ALL    <Default>:false
    CREATE_TARG_TABLE    <Default>:false
    DELETE_TEMPORARY_OBJECTS     <Default>:true
    LKM SQL to SQL
    DELETE_TEMPORARY_OBJECTS    <Default>:true
    CKM Oracle
    DROP_ERROR_TABLE    <Default>:false
    DROP_CHECK_TABLE    <Default>:false
    CREATE_ERROR_INDEX    <Default>:true
    COMPATIBLE    <Default>:9
    VALIDATE    <Default>:false
    ENABLE_EDITION_SUPPORT    <Default>:false
    UPGRADE_ERROR_TABLE    true

  • Can we replicate Informatica Lookup transformation in ODI Interface?

    I was just wondering if we could replicate a lookup transformation with an ODI join of some sort, where only one value is returned for multiple matches of the same item, something like a max per group while joining two tables.
    For example, I have two tables: A with columns (attendee_id, xyx, byx)
    and B with columns (partnerid, attendee_id, xyz, bxc).
    I want to join tables A and B on attendee_id and get the partner_id, but I want only one record in case of multiple matches for a particular attendee_id.
    Can we do this?

    No responses yet... I've tried doing a left outer join, but it still returns all the matches; I want to return only one record. Any help from the experts?
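    One possible shape of the SQL, sketched here only as a suggestion (table and column names are taken from the post, and MAX is just one way of picking a single match):
    select a.attendee_id,
           max(b.partnerid) as partnerid
    from   a
    left outer join b
           on b.attendee_id = a.attendee_id
    group  by a.attendee_id;
    In an ODI interface this amounts to mapping MAX(B.PARTNERID) on the target column and letting ODI add ATTENDEE_ID to the generated GROUP BY, as described in the "Group by in ODI" reply further down.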

  • Data Loss in DB to DB Transformation in ODI

    Hi,
    I am facing data loss when trying a transformation for a DB-to-DB mapping in ODI.
    I have two tables in two different schemas with the following specifications. In the ODI Designer model I have set the type of PLACE as NUMBER in the target and VARCHAR2 in the source, and done the mapping accordingly. It works successfully when I put in the data ('12', 'ani', '12000', '55').
    Now for testing I am supplying the rows ('1', 'ani', '12000', '55') and ('2', 'priya', '15000', '65t'), and on execution it gives the expected error (ORA-01722: invalid number) in the task "Insert flow into I$ table". My C$ table is populated with the data from the source, but the E$, I$ and target tables are not.
    Now when I put the rows ('3', 'shubham', '12000', '56') and ('4', 'shan', '12000', '59') in the source, it completes successfully; the data in the C$ table is deleted and the data is inserted into the target table.
    My question is: where have the rows ('1', 'ani', '12000', '55') and ('2', 'priya', '15000', '65t') gone? If they are lost, which table can they be recovered from so that no data loss takes place?
    The DDL for the source and target tables is as follows:
    Source table:
    CREATE TABLE "DEF"."SOURCE_TEST"
        "EMP_ID"   NUMBER(9,0),
        "EMP_NAME" VARCHAR2(20 BYTE),
        "SAL"      NUMBER(9,0),
        "PLACE"    VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    Inserted data:
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('1', 'ani', '12000', '55');
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('2', 'priya', '15000', '65t');
    Target table:
    CREATE TABLE "ABC"."TARGET_TEST"
        "EMP_ID"     NUMBER(9,0),
        "EMP_NAME"   VARCHAR2(20 BYTE),
        "YEARLY_SAL" NUMBER(9,0),
        "BONUS"      NUMBER(9,0),
        "PLACE"      NUMBER(9,0),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    Thanks.

    So, first you have data in "DEF"."SOURCE_TEST".
    You then run your interface, and the data is moved into "ABC"."TARGET_TEST" if the interface executes successfully with no errors.
    Correct? Then there is no data loss.
    But if you're saying that you need to handle records which are going to cause the "invalid number" error, then you should read up on flow and static control and how to flag errors before loading them. Flow and static control allow ODI to identify erroneous records prior to loading; they'll be put in the E$ table for you to deal with later.
    If you haven't already, I'd encourage you to take a look at the documentation on this:
    Implementing Data Quality Control
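    To make both points concrete, a small sketch using the names from this thread (the exact E$ layout varies by ODI version, so the error-table columns below are only indicative):
    -- why the two rows cannot reach the target: '65t' does not convert to the
    -- NUMBER(9,0) PLACE column of ABC.TARGET_TEST
    SELECT TO_NUMBER('65t') FROM dual;   -- ORA-01722: invalid number

    -- with FLOW_CONTROL enabled and a suitable condition declared on the target
    -- datastore (for example "PLACE contains digits only"), the CKM diverts such
    -- rows to the error table, where they can be reviewed and recycled:
    SELECT err_type, err_mess, emp_id, emp_name, place
    FROM   abc.e$_target_test;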

  • Difference between the 'valid for consolidation' and aggregation properties on an ODI column

    Hi John,
    I have a query regarding the columns when we reverse the dimensions in ODI. In the Columns tab of each dimension there is a 'valid for consolidation' column. What does this column do when we select it? In any case I will be using the aggregation for the plan type, where I will be giving the consolidation operator. Could you please let me know the difference between them?
    There is also one more column, Operation. What does it do?

    Hi,
    You can ignore the 'valid for consolidation' column; as far as I am aware it is not used.
    Operation is for different types of load; these are:
    Update (default)–Adds, updates, or moves the member being loaded.
    Delete Level 0–Deletes the member being loaded if it has no children.
    Delete Idescendants–Deletes the member being loaded and all of its descendants.
    Delete Descendants–Deletes the descendants of the member being loaded, but does not delete the member itself.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Transformation in ODI - SQL command

    Hello,
    I would like to thank everybody for helping out people like me.
    I created an interface and it works successfully.
    Now I want to apply a filter on the source table.
    My source table1 has a column called PRICE, and some records have 0 in it, which I want to avoid by applying a filter.
    So what is the right SQL expression to apply? Roughly:
    when table1.price = 0 {
    take table1.product and get the value from the product that has the same attribute in (table2.MD) and the same year in a different table (table3.year) }
    Please help me out here, I am STUCK!

    Thank you for the quick answer.
    Sorry I couldn't explain my point very clearly.
    These are our source tables:
    http://i43.tinypic.com/qs3yvl.png
    I want to estimate the price of the car in table1
    by setting the price of the car to the price of another product that has the same integer in N (table2) and the same DATTE (table1).
    For example, in the figure,
    I want the car price to be the same as the house price on the same date (I chose house because car and house have the same record in N, table2).
    Can this be made to work by applying a filter?
    Naif
    Thank you
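    If I have understood the requirement, here is a very rough SQL sketch of that rule (all table and column names are guesses based on the screenshot, so treat this purely as a starting point):
    select t1.product,
           t1.datte,
           case
             when t1.price = 0 then
               (select max(o.price)
                  from table1 o
                  join table2 n_o on n_o.product = o.product
                  join table2 n_t on n_t.product = t1.product
                 where n_o.n     = n_t.n
                   and o.datte   = t1.datte
                   and o.product <> t1.product
                   and o.price   <> 0)
             else t1.price
           end as estimated_price
      from table1 t1;
    A plain filter on the source only removes the zero-price rows; replacing the value needs an expression like the CASE above (or a lookup) in the target mapping.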

  • ODI Transformations

    Hi All,
    I am in the process of developing complex interfaces (mappings) using ODI. I did a lot of development using another ETL tool, Informatica.
    Since I am in the initial stages, could you please help me find ways to implement the following transformations in ODI:
    1) Lookup transformation
    2) Insert/Update transformation
    3) Router transformation, etc.
    Please also suggest some good reference materials on the above topics.
    Thanks

    Hi,
    You have to work out your transformations in ODI Designer, in the "Diagram" and "Flow" tabs.
    Now, to do:
    1. Lookup: bring the lookup tables into the datastore by reversing them (just like your source and target tables/views) and drag them into the designer along with your source for transformations. Do your transformation either by drag and drop or by putting your expression in the "Expression Editor" after selecting the target column.
    2. Insert/Update: this is controlled by your ODI IKMs (Incremental Update).
    3. I don't know about the router transformation (one possible workaround is sketched below).
    Visit http://www.oracle.com/technology/products/oracle-data-integrator/10.1.3/htdocs/1013_support.html for the ODI docs.
    Thanks
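    On point 3: an ODI interface loads a single target, so a router is usually emulated with one interface per output, each with its own mutually exclusive filter on the shared source. A minimal sketch, assuming a hypothetical STATUS column on a source datastore SRC:
    -- filter on interface 1 (entered in the Expression Editor on the source)
    SRC.STATUS = 'ACTIVE'
    -- filter on interface 2
    SRC.STATUS <> 'ACTIVE'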

  • DATE Format Error in ODI-11g(11.1.1.3)

    I am using ETL transformations in ODI 11g (11.1.1.3). There is a persistent issue with date formatting when I map an ODI variable (storing a date) to a target column of datatype DATE.
    In all the source-to-target mappings I format the date using TO_DATE(), but I still get the error:
    "ORA-01830: date format picture ends before converting entire input string".
    A point to note: the same ETL runs fine in other environments, but in my new dev environment it gives this date error.
    I checked with the DBA folks and they confirmed the DATE settings are the same in all environments.
    The ODI Variable is defined as an "Alphanumeric".
    Tx Used: #BUSINESS_CURRENT_DT=TO_DATE('Date','YYYY-MM-DD')
    I need some urgent advice, please let me know.

    Hi,
    We store name-value pairs in the control table from which we extract our data. Both the parameter name and the parameter value are varchars.
    Well this looks pretty strange in 11g! Here's what we found out...
    If you try to retrieve a date variable using TO_DATE() in the refresh query, the ODI Java driver (JDK 1.6) uses java.sql.Timestamp and quietly attaches HH:MI:SS.NS to the date (YYYY-MM-DD HH:MI:SS.NS). My target is always a date!
    So, when I do TO_DATE('20101010','YYYY-MM-DD') in the refresh query, ODI stores it as '2010-10-10 00:00:00.0'.
    Because of this the load always fails, as Oracle cannot interpret the stored timestamp string with that TO_DATE() format mask.
    The Java driver does this damage. However, it may be wise to store it as a timestamp rather than a date if you do data capture and want the exact time.
    Unfortunately that is not a requirement for us right now, so I had to chop off the timestamp (see the sketch below).
    Let me know if you find any other details...
    Thanks.!
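    For reference, the "chop off the timestamp" workaround can be written as the expression below in the target mapping, assuming the variable stays alphanumeric and is refreshed with a value such as 2010-10-10 00:00:00.0 (only a sketch; the variable name comes from this thread):
    TO_DATE(SUBSTR('#BUSINESS_CURRENT_DT', 1, 10), 'YYYY-MM-DD')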

  • How can I insert an aggregated column name as a string in the target table?

    I have a large source table, with almost 70 million records.  I need to pull the sum of four of the columns into another target table, but instead of having the same four target columns I just want to have two.
    So, let's take SUM(col1), SUM(col2), SUM(col3), and SUM(col4) from the source DB and insert them into the target like this:
    SOURCE_COLUMN | AMOUNT
    col1          | SUM_AMOUNT
    col2          | SUM_AMOUNT
    col3          | SUM_AMOUNT
    col4          | SUM_AMOUNT
    I know how to do this in four separate Data Flows using the source, an Aggregation Transformation, a Derived Column (to hard code the SOURCE_COLUMN label), and destination... but with this many records, it takes over 3 hours to run because it has to loop
    through these records four separate times instead of once.  Isn't there a way to do this with one Data Flow?  I'm thinking maybe Conditional Split?
    Any help is appreciated, thanks!

    Hi,
    This can be achieved using the UNPIVOT transformation. The sample below uses this source query:
    SELECT 1 AS COL1, 2 AS COL2, 3 AS COL3, 4 AS COL4
    Set up the UNPIVOT transformation as shown in the original screenshots (not reproduced here); its output turns the four columns into SOURCE_COLUMN / SUM_AMOUNT rows.
    Hope this helps.
    Best Regards, Sorna
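    Since the screenshots are not reproduced, a plain T-SQL equivalent of the same reshaping may help; it aggregates the large table once and then unpivots the four totals (source_table is a placeholder name):
    SELECT u.SOURCE_COLUMN,
           u.AMOUNT AS SUM_AMOUNT
    FROM (SELECT SUM(col1) AS col1,
                 SUM(col2) AS col2,
                 SUM(col3) AS col3,
                 SUM(col4) AS col4
          FROM source_table) AS s
    UNPIVOT (AMOUNT FOR SOURCE_COLUMN IN (col1, col2, col3, col4)) AS u;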

  • What is the use of the lookup transformation in ODI?

    Hi Experts,
    What is the use of the lookup transformation in ODI?
    In ODI we use different kinds of joins, so my doubt is: what is the difference between lookup-transformation joins and the normal joins we use in ODI?
    Please share your valuable information.
    Thx,
    Sahadeva.

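    As far as I understand it, the practical difference is in the shape of the SQL that ODI ends up generating; a rough sketch with hypothetical ORDERS and CUSTOMERS tables:
    -- a lookup expressed as a scalar subquery in the SELECT list
    -- (expects a single matching row per driving row)
    SELECT o.order_id,
           (SELECT c.customer_name
              FROM customers c
             WHERE c.customer_id = o.customer_id) AS customer_name
      FROM orders o;

    -- the same lookup written as an ordinary (left outer) join in the FROM clause
    SELECT o.order_id,
           c.customer_name
      FROM orders o
      LEFT OUTER JOIN customers c
             ON c.customer_id = o.customer_id;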

  • Which is better, ODI or OWB?

    Hi,
    Which tool is better if both the source and the target are Oracle 10g?
    Suggestions are appreciated.
    Thanks.
    - Virag

    If your source and target are Oracle databases then OWB is the better tool, as it has more transformations than ODI.
    OWB uses the database engine for all its processing, so it will also be more efficient.
    If the source or target is some other database, then go with ODI.
    Thanks,
    Sutirtha

  • Parallel processing in ODI

    Hi experts,
    I have 30+ files in text format, that hold partitions of the same table. These partitions can/must go through the same processing in parallel, and then must be appended back to a single large file.
    I'd have no problem designing the transformations in ODI (since I have the SQL code), but I'm looking for an "elegant" way to tackle the problem. By elegant I mean modular, with minimal replication of work, etc.
    Thank you very much.
    Joan

    1. Create a dummy datastore in ODI, under the file model, with a column structure mimicking your file structure.
    2. Duplicate this datastore in the Oracle model.
    3. Duplicate the LKM File to Oracle (using the loader) and modify it so that both the target table and the data file/path come from KM options (add these options if they are not there already).
    4. Design one interface with these datastores and the new LKM. Use an IKM that does a straight create-target-table and load without bothering to create I$ (IKM SQL Control Append seems ideal, but you may need to modify it to make sure the target table comes from a variable). Use two variables (var2 and var3) to supply the new options in the LKM (and IKM if required).
    5. Create a scenario.
    6. Create an Oracle table that contains a sequence, target_table_name and source_file_with_path (see the sketch after this list).
    7. Create a package that sets a variable (say var1) to 1, uses the other two variables to hold target_table_name and source_file_with_path based on var1, and calls your scenario. Check for errors and, on success, increment var1 and loop.
    8. At the end, create your big table with partitions and write an ODI procedure to loop through the target tables and do a partition exchange.
    That is the most modular approach I can think of.
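    A sketch of the driver table from steps 6-7 and of the partition exchange from step 8 (all names are hypothetical):
    CREATE TABLE odi_file_loads (
      seq_no                NUMBER,
      target_table_name     VARCHAR2(30),
      source_file_with_path VARCHAR2(400)
    );

    -- step 8: swap each loaded work table into the partitioned target without moving data
    ALTER TABLE big_target_table
      EXCHANGE PARTITION p_01 WITH TABLE target_table_01
      INCLUDING INDEXES WITHOUT VALIDATION;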

  • Group by in ODI

    Hi ,
    How do I implement an Aggregator transformation in ODI?
    I want to use GROUP BY on a particular column.
    Regards,
    Srinivas

    ODI will generate the GROUP BY clause whenever you use any aggregation in your mapping.
    The GROUP BY clause is applied to all columns that are part of the SELECT/HAVING clause but have no aggregation applied to them.
    Suppose you want to select the maximum salary per DEPTNO; then you need to map MAX(SAL) and select DEPTNO as well.
    ODI will generate code like:
    select deptno,max(sal) from <table_name> group by deptno
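    And if a second non-aggregated column (say JOB) is also mapped on the target, ODI simply adds it to the generated GROUP BY as well, which is the behaviour the original poster asks about:
    select deptno, job, max(sal) from <table_name> group by deptno, job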
