Aggregating a Slowly Changing Dimension

Hi All:
I have a problem with a whole lot of changes in the dimension values (SCD) and need to create a view or stored procedure.
Two tables within the Oracle DB are joined:
Tbl1: Store Summary, consisting of Store ID, SUM(Sales Qty)
Tbl2 (View): Store View, which consists of Store ID, Name, Store_Latest_ID
Join relationship: Store_summary.Store_ID = Store_View.Store_ID
When I pull up the report it gives me this info:
Ex:
Store ID, Name, Sales_Qty, Store_Latest_ID
121, Kansas, $1200, 1101
1101, Dallas, $1400, 1200
1200, Irvine, $1800, Null
141, Gering, $500, 1462
1462, Scott, $1500, Null
1346, Calif, $1500, 0
There is no effective date within the store view, but one can be added if requested.
Constraints on the output:
1) If the Store Latest ID = 0, the store hasn't been shifted (Ex: Store ID = 1346).
2) If the Store Latest ID = 'XXXX', that value replaces the old Store ID and subsequent records are added to the DB under the new Store ID (Ex: 121 to 1101, 1101 to 1200, 141 to 1462).
3) Output needed: everything rolled up to the new Store ID, irrespective of the number of records. Within the view or stored procedure, whenever there is a Store Latest ID it should be assigned to the Store ID (Ex: the final Latest Store ID record for every chain of changing Store ID values), and if the value of the Latest Store ID is 0 then the record is left unchanged.
I need the output to look like this:
Store ID, Name, Sales_Qty, Store_Latest_ID
1200, Irvine, $4400, Null
1462, Scott, $2000, Null
1346, Calif, $1500, Null or 0
The Query I wrote for the view creation:
Select ss.Store_ID, ss.Sales_Qty, 0 as Store_Latest_ID
From Store_Summary ss, Store_Details sd
Where ss.Store_ID=sd.Store_ID and sd.Store_Latest_ID is null
union
Select sd.Store_Latest_ID, ss.Sales_Qty, null
From Store_Summary ss, Store_Details sd
Where ss.Store_ID=sd.Store_Latest_ID and sd.Store_Latest_ID is not null
Placing a join from the created view to Store Summary ended up giving the aggregation values without rolling up; the Store IDs which have no latest ID end up with a value of 0 and the SS quantity aggregated; if a Store ID changes more than two times the SS quantity is not aggregated up to the latest ID; and it does not give the store name of the latest Store ID.
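(For reference, one way this kind of rollup is sometimes handled in Oracle is to resolve each store to its final successor first and only then aggregate. The sketch below is illustrative only: it assumes the table and column names above, treats a Store_Latest_ID of NULL or 0 as "no successor", and needs Oracle 11gR2 or later for the recursive WITH clause.)
-- Sketch only: resolve every Store_ID to its latest successor, then roll up Sales_Qty
WITH chain (store_id, latest_id) AS (
  -- Anchor: stores with no successor map to themselves
  SELECT sd.store_id, sd.store_id
  FROM   store_details sd
  WHERE  sd.store_latest_id IS NULL OR sd.store_latest_id = 0
  UNION ALL
  -- Recursion: a store pointing at an already-resolved store inherits that store's latest ID
  SELECT sd.store_id, c.latest_id
  FROM   store_details sd
  JOIN   chain c ON sd.store_latest_id = c.store_id
)
SELECT c.latest_id       AS store_id,
       sd.name           AS name,
       SUM(ss.sales_qty) AS sales_qty
FROM   store_summary ss
JOIN   chain c          ON ss.store_id = c.store_id
JOIN   store_details sd ON sd.store_id = c.latest_id
GROUP  BY c.latest_id, sd.name;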
I need help to create a view or stored procedure
Please let me know if you have any questions, Thanks.
I would be really grateful for any suggestions.
Thanks
Vamsi

Hi
Please see the following example:
ID  - Name  - Dependants
100 - Tom   - 5
101 - Rick  - 2
102 - Sunil - 2
See the above contents... assume the ID represents the employee ID and the dependants include parents, spouse and kids.
Over a period of time the dependants may increase, but no one is sure exactly when; assume, for example, that a single employee gets married and the number of dependants increases.
So the attributes of the Employee change slowly over time.
This kind of dimension is called a slowly changing dimension.
Regards
N Ganesh
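To make the common Type 2 handling of such a dimension concrete, here is a minimal sketch (the table, sequence and column names are illustrative only, not taken from any post in this thread): every change closes the current row and inserts a new version, so the history is kept.
-- Hypothetical Type 2 employee dimension: one row per version of an employee
CREATE TABLE dim_employee (
  emp_sk      NUMBER PRIMARY KEY,  -- surrogate key, one per version
  emp_id      NUMBER,              -- business key (100, 101, ...)
  name        VARCHAR2(50),
  dependants  NUMBER,
  valid_from  DATE,
  valid_to    DATE                 -- NULL marks the current version
);
-- When Tom's dependants change from 5 to 6: close the current row ...
UPDATE dim_employee
   SET valid_to = SYSDATE
 WHERE emp_id = 100
   AND valid_to IS NULL;
-- ... and insert the new version (seq_dim_employee is a hypothetical sequence)
INSERT INTO dim_employee (emp_sk, emp_id, name, dependants, valid_from, valid_to)
VALUES (seq_dim_employee.NEXTVAL, 100, 'Tom', 6, SYSDATE, NULL);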

Similar Messages

  • Not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimension under the PHYSICAL tab in ODI 12c

    I am not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimension under the PHYSICAL tab in ODI 12c,
    but I am able to see other IKMs. Please help me: how can I see them?

    Nope, It has not been altered.
    COMPONENT NAME: LKM Oracle to Oracle (datapump)
    COMPONENT VERSION: 11.1.2.3
    AUTHOR: Oracle
    COMPATIBILITY: ODI 11.1.2 and above
    Description:
    - Loading Knowledge Module
    - Loads data from an Oracle Server to an Oracle Server using external tables in the datapump format.
    - This module is recommended when developing interfaces between two Oracle servers when DBLINK is not an option.
    - An External table definition is created on the source and target servers.
    - When using this module on a journalized source table, the Journaling table is first updated to flag the records consumed and then cleaned from these records at the end of the interface.

  • SQL Server Agent Jobs error for Slowly changing dimension

    Hi,
    I have implemented the Slowly Changing Dimension in 5 of my packages for lookup insert/update.
    All the packages run fine in SSDT, and when I deployed the project to SSISDB and ran the packages, all of them ran successfully. But when I created a job out of that and ran the packages, 3 packages ran successfully and 2 packages failed.
    When I opened the All Executions report, I found the following error:
    Message
    Message Source Name
    Subcomponent Name
    Process Provider:Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80004005  Description:
    "Login timeout expired". An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80004005  Description: "A network-related or instance-specific error has occurred while establishing
    a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.". An OLE DB record is available. 
    Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80004005  Description: "Named Pipes Provider: Could not open a connection to SQL Server [53]. ".
    Process Provider
    Slowly Changing Dimension [212]
    Then I opened the Provider package in SSDT, changed the source read record limit from 400,000 to 15,000 in the source query, deployed again and ran it, and the job succeeded; more than 15,000 failed.
    In the 2nd experiment, I removed the Slowly Changing Dimension task, implemented a normal lookup for insert/update, set the source read limit back to 400,000, deployed again and ran it, and the job succeeded.
    Now I am not able to figure out what exactly the problem is with the Slowly Changing Dimension task for more than 15,000 records in a SQL Server Agent job run.
    Can anybody please help me out?
    Thanks
    Bikram

    Hi Vikash,
    As I mentioned in the above post, there are 2 scenarios:
    "Then I opened the Provider package in SSDT, changed the source read record limit from 400,000 to 15,000 in the source query, deployed again and ran it, and the job succeeded; more than 15,000 failed.
    In the 2nd experiment, I removed the Slowly Changing Dimension task, implemented a normal lookup for insert/update, set the source read limit back to 400,000, deployed again and ran it, and the job succeeded."
    That means I am able to connect to SQL Server.
    But if I change the 1st scenario and read 400,000 records, the job fails and shows the above-mentioned error.
    Similarly, in the 2nd scenario, if I implement the SCD lookup, the job fails and shows the above-mentioned error.
    And I am consistently reproducing this.
    Thanks
    Bikram

  • How to implement mapping for a slowly changing dimension

    Hello,
    I don't have any experience with OWB and I need some help.
    I just don't know how to create the ETL process for a slowly changing dimension.
    My scenario is that I have 2 operational systems providing customer information, a staging area, and a DWH with a customer dimension of SCD type 2 (created within OWB).
    The OLTP data is already transferred to the staging area. But what should the mapping for the DWH table look like? Which operators have to be used?
    I have to check whether the customer record is new or just updated. How can I check every attribute? A new record shall be loaded; an updated record shall be historized (as I configured it in the SCD type 2). I just don't know how the trigger of the SCD is activated. Do I have to attempt an update on the trigger attribute so that a new record is created automatically? But with which operator can I do this? What should the mapping look like? Or is this impossible, and do I have to implement this functionality with SQL code only?
    I know how to implement this with SQL code, but my task is to implement it in OWB.
    As you can see, I have not understood the logic of OWB so far, and I hope somebody can help me.
    Greetings,
    Joerg

    Joerg,
    Check the blog post below, which provides good detail, and also check the OWB documentation:
    http://www.rittmanmead.com/2006/09/21/working-through-some-scd-2-and-3-examples-using-owb10gr2/
    Thanks,
    Sam.

  • Facing problem in loading table using IKM Oracle Slowly Changing Dimension

    Hi,
    I am facing a problem loading a dimension table using IKM Oracle Slowly Changing Dimension.
    Following is the setup:
    SRC :- source_table (MSSQL)
    Staging :- staging_table (MSSQL)
    TRGT :- target_table (Oracle)
    -------- source_table
    group_id     int
    group_version_id     int
    name     varchar (255)
    description     varchar (255)
    comments     varchar (2000)
    ref_number     varchar (255)
    is_latest_version     decimal (5)
    is_deleted     decimal (5)
    --------- target_table
    id     number (38,0) - Mapped to <%=odiRef.getObjectName( "L" , "SEQ_NAME" , "D" )%>.nextval
              - Executed on target
              - defined the column as SK in model
    group_id     number (38,0) - defined the column as NK in model     
    group_version_id     number (38,0) - defined the column as NK in model     
    name     varchar (255) - undefined on the model description
    description     varchar (255) - Add row on change
    comments     varchar (2000) - Add row on change
    ref_number     varchar (255) - Add row on change
    is_latest_version     number (1,0) - Add row on change
    is_deleted     number (1,0) - Add row on change
    start_datetime     date     - SYSDATE
                   - Executed on target
                   - Starting Timestamp
    end_datetime     date     - NULL
                   - Executed on target
                   - End Timestamp
    I am using the following KMs:
         LKM SQL to SQL
         IKM Oracle Slowly Changing Dimension
         CKM SQL
    It gives me the following error:
    920:Invalid relational operator

    Hi,
    Yes, this is a run-time error. Currently I am debugging it by checking SNP_SESS_TXT_LOG based on sess_no ID.
    Now, I get the following error.
    I just see the following in the Operator:
    911 : 42000 : java.sql.BatchUpdateException: ORA-00911: invalid character
    911 : 42000 : java.sql.SQLException: ORA-00911: invalid character
    java.sql.BatchUpdateException: ORA-00911: invalid character
         at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:342)
         at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10720)
         at com.sunopsis.sql.SnpsQuery.executeBatch(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    So I do not get any idea of the exact step that is causing the failure.
    Is there any setting in the Operator that I am missing?

  • How to Model Slowly changing dimension ?

    Hello Gurus,
    I would like to know what a slowly changing dimension is, in which scenarios it is used, and how it is modelled.
    Also, please send screenshots ([email protected]).
    Thanks and best wishes,
    raj

    Hi,
    Slowly changing dimensions: data that changes over a period of time, or dimensions whose data changes slowly. E.g. the person responsible for a cost center: you cannot change that person daily; it changes over a period of time.
    Slowly changing dimensions are of different types: Type 0, 1, 2, 3 and 4.
    PB
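    For a quick illustration of the difference between the two most common types (a sketch with made-up table and column names, not taken from this thread): Type 1 simply overwrites the attribute and keeps no history, while Type 2 keeps history by versioning rows, as in the employee sketch earlier in this thread.
    -- Type 1: overwrite in place, no history kept (hypothetical names)
    UPDATE dim_cost_center
       SET responsible_person = 'New Manager'
     WHERE cost_center_id = 4711;
    -- Type 2 would instead close the current row and insert a new version.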

  • Types of dimensions like Line item & Slowly changing dimension?

    Hi All,
    Please explain the line item dimension & the slowly changing dimension.
    Can anyone give scenarios for these dimensions?
    Other than these, we have the data packet, time & unit dimensions.
    Are there any other dimensions?
    Thanks in Advance.

    Hi,
    Line Item Dimension-
    A line item dimension is a dimension that has only one InfoObject assigned. In this case, the DIM table for this dimension is not created, so the lookup of data from this dimension is a little faster. An InfoObject (e.g. sales orders, customers) which can have a huge amount of data can be made a line item dimension, as it will take less time to look up the data while running the reports.
    In general, if the size of the dimension table exceeds 20% of the size of the fact table, we mark it as a line item dimension. It depends upon the scenario. It helps in improving the overall data loading performance.
    From SAP Help:
    1. Line item: This means the dimension contains precisely one characteristic. This means that the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of dimension table. Removing the dimension table has the following advantages:
    - When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
    - A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler. In many cases, the database optimizer can choose better execution plans.
    Nevertheless, it also has a disadvantage: A dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
    Example:
    A line item is an InfoObject such as an order number, for whose attributes one or a few facts are listed in the fact table of the InfoCube.
    Refer this.
    /people/juergen.noe/blog/2007/12/07/spot-on-line-item-dimensions
    should help you out.
    Slowly Changing Dimension
    This comes into the picture when you have something that changes over a period of time. For example, the designation of an employee: say an employee joins as an Associate and gradually gets promoted to Project Manager; this can be put into a slowly changing dimension.
    I hope this is helpful for you. Do assign points if you find it helpful.
    Regards,
    Amar

  • Slow change dimension

    Hi all,
    How does a slowly changing dimension support SIDs?
    The following are the advantages of a slowly changing dimension:
    1) language support
    2) master data independent of the InfoSource
    3) uses numeric indexes for faster access.
    Please explain the above points with an example; I did not get those points.
    Please explain in detail. I will assign full points.
    Thanx & Regards,
    RaviChandra

    Hi Ravichandra,
    I feel that the advantages you have mentioned above also apply to dimensions of a cube in general. Slowly changing dimensions are, in my view, a data modelling concept, not something physically present in the system.
    But thanks to your question I was able to dive deep into this concept and increase my knowledge.
    I found this link helpful http://en.wikipedia.org/wiki/Slowly_changing_dimension
    Best Regards,
    Kush Kashyap

  • Workspace Manager & Slowly Changing Dimension in Data Warehouse

    Has anyone used Workspace Manager in a slowly changing dimension to keep history in a data warehouse?
    What issues have you come across?
    What are advantages?
    What are disadvantages?
    Thanks
    Uli

    Hi Ben,
    thanks for getting back on this and sorry for not updating earlier.
    I had a brief scan through the documentation and Workspace Manager seems to be quite powerful. When I was reading through the docs originally I had the idea that this might be useful to reduce coding and improve performance in a Slowly Changing Dimension (type 2) DW scenario. I would hope for a performance improvement in both loading and querying data. So I thought before I get my hands dirty and do some tests I would check with the community to see if this is something that has been done before.
    To answer your specific questions: I would like to keep every change. I would not need the primary workspace functionality for merging and version control etc. It would just be to keep history.
    Again thanks for your efforts.
    Uli

  • Slowly Changing Dimensions

    How to deal with slowly changing dimensions?
    Thanks in advance
    Bhaskar

    Hi Vijay,
    There are four such scenarios, like:
    Slowly changing dimensions: let us suppose Material (pen) belongs to material group (utility) in 1990 and is changed to another group (daily utility) in 1995.
    1) The user needs data as at the time of the transaction only: then keep both objects in the same dimension table.
    2) The user needs data as at any point in time: the user will give some date, and we need to fetch the group for that particular material in that date frame; then you need to keep the material group as a time-dependent navigational attribute of the material.
    You can find cases like these in the SAP modelling material under the name "tracking history".
    Don't forget to assign points if it is useful; points are always energy boosters for the helpers.

  • Logical column - Aggregation - based on dimensions

    Hi,
    Can anyone explain how the aggregation-based-on-dimension functionality works? I understand it is different from level-based measures.
    Thanks in advance.
    - Priti

    Hi priti,
    Aggregation based on dimension:
    Have a look at this and you will get an idea: http://gerardnico.com/wiki/dat/obiee/measure_level_based
    UPDATED POST
    Sorry, I gave you the wrong link. This is what you're looking for:
    http://gerardnico.com/wiki/dat/obiee/aggregation_based_on_dimension
    Please follow this etiquette, as you're new: http://forums.oracle.com/forums/ann.jspa?annID=939
    Cheers,
    KK

  • Changing Dimensions of menu items in menubar

    I wanted to make the menu items have an auto width instead of a fixed width, so I followed the directions in the Help file under "Change dimensions of menu items" to change the .css. The menu items now seem to stretch to fit the width of the text, but the secondary menus now display horizontally instead of vertically below the top menu item. This happens only in Explorer 6 & 7; it works fine in Firefox.

    Hi Kayo,
    You'll want to checkout these samples:
    http://labs.adobe.com/technologies/spry/samples/menubar/AutoWidthHorizontalMenuBarSample.html
    http://labs.adobe.com/technologies/spry/samples/menubar/AutoWidthVerticalMenuBarSample.html
    to see what browser bugs you're up against. :-)
    --== Kin ==--

  • Different aggregation for different Dimensions

    Hello,
    Is it possible to have different aggregations on different dimensions?
    I have the following situation:
    I have a measure per client and day.
    I'm interested in the maximum per month of the daily sums over clients.
    In the measure properties I can only choose between Maximum and Sum in general, but not per dimension.
    To clarify what I mean, here is some sample data:
                  Client A   Client B
    2014-11-28    7          8         SUM() = 15
    2014-11-29    6          8         SUM() = 14
    2014-11-30    6          10        SUM() = 16  <-- monthly max
    2014-12-01    7          8         SUM() = 15
    2014-12-02    5          12        SUM() = 17  <-- monthly max
    2014-12-03    6          9         SUM() = 15
    This data is stored in my fact table with reference to date and client dimensions.
    This example data would have to be reported as:
    Report on measure:
               Measure
    2014-11    16
    2014-12    16
    Report on measure per client (max per client and month):
               Client A   Client B
    2014-11    7          8
    2014-12    7          12
    Can this be achieved with SSAS? Didn't find any property for that on the measure.
    Best Regards,
    Thomas

    Hi Thomas,
    According to your description, you want to calculate different aggregations for different dimensions, right?
    Based on your scenario, I tested it on the AdventureWorks cube; the query below is for your reference:
    with member [Customer].[Country].[USA & Canada] as
        Aggregate( { [Customer].[Country].&[United States],
                     [Customer].[Country].&[Canada] } )
    member [Measures].[MaxAmount] as
        Max( [Date].[Calendar].CurrentMember.Children, [Measures].[Internet Sales Amount] )
    select { [Customer].[Country].&[United States], [Customer].[Country].&[Canada], [Customer].[Country].[USA & Canada] } on 0,
           [Date].[Calendar].[Month].Members on 1
    from [Adventure Works]
    where [Measures].[MaxAmount]
    Here is a similar thread to yours; please see:
    https://social.technet.microsoft.com/Forums/en-US/1bd493ef-f957-4fd5-916b-ee60639106c3/calculated-member-different-aggregations-on-different-dimensions?forum=sqlanalysisservices
    Regards,
    Charlie Liao
    TechNet Community Support

  • Slowly changing dimensions in Application Express GUI

    I need some kind of slowly changing dimension for an Application Express GUI in order to be able to:
    1) simply allow users to edit 1 row per object in the APEX GUI;
    2) widely use historical data in reporting.
    I.e.:
    A client's last name (address, etc.) was changed. I need to show two points in the reports: before the change and after:
    ID Name Order_date Amount
    12 Clark 1/1/10 500
    12 Johnson 5/1/10 200
    Are there common solutions?

    There are probably a few approaches you can take:
    1) Fine Grained Auditing (FGA) - very extensive and can be completely customized.
    http://download.oracle.com/docs/cd/B19306_01/network.102/b14266/cfgaudit.htm#sthref1766
    2) Triggers - if you have basic needs on a few tables/columns, it may be easier to just roll your own trigger. You can compare the :old to the :new value, and if they differ, write both to your own auditing table.
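    As a rough illustration of the trigger approach (a sketch only; the table, column and audit-table names are made up for this example), you compare :OLD and :NEW and write both values to an audit table when they differ:
    -- Hypothetical audit table and trigger; names are illustrative only
    CREATE TABLE client_name_audit (
      client_id   NUMBER,
      old_name    VARCHAR2(100),
      new_name    VARCHAR2(100),
      changed_on  DATE
    );
    CREATE OR REPLACE TRIGGER trg_client_name_audit
    BEFORE UPDATE OF last_name ON clients
    FOR EACH ROW
    WHEN (OLD.last_name <> NEW.last_name)
    BEGIN
      INSERT INTO client_name_audit (client_id, old_name, new_name, changed_on)
      VALUES (:OLD.client_id, :OLD.last_name, :NEW.last_name, SYSDATE);
    END;
    /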

  • Incremental partition processing with changing dimensions?

    Today I tried out an incremental processing technique on my cube. I have a partition by date which had 100 rows, and an account dimension which had 50 rows.
    I executed a Process Full, then added 10 rows to the fact, modified 2 rows in the dimension, and added 10 rows to the dimension.
    I imagined that I could just do a Process Full on the dimension and a Process Update on the partition, but upon doing that my cube was in an "unprocessed" state, so I had to perform a Process Full. Is there something I did wrong, or do updates to dimensions require full rebuilds of all partitions?
    This was just an example on small data sets; in reality I have 20+ partitions, 500M rows in the fact table, and 90M in the dimension.
    Thanks in advance!
    Craig

    ".. i imagined that I could just do a process full on the dimension and process update on the partition, but upon doing that my cube was in an "unprocessed" state so i had to perform a process full .." - try doing a ProcessUpdate on the dimension
    instead. This paper explains the difference:
    Analysis Services 2005 Processing Architecture
    ProcessUpdate applies only to dimensions. It is the equivalent of incremental dimension processing in Analysis Services 2000. It sends SQL queries to read the entire dimension table and applies the changes—member updates, additions,
    deletions.
    Since ProcessUpdate reads the entire dimension table, it begs the question, "How is it different from ProcessFull?" The difference is that ProcessUpdate does not discard the dimension storage contents. It applies the changes in a "smart" manner that
    preserves the fact data in dependent partitions. ProcessFull, on the other hand, does an implicit ProcessClear on all dependent partitions. ProcessUpdate is inherently slower than ProcessFull since it is doing additional work to apply the changes.
    Depending on the nature of the changes in the dimension table, ProcessUpdate can affect dependent partitions. If only new members were added, then the partitions are not affected. But if members were deleted or if member relationships changed (e.g.,
    a Customer moved from Redmond to Seattle), then some of the aggregation data and bitmap indexes on the partitions are dropped. The cube is still available for queries, albeit with lower performance.
    - Deepak
