Populating Dimension Table

Hi
I want to populate a dimension table. Here is the code I am trying.
This is SV_DIM, the dimension table:
create table SV_DIM (
    SV_KEY NUMBER(15),
    SV_TYPE_CD VARCHAR2(50),
    SV_TYPE_ROLLUP VARCHAR2(50),
    SV_DIM_DESC VARCHAR2(100),
    PRIMARY KEY (SV_KEY)
);
I have created a sequence using:
CREATE SEQUENCE SVC_SEQ
INCREMENT BY 1
START WITH 1
NOMAXVALUE
NOMINVALUE
NOCYCLE
NOORDER;
CREATE TABLE SV_TS (
    SV_GRP_CD VARCHAR2(2) NOT NULL,
    ALPHA_SVC_CD VARCHAR2(2) NOT NULL,
    SV_PKG_CD VARCHAR2(4) NOT NULL,
    PRIMARY KEY (ALPHA_SVC_CD)
);
Sample data for the SV_TS table:
SV_GRP_CD   ALPHA_SVC_CD   SV_PKG_CD
KO          KO             0101
KO          KP             0102
KO          KB             0103
GO          GT             0104
GO          GW             0105
GO          GL             0106
TO          TN             1901
TO          TX             0125
FT          FS             0301
FT          FP             0302
FT          FO             0111
I want to populate my dimension table from this SV_TS table.
My dimension table should look like this:
SV_KEY   SV_TYPE_CD   SV_TYPE_ROLLUP   SV_DIM_DESC
1        KO           NULL             SVC
2        GO           NULL             SVC
3        FT           NULL             SVC
4        TO           NULL             SVC
I am writing the query like this:
insert into SV_DIM
select
SVC_SEQ.NEXTVAL SV_KEY,
DISTINCT SV_GRP_CD SV_TYPE_CD,
NULL SV_TYPE_ROLLUP,
'SVC' SV_DIM_DESC
from
SV_TS
But I am getting an error at DISTINCT:
ERROR: ORA-00936: missing expression (Oracle version 10.2)
Thanks
Sri

SriSri wrote:
but my doubt is...
In which conditions we have to use this type of queries...?
I need some clear idea where and when we can use this type of queries. That's it.

You can't use a sequence in aggregation queries, i.e. those that group or aggregate the data:
select my_sequence.nextval, max(1)
from dual
ORA-02287: sequence number not allowed here

select distinct my_sequence.nextval
from dual
ORA-02287: sequence number not allowed here

If the query has an aggregate function (MIN, MAX, SUM), a GROUP BY clause, or the DISTINCT keyword, you cannot use NEXTVAL. There are other restrictions in the documentation.
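For the original question, one common workaround (a minimal sketch against the SV_DIM/SV_TS tables above) is to do the DISTINCT in an inline view so that NEXTVAL is only applied in the outer query:

insert into SV_DIM (SV_KEY, SV_TYPE_CD, SV_TYPE_ROLLUP, SV_DIM_DESC)
select SVC_SEQ.NEXTVAL,                 -- sequence applied outside the DISTINCT
       t.SV_TYPE_CD,
       NULL,
       'SVC'
from  (select distinct SV_GRP_CD as SV_TYPE_CD
       from SV_TS) t;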

Similar Messages

  • Populating Fact and Dimension Tables

    Hi there,
New to OLAP. I've defined my schema design and would like to start populating my dimension tables, along with the foreign key references in my fact tables to my dimension tables.
Before I begin writing a bunch of PL/SQL scripts to do this, I am wondering if any tools exist to help expedite this type of process.
As a start I will be linking my fact table to two dimensions: one being a standard date dimension, the other being a user dimension consisting of hours and minutes. Any examples or links would be greatly appreciated.
    Thanks!

Hi Chandra, it would be useful to read a little bit about BI before starting to work with a BI tool.
    Anyway, I'll try to help you.
A dimension is a structure that will be joined to the fact table so that users can analyze data from different points of view and at different levels. The fact table will most probably (not always) have data at the lowest possible level. So, let's suppose you have SALES data for different cities in a country (USA). Your fact table will be:
    CITY --- SalesValue
    Miami --- 100
    NYC --- 145
    Los Angeles --- 135 (because Arnold is not managing the State very well ;-) )
You can then have a dimension table with the "Country" structure. This is a table containing the different cities along with their states, counties, and finally the country. So your dimension table would look like:
NATION --- STATE --- COUNTY --- CITY
USA --- Florida --- Miami --- Miami
USA --- NY --- NY --- NYC
USA --- California --- LA --- Los Angeles
This dimension will allow you to aggregate the data at different levels. That is, the user will not only be able to see data at the lowest level, but also at the County, State, and Country level.
    You will join your fact table (field CITY) with your dimension table (field CITY). The tool will then help you with the aggregation of the values.
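As a rough illustration (table and column names are made up for this example), the tool effectively runs something like:

SELECT d.state,
       SUM(f.salesvalue) AS total_sales    -- CITY-level facts rolled up to STATE
FROM   sales_fact f
JOIN   country_dim d ON d.city = f.city    -- join fact to dimension on CITY
GROUP  BY d.state;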
I hope this helps.
    J.-

  • Dimension Table populating data

    Hi
    I am in the process of creating a data mart with a star schema.
    The star schema has been defined with the fact and dimension tables and the primary and foreign keys.
I have written the script for one of the dimensions and would like to know: when the job runs on a daily basis, should it truncate and rebuild the dimension table every day, or should it only add new records? If it should only add
new records, how is this done?
    I assume that the fact table job is run once a day and only new data is added to it?
    Thanks

It will depend on the volume of your dimensions. In most of our projects we do not truncate; we update only changed rows based on a fingerprint (to make the comparison faster than column by column), and insert new rows (SCD1). For SCD2 we apply a
similar approach for updates and inserts, and expire rows in batch (one UPDATE for all applicable rows at the end of the package/ETL).
If your dimension is very large, you can consider truncating all data, or deleting only the affected modified rows (based on the business key) and reloading them, but you have to be careful to maintain the same surrogate keys referenced by your
existing facts.
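As a minimal T-SQL sketch of the SCD1 pattern described above (DimCustomer, stg.Customer, and the Fingerprint column are invented names; the surrogate key is assumed to be an IDENTITY column):

MERGE dbo.DimCustomer AS d
USING stg.Customer AS s
    ON d.BusinessKey = s.BusinessKey
WHEN MATCHED AND d.Fingerprint <> s.Fingerprint THEN   -- update only rows that actually changed
    UPDATE SET d.CustomerName = s.CustomerName,
               d.Fingerprint  = s.Fingerprint
WHEN NOT MATCHED BY TARGET THEN                        -- brand-new business keys become inserts
    INSERT (BusinessKey, CustomerName, Fingerprint)
    VALUES (s.BusinessKey, s.CustomerName, s.Fingerprint);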
    HTH,
    Please, mark this post as Answer if this helps you to solve your question/problem.
    Alan Koo | "Microsoft Business Intelligence and more..."
    http://www.alankoo.com

  • Problem with populating a fact table from dimension tables

My aim: there are 5 dimension tables that have been created:
    Student->s_id primary key,upn(unique pupil no),name
    Grade->g_id primary key,grade,exam_level,values
    Subject->sb_id primary key,subjectid,subname
    School->sc_id primary key,schoolno,school_name
    year->y_id primary key,year(like 2008)
    s_id,g_id,sb_id,sc_id,y_id are sequences
    select * from student;
    S_ID UPN FNAME COMMONNAME GENDER DOB
    ==============================
    9062 1027 MELISSA ANNE       f  13-OCT-81
    9000 rows selected
    select * from grade;
          G_ID GRADE      E_LEVEL         VALUE
            73 A          a                 120
            74 B          a                 100
            75 C          a                  80
            76 D          a                  60
            77 E          a                  40
            78 F          a                  20
            79 U          a                   0
            80 X          a                   0
18 rows selected
These are basically the dimensional views.
    Now according to the specification given, need to create a fact table as facts_table which contains all the dim tables primary keys as foreign keys in it.
The problem is as follows (I am going to consider a smaller example than the actual number of dimension tables). Let's say there are 2 dim tables, student and grade, with s_id and g_id as primary keys:
create materialized view facts_table(s_id,g_id)
as
select s.s_id, g.g_id
from   (select distinct s_id from student) s
,      (select distinct g_id from grade) g;
This results in massive duplication as there is no join between the two tables. But basically there is nothing in common between the two tables to join on; how do I solve this?
Consider the amount of duplication involved when I do this for 5 tables; that is why there is not enough tablespace.
I was hoping, if there is no other way, to create a fact table with just one column initially:
create materialized view facts_table(s_id)
as
select s_id
from student;
then
alter materialized view facts_table add column g_id number;
and then populate this g_id column by fetching all the g_id values from the grade table using some sort of loop, even though we should not use PL/SQL. I don't know if this works?
    Any suggestions.

Basically you are quite right to say that without any logical common columns between the dimension tables, it will produce results implying that every student studied every subject and got every grade, which is rubbish.
I am confused as to whether the dimension tables can contain duplicated columns, i.e. a column like upn (unique pupil no) that I also copy into another table so that a join can be placed when writing queries. I don't know whether that's right.
    These are the required queries from the star schema
    Design a conformed star schema which will support the following queries:
a. For each year give the actual number of students entered at A-level in the whole country / in each LEA / in each school.
    b. For each A-level subject, and for each year, give the percentage of students who gained each grade.
    c. For the most recent 3 years, show the 5 most popular A-level subjects in that year over the whole country (measure popularity as the number of entries for that subject as a percentage of the total number of exam entries).
I wrote the queries earlier based on dimension tables which were highly duplicated; they were like:
    student
    =======
    upn
    school
    school
    ======
    school(this column substr gives lea,school and the whole is country)
    id(id of school)
    student_group
    =============
    upn(unique pupil no)
    gid(group id)
    grade
    year_col
    ========
    year
    sid(subject id)
    gid(group id)
    exam_level
    id(school id)
    grades_list
    ===========
    exam_level
    grade
    value
    subject
    ========
    sid
    subject
    compulsory
These were the dimension tables I created earlier, and as you can see many columns are duplicated in other tables (like upn). This structure effectively gets the data out of the schema, as there are common columns on which we can link.
But a colleague suggested that these dimension tables are wrong: they should not be this way and should not contain duplicated columns.
    select      distinct count(s.upn) as st_count
    ,     y.year
    ,     c.sn
    from      student_info s
    ,     student_group sg
    ,     year_col y
    ,     subject sb
    ,     grades_list g
    ,     country c
    where      s.upn=sg.upn
    and     sb.sid=y.sid
    and     sg.gid=y.gid
    and     c.id=y.id
    and     c.id=s.school
    and      y.exam_lev=g.exam_level
    and      g.exam_level='a'
    group by y.year,c.sn
order by y.year;
This is the code for the 1st query.
I am confused now as to which structure is right: my earlier dimension tables which I am describing here, or the new dimension tables which I explained above.
If what I am describing now is right, i.e. the dimension tables and the columns are all right, then I just need to create a fact table with foreign keys of all the dimension tables.
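For what it's worth, if there is a transactional results feed that carries the natural keys, the usual approach is to build the fact by joining that feed to each dimension to pick up the surrogate keys, rather than cross-joining the dimensions. A hedged sketch (stg_results and its columns are invented here, and a plain table is assumed rather than a materialized view):

INSERT INTO facts_table (s_id, g_id, sb_id, sc_id, y_id)
SELECT st.s_id, g.g_id, sb.sb_id, sc.sc_id, y.y_id
FROM   stg_results r                               -- hypothetical source feed of exam results
JOIN   student st ON st.upn       = r.upn
JOIN   grade   g  ON g.grade      = r.grade
                 AND g.exam_level = r.exam_level
JOIN   subject sb ON sb.subjectid = r.subjectid
JOIN   school  sc ON sc.schoolno  = r.schoolno
JOIN   year    y  ON y.year       = r.exam_year;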

  • How to Maintain Surrogate Key Mapping (cross-reference) for Dimension Tables

    Hi,
    What would be the best approach on ODI to implement the Surrogate Key Mapping Table on the STG layer according to Kimball's technique:
    "Surrogate key mapping tables are designed to map natural keys from the disparate source systems to their master data warehouse surrogate key. Mapping tables are an efficient way to maintain surrogate keys in your data warehouse. These compact tables are designed for high-speed processing. Mapping tables contain only the most current value of a surrogate key— used to populate a dimension—and the natural key from the source system. Since the same dimension can have many sources, a mapping table contains a natural key column for each of its sources.
    Mapping tables can be equally effective if they are stored in a database or on the file system. The advantage of using a database for mapping tables is that you can utilize the database sequence generator to create new surrogate keys. And also, when indexed properly, mapping tables in a database are very efficient during key value lookups."
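As a hedged sketch of what such a mapping table might look like (all names are illustrative, not from Kimball's text or any ODI artifact):

CREATE TABLE customer_key_map (
    customer_wid     NUMBER       NOT NULL,   -- current warehouse surrogate key
    src1_natural_key VARCHAR2(30),            -- natural key from source system 1
    src2_natural_key VARCHAR2(30)             -- natural key from source system 2
);
CREATE SEQUENCE customer_wid_seq;

-- After each dimension load, insert mappings for natural keys not seen before:
INSERT INTO customer_key_map (customer_wid, src1_natural_key)
SELECT customer_wid_seq.NEXTVAL, s.cust_no
FROM   stg_customer s
WHERE  NOT EXISTS (SELECT 1 FROM customer_key_map m
                   WHERE  m.src1_natural_key = s.cust_no);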
    We have a requirement to implement cross-reference mapping tables with Natural and Surrogate Keys for each dimension table. These mappings tables will be populated automatically (only inserts) during the E-LT execution, right after inserting into the dimension table.
    Someone have any idea on how to implement this on ODI?
    Thanks,
    Danilo

    Hi,
first of all, please avoid bolding text. That said, according to Kimball (if I remember well) this is a 1:1 mapping, so no surrogate key is needed.
    After that personally you could use Lookup Table
    http://www.odigurus.com/2012/02/lookup-transformation-using-odi.html
or make a simple outer join filtering by your "Active_Flag" column (remember that this filter needs to be inside your outer join).
    Let us know
    Francesco

  • How fact links to Dimension Tables.

    Hi All,
Can anyone explain how the ROW_WID of the dimension table and the related column in the fact table both get populated with the same values in different mappings?
If we want to develop new fact and dimension tables, how should I proceed with the ROW_WID and the join between fact and dimension?
    Thanks in Advance.
    Regards
    Vishwanath

    Hi
Basically, in BI Apps all dimensions are populated first, and every dimension table has a ROW_WID column apart from the primary key combination (datasource id + integration id). If you open a seeded mapping you will see this ROW_WID column is populated using a reusable sequence generator transformation, so every time you run this mapping the ROW_WIDs change.
Now coming to the facts: as a normal rule of thumb in ETL, you run the facts after all the dimensions. If you open any seeded fact mapping you will observe that every fact has a WID associated with each dimension it joins to, so if a fact joins to 3 dimensions there will be 3 WIDs, one per dimension. These WIDs are populated using a lookup on the dimension table, extracting the ROW_WID for the relevant ID of the dimension. So you get the value of the ROW_WID of the dimension in the WID column of the fact. Please let me know if you have questions.
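In plain SQL terms, the lookup described above is roughly the following (W_CUSTOMER_D is just an example dimension; the staging table and its column names are assumptions):

SELECT f.integration_id,
       NVL(d.row_wid, 0) AS customer_wid      -- dimension ROW_WID resolved for the fact row
FROM   stg_gl_fact f
LEFT JOIN w_customer_d d
       ON  d.integration_id    = f.customer_id
       AND d.datasource_num_id = f.datasource_num_id;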
    With Regards
    Venkatesh

  • Populating Fact tables

    Hi,
    First time working on DW. Have got my staging and Dimension table setups.
    About to start working on Fact tables and all examples I see online are using SSIS packages with several lookup tables.
Is there any reason this cannot be done using a stored procedure, or do I have to develop an SSIS package to populate the fact tables?
    Thanks,

    Hello,
In fact we are using stored procedures for all our manipulations and for populating the fact table. The fact table is partitioned and works great with no issues.
We call the stored procedures and PL/SQL procedures from Informatica. I don't find any issue in processing, but I find it difficult to debug 5000 lines of code.
If the transformations are simple and can be handled in a stored procedure, then you can go for it; but SSIS is designed for ETL.
Considering maintenance overhead, SSIS gives better control over the data flow, because business logic and debugging are much easier than doing the same in a stored procedure.
If you have partitions and properly designed data structures, you can definitely think of using an SP.
    -Prashanth

  • How to find the common dimension tables used in Finance AR/PR and GL,

    Hi All,
I need to find the common dimension tables used in the Finance modules (AP, AR, PO, GL...). Separate teams are working on populating each module, and we need to find the dependencies so that our loads don't impact others.
    Thanks!
    OBI

    Hi Guys,
    Thanks for all your answers.
Yes... you are all right. We can list the used tables up to a certain extent. Anyhow, I have done some R&D to derive the SQL, which is given below:
SELECT TABLE_NAME FROM USER_TABLES
MINUS
SELECT DISTINCT UPPER(REFERENCED_NAME)
FROM user_dependencies
WHERE referenced_type = 'TABLE'
AND UPPER(NAME) IN
  (SELECT DISTINCT UPPER(object_name) FROM user_objects
   WHERE UPPER(object_type) IN
     ('MATERIALIZED VIEW',
      'PACKAGE',
      'PACKAGE BODY',
      'PROCEDURE',
      'TRIGGER',
      'VIEW',
      'FUNCTION'))
UNION
SELECT UT.TABLE_NAME FROM
  (SELECT TABLE_NAME FROM USER_TABLES
   MINUS
   SELECT DISTINCT UPPER(REFERENCED_NAME)
   FROM user_dependencies
   WHERE referenced_type = 'TABLE'
   AND UPPER(NAME) IN
     (SELECT DISTINCT UPPER(object_name) FROM user_objects
      WHERE UPPER(object_type) IN
        ('MATERIALIZED VIEW',
         'PACKAGE',
         'PACKAGE BODY',
         'PROCEDURE',
         'TRIGGER',
         'VIEW',
         'FUNCTION'))
   AND REFERENCED_OWNER = (SELECT sys_context('USERENV', 'CURRENT_SCHEMA') FROM dual)
  ) UT,
  (SELECT * FROM USER_SOURCE
   WHERE NAME IN
     (SELECT DISTINCT NAME FROM USER_SOURCE
      WHERE TYPE NOT IN ('TYPE')
      AND UPPER(TEXT) LIKE '%EXECUTE IMMEDIATE%')
  ) US
WHERE
UPPER(US.TEXT) LIKE '%'||UPPER(UT.TABLE_NAME)||'%'
AND
(UPPER(US.TEXT) NOT LIKE '%--%');
The above SQL query can list unused tables, also checking dynamic SQL statements (only up to a point).
Once we have extracted the list of unused tables, a manual check is also worthwhile, to verify that removing them will not impact the business applications.
    Regards,
    Subramanian G

  • Fact and dimension table partition

My team is implementing a new data warehouse. I would like to know when we should plan to partition the fact and dimension tables: before data comes in, or after?

    Hi,
It is recommended to partition the fact table (where we will have huge data). Automate the partitioning so that each day it creates a new partition to hold the latest data (split the previous partition into 2). Best practice is to create partitions on transaction
timestamps, load the incremental data into an empty table (Table_IN), and then switch that data into the main table (Table). Make sure your tables (Table and Table_IN) are on the same filegroup.
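A minimal T-SQL sketch of that load-and-switch pattern (table names and the partition number are illustrative; the staging table must match the target's schema, sit on the same filegroup, and carry a CHECK constraint matching the target partition's range):

INSERT INTO dbo.Sales_IN (SaleDate, Amount)   -- load the increment into the empty staging table
SELECT SaleDate, Amount
FROM   stg.DailySales;

ALTER TABLE dbo.Sales_IN SWITCH TO dbo.Sales PARTITION 42;   -- metadata-only, near-instant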
    Refer below content for detailed info
    Designing and Administrating Partitions in SQL Server 2012
A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within
a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval
and management is much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
    lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of Database Options page in SSMS or by using the LOCK_ESCALATION option
    of the ALTER TABLE statement.
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
    scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
    and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these
    steps:
1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
dialog box, as displayed in Figure 3.13 (Selecting a partitioning column).
The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options
include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA will map the partitions of the table being partitioned to the desired filegroups. Either an existing partition scheme should
be used or a new one needs to be created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range of values for each partition on the grid.
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
    date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at
least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language statements (DML) and Data
    Definition Language statements (DDL) may also run short of memory when processing on a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available.
Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause
filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer
the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012
samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
    NULL are supported, NOT
    NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
    Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
    We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
    ON Date_Range_PartScheme1 (Xn_Hst_ID);
GO
CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
    ON main_filegroup;
GO
ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
• With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6- to 9-months old (Q3), partition3 holds data that is 3- to 6-months old (Q2), and partition4 holds recent data less than 3-months old.
• With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
• Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
• RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
5. We can use SWITCH to push the new quarter's data into the spot of partition1.
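In T-SQL, one slide of the window looks roughly like this (function, scheme, and table names are invented for the sketch; boundary dates are illustrative):

ALTER PARTITION SCHEME PS_ByQuarter NEXT USED FG_Current;          -- filegroup for the partition created by SPLIT
ALTER PARTITION FUNCTION PF_ByQuarter() SPLIT RANGE ('20120701');  -- step 2: split the empty edge partition
ALTER TABLE dbo.TransactionHistory SWITCH PARTITION 4
    TO dbo.TransactionHistory_Stage;                               -- step 3: switch out the oldest quarter
ALTER PARTITION FUNCTION PF_ByQuarter() MERGE RANGE ('20110701');  -- step 4: merge the two empty partitions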
    Tip
    Use the $PARTITION system
    function to determine where a partition function places values within a range of partitions.
    Some best practices to consider for using a slide window partition include the following:
    • Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
    data sets, drop the partition with the oldest data.
• Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition split when
loading in new data, and the partition merge after unloading old data, do not cause data movement.
    • Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
• Don't load a partition until its range boundary is met. For example, don't create and load a partition meant to hold data that is
one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement
    issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Incremental load into the Dimension table

    Hi,
I have a problem doing the incremental load of the dimension table. Before loading into the dimension table, I would like to check the data already in it.
In my dimension table I have one NOT NULL surrogate key; the other dimension columns are nullable. The NOT NULL surrogate key I am populating with the Sequence Generator.
To do the incremental load I have done the following.
I made a lookup into the dimension table and looked for a key. The key from the lookup table I passed to the expression operator. In the expression operator I created one field and hard-coded one flag based on the key from the lookup table. I passed this flag to the filter operator, along with the rest of the fields from the source.
By doing this I am not able to pass the new records to the dimension table.
    Can you please help me.
    I have another question also.
    How do i update one not null key in the fact table.
    Thanks
    Vinay

    Hi Mark,
Thanks for your help in solving my problem. I thought I would share more information by giving the SQL.
Below are the 2 SQL statements I would like to achieve through OWB.
Both of the following tasks need to be accomplished after loading the fact table.
    task1:
    UPDATE fact_table c
    SET c.dimension_table_key =
    (SELECT nvl(dimension_table.dimension_table_key,0)
    FROM src_dimension_table t,
    dimension_table dimension_table
    WHERE c.ssn = t.ssn(+)
    AND c.date_src_key = to_number(t.date_src(+), '99999999')
    AND c.time_src_key = to_number(substr(t.time_src(+), 1, 4), '99999999')
    AND c.wk_src = to_number(concat(t.wk_src_year(+), concat(t.wk_src_month(+), t.wk_src_day(+))), '99999999')
    AND nvl(t.field1, 'Y') = nvl(dimension_table.field1, 'Y')
    AND nvl(t.field2, 'Y') = nvl(dimension_table.field2, 'Y')
    AND nvl(t.field3, 'Y') = nvl(dimension_table.field3, 'Y')
    AND nvl(t.field4, 'Y') = nvl(dimension_table.field4, 'Y')
    AND nvl(t.field5, 'Y') = nvl(dimension_table.field5, 'Y')
    AND nvl(t.field6, 'Y') = nvl(dimension_table.field6, 'Y')
    AND nvl(t.field7, 'Y') = nvl(dimension_table.field7, 'Y')
    AND nvl(t.field8, 'Y') = nvl(dimension_table.field8, 'Y')
AND nvl(t.field9, 'Y') = nvl(dimension_table.field9, 'Y'))
WHERE c.dimension_table_key = 0;
    fact table in the above sql is fact_table
    dimesion table in the above sql is dimension_table
    source table for the dimension table is src_dimension_table
    dimension_table_key is a not null key in the fact table
    task2:
    update fact_table cf
    set cf.key_1 =
    (select nvl(max(p.key_1),0) from dimension_table p
         where p.field1 = cf.field1
    and p.source='YY')
    where cf.key_1 = 0;
    fact table in the above sql is fact_table
    dimesion table in the above sql is dimension_table
    key_1 is a not null key in the fact table
Is it possible to achieve the above tasks through Oracle Warehouse Builder (OWB)? I created the mappings for loading the dimension table and fact table and they are working fine, but the above two queries I am not able to achieve through OWB. I would be thankful if you can help me out.
    Thanks
    Vinay

  • DAC not populating FACT table for the GL module - W_GL_OTHER_F (zero rows)

All - We did our FULL loads from Oracle Financials 11i into OBAW and got most of the dimension tables populated. HOWEVER, I am not seeing anything populated into the fact table W_GL_OTHER_F (zero rows). Following is a list of all the dims/facts I am focusing on for the GL module.
    W_BUSN_LOCATION_D     (8)
    W_COST_CENTER_D     (124)
    W_CUSTOMER_FIN_PROFL_D     (6,046)
    W_CUSTOMER_LOC_D     (4,611)
    W_DAY_D     (11,627)
    W_GL_ACCOUNT_D     (28,721)
    W_INT_ORG_D     (171)
    W_INVENTORY_PRODUCT_D     (395)
    W_LEDGER_D     (8)
    W_ORG_D     (3,364)
    W_PRODUCT_D     (21)
    W_PROFIT_CENTER_D     (23)
    W_SALES_PRODUCT_D     (395)
    W_STATUS_D     (7)
    W_SUPPLIER_D     (6,204)
    W_SUPPLIER_PRODUCT_D     (0)
    W_TREASURY_SYMBOL_D     (0)
    W_GL_OTHER_F <------------------------------------- NO FACT DATA AT ALL
Question for the group: are there any specific settings which might be preventing us from getting data loaded into our FACT tables? I was doing research and found the following on the internet:
Map Oracle General Ledger account numbers to Group Account Numbers using the file file_group_acct_names_ora.csv. Is this something that is necessary?
    Any help / guidance / pointers are greatly appreciated.
    Regards,

There are many things to configure before your first full load.
For the configuration steps see Oracle Business Intelligence Applications Configuration Guide for Informatica PowerCenter Users:
    - http://download.oracle.com/docs/cd/E13697_01/doc/bia.795/e13766.pdf (7951)
    - http://download.oracle.com/docs/cd/E14223_01/bia.796/e14216.pdf (796)
    3 Configuring Common Areas and Dimensions
    3.1 Source-Independent Configuration Steps
    Section 3.1.1, "How to Configure Initial Extract Date"
    Section 3.1.2, "How to Configure Global Currencies"
    Section 3.1.3, "How to Configure Exchange Rate Types"
    Section 3.1.4, "How to Configure Fiscal Calendars"
    3.2 Oracle EBS-Specific Configuration Steps
    Section 3.2.1, "Configuration Required Before a Full Load for Oracle EBS"
    Section 3.2.1.1, "Configuration of Product Hierarchy (Except for GL, HR Modules)"
    Section 3.2.1.2, "How to Assign UNSPSC Codes to Products"
    Section 3.2.1.3, "How to Configure the Master Inventory Organization in Product Dimension Extract for Oracle 11i Adapter (Except for GL & HR Modules)"
    Section 3.2.1.4, "How to Map Oracle GL Natural Accounts to Group Account Numbers"
    Section 3.2.1.5, "How to make corrections to the Group Account Number Configuration"
    Section 3.2.1.6, "About Configuring GL Account Hierarchies"
    Section 3.2.1.7, "How to set up the Geography Dimension for Oracle EBS"
    Section 3.2.2, "Configuration Steps for Controlling Your Data Set for Oracle EBS"
    Section 3.2.2.1, "How to Configure the Country Region and State Region Name"
    Section 3.2.2.2, "How to Configure the State Name"
    Section 3.2.2.3, "How to Configure the Country Name"
    Section 3.2.2.4, "How to Configure the Make-Buy Indicator"
    Section 3.2.2.5, "How to Configure Country Codes"
    5.2 Configuration Required Before a Full Load for Financial Analytics
    Section 5.2.1, "Configuration Steps for Financial Analytics for All Source Systems"
    Section 5.2.2, "Configuration Steps for Financial Analytics for Oracle EBS"
    Section 5.2.2.1, "About Configuring Domain Values and CSV Worksheet Files for Oracle Financial Analytics"
    Section 5.2.2.2, "How to Configure Transaction Types for Oracle General Ledger and Profitability Analytics (for Oracle EBS R12)"
    Section 5.2.2.3, "How to Configure Transaction Types for Oracle General Ledger and Profitability Analytics (for Oracle EBS R11i)"
    Section 5.2.2.4, "How to Specify the Ledger or Set of Books for which GL Data is Extracted"
    5.3 Configuration Steps for Controlling Your Data Set
    Section 5.3.1, "Configuration Steps for Financial Analytics for All Source Systems"
    Section 5.3.2, "Configuration Steps for Financial Analytics for Oracle EBS"
    Section 5.3.2.1, "How GL Balances Are Populated in Oracle EBS"
    Section 5.3.2.2, "How to Configure Oracle Profitability Analytics Transaction Extracts"
    Section 5.3.2.3, "How to Configure Cost Of Goods Extract (for Oracle EBS 11i)"
    Section 5.3.2.4, "How to Configure AP Balance ID for Oracle Payables Analytics"
    Section 5.3.2.5, "How to Configure AR Balance ID for Oracle Receivables Analytics and Oracle General Ledger and Profitability Analytics"
    Section 5.3.2.6, "How to Configure the AR Adjustments Extract for Oracle Receivables Analytics"
    Section 5.3.2.7, "How to Configure the AR Schedules Extract"
    Section 5.3.2.8, "How to Configure the AR Cash Receipt Application Extract for Oracle Receivables Analytics"
    Section 5.3.2.9, "How to Configure the AR Credit-Memo Application Extract for Oracle Receivables Analytics"
    Section 5.3.2.10, "How to Enable Project Analytics Integration with Financial Subject Areas"
    Also, another reason I had was that if you choose not to filter by set of books/type then you must still include the default values set of books id 1 and type NONE in the list on the DAC parameters (this is a consequence of strange logic in the decode statements for the sql in the extract etl).
I would recommend you run the extract fact SQL prior to running your execution plan to sanity check how many rows you expect to get. For example, to see how many journal lines, use Informatica Designer to view mapplet SDE_ORA11510_Adaptor.mplt_BC_ORA_GLXactsJournalsExtract.2.SQ_GL_JE_LINES.3 in mapping SDE_ORA11510_Adaptor.SDE_ORA_GLJournals, then cut and paste the SQL into your favourite tool such as SQL Developer. You will have to fill in the $$ variables yourself.

  • In Answers am seeing "Folder is Empty" for Logical Fact and Dimension Table

    Hi All,
I am working in OBIEE Answers; all of a sudden, when I clicked on a Logical Fact table it showed "folder is empty". I restarted all the services and tried again, but it still shows the same for Logical Fact and Dimension tables, although I am able to see all my reports in Shared Folders. I restarted the machine too but no change. Please help me resolve this issue.
    Thanks in Advance.
    Regards,
    Rajkumar.

    First of all, follow the forum etiquette :
    http://forums.oracle.com/forums/ann.jspa?annID=939
React to, or mark as answer, the posts that users gave.
    And for your question, you must check the log for a possible corrupt catalog :
    OracleBIData_Home\web\log\sawlog0.log

  • Help on  Setting logical Levels  in Fact tables and on Dimension tables

    Hi all
Can anybody provide any blogs or any kind of material on what exactly levelling is?
After creating the dimensional hierarchies we need to set the logical levels for the LTS of fact tables, right? So what is the difference between setting logical levels on fact tables and setting levelling on dimension tables?
    Any kind of help is appreciated
    Thanks
    Xavier.

I have read these blogs, but my question is:
Setting the logical levels in the LTS of fact tables I understood.
But we can also set the logical levels for dimensions, right? I didn't understand why we set the logical levels for dimensions. Is there any reason why we go with levelling at dimensions?
    Thanks
    Xavier

  • Best practice when FACT and DIMENSION table are the same

    Hi,
In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
It seems the best solution to this is to have an alias of the fact/transaction table, i.e. 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
This is not built on a data warehouse, so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions.

I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit; you just need to make sure that the MVs are updated when the source is updated.
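For the Oracle case, a minimal sketch (trans is a stand-in name for the actual transaction table; a fast-refreshable MV needs the MV log):

CREATE MATERIALIZED VIEW LOG ON trans WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW trans_dim_mv
REFRESH FAST ON COMMIT
AS
SELECT trans_id, trans_type, trans_desc   -- just the columns the dimension needs
FROM   trans;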
    -Domnic

  • How to show current date in a column of dimension table in ssas

    Hi,
I have 2 dimension tables (ALLWORK and YOURWORK) and 1 measure table (STATS). In the ALLWORK dim table I have 4 columns, one of them being END_DATE. Based on END_DATE and the current date I want to create a named calculation field. To achieve this, I
want to show the current date in a 5th column (of the ALLWORK dim table) as a calculated member (as a named calculation won't show me the correct date).
The problem is, I am not able to create the column in the dimension table. While creating the calculated member, I am not able to select the ALLWORK dimension table; it always selects MEASURE by default. Please help.
    thanks
    Tarique
    thanks and regards Tarique Aslam

    Hi Tarique,
    According to your description, you want to add a column to your dimension to show the current date, right?
In the data source view, we can add a named calculation to your table. A named calculation is a SQL expression represented as a calculated column. This expression appears and behaves as a column in the table. A named calculation lets you extend the relational
schema of existing tables or views in a data source view without modifying the tables or views in the underlying data source.
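For example (a hedged T-SQL sketch; END_DATE is the existing column, everything else is illustrative), the named calculation expression could be:

DATEDIFF(day, END_DATE, GETDATE())   -- days elapsed since END_DATE

or simply CAST(GETDATE() AS date) for the current date. Note that the expression is evaluated when the dimension is processed, not at query time.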
    Reference
    http://msdn.microsoft.com/en-in/library/ms174859.aspx
    http://devmau5.wordpress.com/2010/03/25/the-getdate-of-mdx/
    If I have anything misunderstand, please point it out.
    Regards,
    Charlie Liao
    TechNet Community Support
