Help designing star schema facts and dimensions

Hi,
I have the following summarized table that I need to convert to a star schema.
I am totally confused about how to create the dimensions. Any help getting started with this will be highly appreciated.
I am in urgent need of it.
Month
AccountId
Cust_id
City
State
Zip
Zip4
AccountMonthsOnBooks
APR
APY
AccountType
OriginationChannel
RewardType
BankProductType
BranchId
CycleEndDate
CycleStartDate
DaysInCycle
DaysInStatement
DirectDepositLastUseDate
FlagElectronicBilling
FlagOverdrawnThisMonth
FlagOverdrawnThisCycle
FlagPaidOverdraft
NumberofTimesOverdraft
OnlineBankingEnrolledIndicator
OnlineBankingLastUseDate
WebLoginCount
OpenDate
OverdraftAmount
OverdraftCount
OverdraftLastDate
OverdraftProtectionIndicator
OverDraftProtectionSourceType
PaidOverdraftAmount
PaidOverdraftCount
PopulationClassIndicator
ProductId
RewardAccruedAmount
RewardBalance
RewardCurrency
RewardRedeemedAmount
RewardsDateEnrolled
RewardsExpirationAmount
RewardTypeBank
RiskScore
RiskScorePlugged
StatementEndDate
StatementStartDate
SubProductId
BalanceADBAmount
BalanceADBNonSweepNonZBAAmount
CycleBeginningBalance
CycleEndingBalance
MonthBeginningBalance
MonthEndingBalance
StatementBeginningBalance
StatementEndingBalance
NonSystemCreditTotalAmount
NonSystemCreditTotalCount
NonSystemDebitTotalAmount
NonSystemDebitTotalCount
ACHCreditTotalAmount
ACHCreditTotalCount
ACHDebitTotalAmount
ACHDebitTotalCount
ACHPPDBillPayCreditAmount
ACHPPDBillPayCreditCount
ACHPPDBillPayDebitAmount
ACHPPDBillPayDebitCount
ACHPPDPayrollDirectDepositCreditAmount
ACHPPDPayrollDirectDepositCreditCount
ACHPPDPayrollDirectDepositDebitAmount
ACHPPDPayrollDirectDepositDebitCount
ATMCreditAmount
ATMCreditCount
ATMDebitAmount
ATMDebitCount
ATMOffUsCashDepositAmount
ATMOffUsCashDepositCount
ATMOffUsCashWithdrawalAmount
ATMOffUsCashWithdrawalCount
ATMOffUsCheckDepositAmount
ATMOffUsCheckDepositCount
ATMOffUsTransferCreditAmount
ATMOffUsTransferCreditCount
ATMOffUsTransferDebitAmount
ATMOffUsTransferDebitCount
ATMOnUsCashDepositAmount
ATMOnUsCashDepositCount
ATMOnUsCashWithdrawalAmount
ATMOnUsCashWithdrawalCount
ATMOnUsCheckDepositAmount
ATMOnUsCheckDepositCount
ATMOnUsTransferCreditAmount
ATMOnUsTransferCreditCount
ATMOnUsTransferDebitAmount
ATMOnUsTransferDebitCount
BranchCheckDepositAmount
BranchCheckDepositCount
BranchCheckWithdrawalAmount
BranchCheckWithdrawalCount
BranchCreditTotalAmount
BranchCreditTotalCount
BranchDepositAmount
BranchDepositCount
BranchDebitTotalAmount
BranchDebitTotalCount
BranchMiscellaneousCreditAmount
BranchMiscellaneousCreditCount
BranchMiscellaneousDebitAmount
BranchMiscellaneousDebitCount
BranchWithdrawalAmount
BranchWithdrawalCount
BranchTransferCreditAmount
BranchTransferCreditCount
BranchTransferDebitAmount
BranchTransferDebitCount
ChargeOffDebitAmount
ChargeOffDebitCount
CheckCausingOverdraftCreditAmount
CheckCausingOverdraftCreditCount
CheckCausingOverdraftDebitAmount
CheckCausingOverdraftDebitCount
CheckCreditTotalAmount
CheckCreditTotalCount
CheckDebitTotalAmount
CheckDebitTotalCount
CheckReturnedCreditAmount
CheckReturnedCreditCount
CheckReturnedDebitAmount
CheckReturnedDebitCount
CheckStopPaymentCreditAmount
CheckStopPaymentCreditCount
CheckStopPaymentDebitAmount
CheckStopPaymentDebitCount
CreditTotalAmount
CreditTotalCount
DebitTotalAmount
DebitTotalCount
FeeACHBillPayAmount
FeeACHBillPayCount
FeeAnnualAmortizedAmount
FeeAnnualAmount
FeeAnnualCount
FeeATMOffUsAmount
FeeATMOffUsBalanceInquiryAmount
FeeATMOffUsBalanceInquiryCount
FeeATMOffUsCount
FeeATMOffUsTransferAmount
FeeATMOffUsTransferCount
FeeATMOnUsCheckInquiryAmount
FeeATMOnUsCheckInquiryCount
FeeDormantAmount
FeeDormantCount
FeeEarlyWithdrawalAmount
FeeEarlyWithdrawalCount
FeeForeignTransactionAmount
FeeForeignTransactionCount
FeeMonthlyAmount
FeeMonthlyCount
FeeNSFAmount
FeeNSFCount
FeeODPTransferAmount
FeeODPTransferCount
FeeOtherAmount
FeeOtherCount
FeeOverdraftAmount
FeeExtendedOverdraftAmount
FeeOverdraftCount
FeeExtendedOverdraftCount
FeePOSPINPurchaseAmount
FeePOSPINPurchaseCount
FeeReturnedCheckAmount
FeeReturnedCheckCount
FeeStopPaymentAmount
FeeStopPaymentCount
FeeTotalAmount
FeeTotalATMCreditAmount
FeeTotalATMCreditCount
FeeTotalATMDebitAmount
FeeTotalATMDebitCount
FeeTotalNonPenaltyAmount
FeeTotalNonPenaltyCount
FeeTotalPenaltyAmount
FeeTotalPenaltyCount
FeeWaiverACHBillPayAmount
FeeWaiverAnnualAmount
FeeWaiverATMOffUsAmount
FeeWaiverATMOffUsBalanceInquiryAmount
FeeWaiverDormantAmount
FeeWaiverForeignTransactionAmount
FeeWaiverMonthlyAmount
FeeWaiverNSFAmount
FeeWaiverODPTransferAmount
FeeWaiverOtherAmount
FeeWaiverOverdraftAmount
FeeWaiverExtendedOverdraftAmount
FeeWaiverStopPaymentAmount
FeeWaiverTotalAmount
FeeWaiverTotalNonPenaltyAmount
FeeWaiverTotalPenaltyAmount
FeeWaiverWireTransferAmount
FeeWireTransferAmount
FeeWireTransferCount
FraudTransactionAmount
FraudTransactionCount
InterestPaymentCreditAmount
InterestPaymentDebitAmount
InternetTransferCreditAmount
InternetTransferCreditCount
InternetTransferDebitAmount
InternetTransferDebitCount
ODPTransferCreditAmount
ODPTransferCreditCount
ODPTransferDebitAmount
ODPTransferDebitCount
OtherCreditAmount
OtherCreditCount
OtherCreditReversalAmount
OtherDebitAmount
OtherDebitCount
OtherDebitReversalAmount
OtherTransferCreditAmount
OtherTransferCreditCount
OtherTransferDebitAmount
OtherTransferDebitCount
PhoneTransferCreditAmount
PhoneTransferCreditCount
PhoneTransferDebitAmount
PhoneTransferDebitCount
POSSIGForeignCreditAmount
POSSIGForeignCreditCount
POSSIGForeignDebitAmount
POSSIGForeignDebitCount
POSPINCreditTotalAmount
POSPINCreditTotalCount
POSPINDebitTotalAmount
POSPINDebitTotalCount
POSSIGCreditTotalAmount
POSSIGCreditTotalCount
POSSIGDebitTotalAmount
POSSIGDebitTotalCount
ReversalFeeACHBillPayAmount
ReversalFeeEarlyWithdrawalAmount
ReversalFeeForeignTransactionAmount
ReversalFeeMonthlyAmount
ReversalFeeNSFAmount
ReversalFeeODPTransferAmount
ReversalFeeOtherAmount
ReversalFeeOverdraftAmount
ReversalFeeExtendedOverdraftAmount
ReversalFeePOSPINPurchaseAmount
ReversalFeeReturnedCheckAmount
ReversalFeeStopPaymentAmount
ReversalFeeTotalAmount
ReversalFeeTotalNonPenaltyAmount
ReversalFeeTotalPenaltyAmount
ReversalFeeWireTransferAmount
SubstituteCheckDepositAmount
SubstituteCheckDepositCount
SubstituteCheckWithdrawalAmount
SubstituteCheckWithdrawalCount
SystemCreditTotalAmount
SystemCreditTotalCount
SystemDebitTotalAmount
SystemDebitTotalCount
TargetSweepCreditAmount
TargetSweepCreditCount
TargetSweepDebitAmount
TargetSweepDebitCount
WaiverFeeReturnedCheckCount
WireTransferFedCreditAmount
WireTransferFedCreditCount
WireTransferFedDebitAmount
WireTransferFedDebitCount
WireTransferInternalCreditAmount
WireTransferInternalCreditCount
WireTransferInternalDebitAmount
WireTransferInternalDebitCount
WireTransferInternationalCreditAmount
WireTransferInternationalCreditCount
WireTransferInternationalDebitAmount
WireTransferInternationalDebitCount
ZBACreditAmount
ZBACreditCount
ZBADebitAmount
ZBADebitCount
AllocatedEquityAmount
CapitalCharge
ContraExpenseCreditLossRecoveryAmount
ContraExpenseFraudRecoveryAmount
EquityCreditAmount
ExpenseCorporateTaxAmount
ExpenseCreditLossWriteOffAmount
ExpenseDepositInsuranceAmount
ExpenseFraudWriteOffAmount
ExpenseMarketingAmount
ExpenseNetworkFeesAmount
ExpenseOperatingAmount
ExpenseOriginationAmortizedAmount
ExpenseOtherAmount
ExpenseTotalAmount
RevenueInterchangeTotalAmount
RevenuePINInterchangeAmount
RevenueSIGInterchangeAmount
RevenueInterestAmount
RevenueOtherAmount
RevenueNetIncomeAmount
RevenuePreTaxIncomeAmount
RevenueTotalAmount
StatusCode
ClosedCode
ClosedDate

user13407709 wrote:
I have the following summarized table that I need to convert to the star schema.
I am totally confused how to create the dimensions. Any help to start with this will be highly appreciated.

The first step should be to tell whoever gave you this task that you have no idea what you are doing and set their expectations accordingly.

I am in an urgent need of it.

Then it would no longer be urgent, and you would have the time to read and learn this:
http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/toc.htm
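To make that concrete, here is one possible first cut (a sketch only; the column groupings, surrogate keys, and data types below are assumptions, not a definitive design). The descriptive, low-cardinality columns move into dimensions, and the monthly Amount/Count/Balance columns stay in a periodic-snapshot fact with one row per account per month:

```sql
-- Hypothetical dimensions pulled from the summarized table.
CREATE TABLE DimDate (
    DateKey        INT PRIMARY KEY,   -- e.g. 201201 for January 2012
    MonthStartDate DATE,
    CalendarYear   INT,
    MonthOfYear    INT
);

CREATE TABLE DimCustomer (
    CustomerKey INT PRIMARY KEY,      -- surrogate key
    Cust_id     VARCHAR(20),
    City        VARCHAR(50),
    State       CHAR(2),
    Zip         CHAR(5),
    Zip4        CHAR(4)
);

CREATE TABLE DimAccount (
    AccountKey         INT PRIMARY KEY,
    AccountId          VARCHAR(20),
    AccountType        VARCHAR(30),
    OriginationChannel VARCHAR(30),
    OpenDate           DATE,
    StatusCode         CHAR(2),
    ClosedCode         CHAR(2),
    ClosedDate         DATE
);

CREATE TABLE DimProduct (
    ProductKey      INT PRIMARY KEY,
    ProductId       VARCHAR(20),
    SubProductId    VARCHAR(20),
    BankProductType VARCHAR(30),
    RewardType      VARCHAR(30)
);

-- Monthly periodic-snapshot fact: one row per account per month.
CREATE TABLE FactAccountMonthSnapshot (
    DateKey          INT NOT NULL REFERENCES DimDate(DateKey),
    AccountKey       INT NOT NULL REFERENCES DimAccount(AccountKey),
    CustomerKey      INT NOT NULL REFERENCES DimCustomer(CustomerKey),
    ProductKey       INT NOT NULL REFERENCES DimProduct(ProductKey),
    BalanceADBAmount DECIMAL(18,2),
    FeeTotalAmount   DECIMAL(18,2),
    DebitTotalCount  INT,
    -- ... the remaining Amount/Count/Balance columns become measures here ...
    PRIMARY KEY (DateKey, AccountKey)
);
```

A Branch dimension, a junk dimension for the Flag/Indicator columns, and mini-dimensions for fast-changing attributes (RiskScore, APR) are natural refinements once this first cut works.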

Similar Messages

  • How to design fact and dimension tables

    Hi Team,
    Please provide me some useful links!!
    I have a parent and a child table, each with around 8 million records (a 1-to-many relation). These two tables need relations with many other master tables to generate any report.
    I want to design a fact and dimension(s) for these parent and child tables.
    Please help me: what are the things I have to keep in mind when designing data warehouse fact and dimension tables?
    Thanks,
    Ali

    Hi, any update on this?

  • How to design many to many relationship in the fact and dimension

    This is the problem in my project, as the subject says, and I want to know how to implement it in OWB. I use Warehouse Builder 10. Thanks.

    You may design and load whatever DB model you want.
    But if you set a unique key, you may find some integrity issues. I wouldn't do a many-to-many relationship between facts and dimensions. This could cause you lots of headaches when users start to submit queries using these tables. You'll probably face performance issues.
    Regards,
    Marcos

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of fact and dimension tables: before the data comes in, or after?

    Hi,
    It is recommended to partition the fact table (where we will have huge data). Automate the partitioning so that each day it creates a new partition to hold the latest data (splitting the previous partition into two). Best practice is to partition on transaction
    timestamps, so load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
    Refer below content for detailed info
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within
    a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval
    and management is much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
    lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option on the Database Options page in SSMS or the LOCK_ESCALATION option
    of the ALTER TABLE statement.
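    As a concrete illustration (the table name is hypothetical), partition-level lock escalation is set per table:

    ```sql
    -- Let the engine escalate locks to a single partition
    -- instead of the whole table (hypothetical table name).
    ALTER TABLE dbo.FactTransaction SET (LOCK_ESCALATION = AUTO);
    ```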
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
    scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
    and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
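    Sketched in Transact-SQL, steps 2 through 4 might look like this (the function, scheme, filegroup, and table names are illustrative assumptions; step 1's filegroups are assumed to already exist):

    ```sql
    -- Step 2: a RANGE RIGHT function on a date column, one boundary per quarter.
    CREATE PARTITION FUNCTION pf_ByQuarter (datetime)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01', '2012-07-01', '2012-10-01');

    -- Step 3: map the five resulting partitions to filegroups.
    CREATE PARTITION SCHEME ps_ByQuarter
    AS PARTITION pf_ByQuarter
    TO (FG_Older, FG_Q1, FG_Q2, FG_Q3, FG_Q4);

    -- Step 4: create the table on the partition scheme.
    CREATE TABLE dbo.TransactionHistory (
        Xn_Hst_ID   int      NOT NULL,
        CreatedDate datetime NOT NULL
    ) ON ps_ByQuarter (CreatedDate);
    ```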
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
    any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
    dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
    Figure 3.13. Selecting a partitioning column.
    The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options
    include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the tables being partitioned to a desired filegroup. Either an existing partition scheme can
    be used or a new one created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range for the values of the partitions. The
    ranges and settings on the grid include the following:
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
    date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at
    least 16Gb of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language statements (DML) and Data
    Definition Language statements (DDL) may also run short of memory when processing on a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a query engine enhancement called partition elimination. SQL Server uses partition elimination automatically when it is available.
    Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause
    filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer
    the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012
    samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
    NULL are supported, NOT
    NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
    Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
    We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
    ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
    ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter's data into the spot of partition1.
    Tip
    Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
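    One quarterly slide under a RIGHT strategy can be sketched like this (the function, scheme, table, and filegroup names are hypothetical; the staging table must already exist, empty, on the matching filegroup):

    ```sql
    -- 1. SPLIT: add a new empty partition on the leading edge.
    ALTER PARTITION SCHEME ps_ByQuarter NEXT USED FG_NewQuarter;
    ALTER PARTITION FUNCTION pf_ByQuarter() SPLIT RANGE ('2013-01-01');

    -- 2. SWITCH: move the oldest populated partition out to a staging
    --    table for archiving or purging (a metadata-only operation).
    ALTER TABLE dbo.TransactionHistory
        SWITCH PARTITION 2 TO dbo.TransactionHistory_Unload;

    -- 3. MERGE: remove the now-empty boundary on the trailing edge,
    --    returning to the original number of partitions.
    ALTER PARTITION FUNCTION pf_ByQuarter() MERGE RANGE ('2012-01-01');
    ```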
    Some best practices to consider for using a sliding window partition include the following:
    • Load the newest data into a heap, and then add indexes after the load is finished. Delete the oldest data or, when working with very large data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition split (when loading new data) and merge (after unloading old data) do not cause data movement.
    • Do not split or merge a partition already populated with data, because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don't load a partition until its range boundary is met. For example, don't create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Table as fact and dimension

    Hi,
    Can one table act as a fact in one subject area and act as a dimension in another subject area? Thanks.

    Hi
    I confirm Stijn Gabriels' post.
    You don't have to create an alias on your physical table; otherwise the request will generate an alias in the SQL for nothing! However, in your logical layer, you will create 2 logical tables: one for the fact, one for the dimension. Both of them will have the same source: your unique physical table.
    Let's take an example: suppose you have only 2 tables in your data warehouse: 1 fact table with degenerate dimension attributes (so a table with fact and dimension data), which we'll call "revenue", and 1 dimension table for... "Time", for example. We'll call it "Time".
    Your conceptual model (on paper) is a star schema with 1 fact table (revenue_fact) and 2 dimension tables (time, and revenue_carac).
    In your OBIEE physical layer :
    - you import the 2 tables "revenue" and "time" from your database.
    - you link "revenue" with "time"
    In your OBIEE logical layer :
    - you create a logical table called "Dim Time", based on the "Time" physical table and you do what you want with it (hierarchy...)
    - you create a logical table called "Dim Revenue Carac", based on the "revenue" physical table, and you do what you want with attributes
    - you create a logical table called "Fact revenue", based on the "revenue" physical table, and you do what you want with measures and aggregation
    - you link the 2 logical dimension table with the logical fact table
    And that's all. Now, let's see which kind of SQL OBIEE will generate if you want to display the measure "revenue" with the attribute "revenue_carac" and the attribute "year".
    Select Sum(R.revenue_measure) , R.revenue_carac , T.year
    From revenue R , time T
    Where R.time_id = T.id
    Group by R.revenue_carac , T.year
    If you set an alias in your physical layer, the request will be this (and you don't want it):
    Select Sum(R1.revenue_measure) , R2.revenue_carac , T.year
    From revenue R1, revenue R2 , time T
    Where R1.time_id = T.id
    And R1.id = R2.id
    Group by R2.revenue_carac , T.year
    Same results, but with a useless self-join on the same physical table.

  • In Answers am seeing "Folder is Empty" for Logical Fact and Dimension Table

    Hi All,
    I am working on OBIEE Answers. All of a sudden, when I clicked on a Logical Fact table it showed "folder is empty". I restarted all the services and tried again; it still shows the same for Logical Fact and Dimension tables, but I am able to see all my reports in Shared Folders. I restarted the machine too, but no change. Please help me resolve this issue.
    Thanks in Advance.
    Regards,
    Rajkumar.

    First of all, follow the forum etiquette:
    http://forums.oracle.com/forums/ann.jspa?annID=939
    Reply to, or mark as answer, the posts that users gave you.
    And for your question, you must check the log for a possibly corrupt catalog:
    OracleBIData_Home\web\log\sawlog0.log

  • Delete Fact and Dimension Tables

    Hi All,
              Small question! When deleting data from an InfoCube, it asks whether to delete just the Dimension tables or both the Fact and Dimension tables.
    What is the difference between these two?
    Thank you.

    Hi Visu,
    The dimension table contains references to characteristic values by way of SID and DIM IDs, whereas the fact table contains the numbers (the key figures). If you delete just the fact table, the numbers go away, yet the DIM IDs remain intact, so a reload is faster. But if you are convinced that this data is not correct either, or you want a clean slate to start the reload again, then delete both the Fact and Dim tables. When you reload, the DIM IDs will be created again.
    Hope this helps...

  • Unable to create fact and dimension tables from the Tools menu in EIS console

    Hi All,
    In the EIS console, I am unable to create the fact table and the dimension tables to produce my OLAP model from the Tools menu, whereas I am able to create them by dragging from the left panel where the tables are displayed. I am getting the below error message:
    "An exception occured while retrieving OLAP model metadata. Please verify you are connected to the catalog and try again"
    Any help appreciated.
    Thanks,
    Raja

    I have fact and dimension tables on one server, and I moved them to another server. If I do that, what happens to Essbase and to the EPM System as a whole? I want to know how to do this and what I can expect.
    Please reply.

  • How to have fact and dimension which are not joined in same report

    Hi All,
    Suppose we have 2 dimensions and 1 of them is joined to the fact, but we want to pull both of these dimensions along with the fact. The fact column then shows null, which we don't want. How can we include the 2nd dimension, which is not joined to the fact, in the same report? Specifically, I have a column called "equipment scheduled time" that can be shown only with the equipment dimension, but I want to pull equipment, geography, and equipment scheduled time in one report. If I do it now, it shows equipment scheduled time as null because it is not joined with the geography dimension. I want the value of equipment scheduled time to show even when the geography dimension is present.
    Can someone please help me.
    Thanks in Advance

    Hi ,
    Can you please tell me what the fact and dimension tables of your requirement are? If you want to display data from 2 dimensions and a fact table, you have to establish joins between the facts and dimensions.
    mark if helpful/correct...
    thanks,
    prassu

  • Join multiple facts and dimensions for reporting issue

    Hi,
    I have a report requirements and need to use multiple fact tables and unconformed dimensions as described below
    Fact table: F1,F2,F3
    Dimensions tables: D1.....D9
    F1:(joined to) D1,D2,D3,D4
    F2::(joined to)D1,D2,D5,D6
    F3::(joined to)D1,D2,D7,D8
    D7::(joined to)D9,D8 (dimension D7 joined to two other dimensions D9 and D8
    I'm trying to use columns from almost all the fact and dimension tables but getting "Unable to navigate requested expression. Please fix the metadata consistency warnings."
    The repository is consistent, with no errors or warnings.
    How can I configure the repository to develop reports using all fact tables and dimensions?
    I appreciate your help.
    Thanks
    Jay.

    You can find the same issue discussed in
    "Logic of queries in OBIEE"
    or
    "Connection between 2 fact tables".
    Follow the link:
    http://there-n-back-again.blogspot.com/2009/04/oracle-analytics-how-to-report-over-non.html
    Set the level to 'All' or 'Total' on dim_xxx for those facts you need in your report from facta and factb.
    Regards
    Nicolae
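
    The 'All'/'Total' level setting works because the BI server aggregates each fact to the grain of the conformed dimensions it shares with the other facts, and then stitches the two result sets together on those shared keys. A rough pure-Python sketch of that stitch, under hypothetical keys and measures (this is an illustration of the idea, not OBIEE's actual implementation):

    ```python
    # Sketch of "stitching" two facts over conformed dimensions D1/D2:
    # each fact is first rolled up to the shared (d1, d2) grain, then the
    # two aggregates are merged with a full-outer-style key union.
    from collections import defaultdict

    # Rows: (d1_key, d2_key, measure) -- sample data, names hypothetical
    fact_a = [("east", "2024", 100), ("east", "2024", 50), ("west", "2024", 70)]
    fact_b = [("east", "2024", 7), ("north", "2024", 3)]

    def aggregate(rows):
        """Roll a fact up to the conformed (d1, d2) grain."""
        totals = defaultdict(float)
        for d1, d2, measure in rows:
            totals[(d1, d2)] += measure
        return totals

    def stitch(agg_a, agg_b):
        """Merge the two aggregates on the union of their shared keys."""
        keys = set(agg_a) | set(agg_b)
        return {k: (agg_a.get(k), agg_b.get(k)) for k in keys}

    result = stitch(aggregate(fact_a), aggregate(fact_b))
    # e.g. result[("east", "2024")] == (150.0, 7.0)
    ```

    Keys present in only one fact get None for the other measure, which is why both facts can appear in one report once the non-conformed dimensions are set to their 'Total' level.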

  • Left outer join on Fact and dimension table.

    Hi all, I have a fact table F with account number and a few measures as columns.
    I also have a dimension D with account number and account name columns.
    A few account numbers from the fact don't exist in dimension D, but they still need to show up in the report.
    How do I left outer join fact F and dimension D in the RPD?
    I have a report where I need to show account number, account name, and measures.
    If D doesn't have a certain account number, I need to convert that account number from F to a string and show it in the account name column of the report.
    Can you please help?

    OK, I tried this:
    Driving table: Fact, left outer join -- didn't work.
    Driving table: Dimension D, left outer join -- didn't work either.
    In either case, the physical query is D left outer join Fact F, and the rows are omitted.
    Then I tried this:
    Driving table: Fact, right outer join.
    Now this gives me an error:
    [Sybase][ODBC Driver]Internal Error. [nQSError: 16001] ODBC error state: 00000 code: 30128 message: [Sybase][ODBC Driver]Data overflow. Increase specified column size or buffer size. [nQSError: 16011] ODBC error occurred while executing SQLExtendedFetch to retrieve the results of a SQL statement. (HY000)
    I checked all the columns; everything matches the database table types and sizes.
    I am pulling Fact.account number, Dimension.account name, and Fact.measures. I see this error each time I pull Fact.Account number.

  • Fact and Dimension tables

    Hi all,
    Please let me know the relation between fact and dimension tables.
    regards
    chandra kanth.

    Hi Chandra, it would be useful if you read a little bit about BI before starting to work with a BI tool.
    Anyway, I'll try to help you.
    A dimension is a structure that is joined to the fact table so that users can analyze data from different points of view and at different levels. The fact table will most probably (though not always) hold data at the lowest possible level. So, let's suppose you have SALES data for different cities in a country (USA). Your fact table will be:
    CITY --- SalesValue
    Miami --- 100
    NYC --- 145
    Los Angeles --- 135 (because Arnold is not managing the State very well ;-) )
    You can then have a dimension table with the "Country" structure. That is, a table containing the different cities along with their counties, states, and finally the country. So your dimension table would look like:
    NATION --- STATE --- COUNTY --- CITY
    USA --- Florida --- Miami --- Miami
    USA --- NY ---- NY --- NYC
    USA --- California --- LA --- Los Angeles
    This dimension will allow you to aggregate the data at different levels. That is, the user will not only be able to see data at the lowest level, but also at the county, state, and country levels.
    You then join your fact table (field CITY) with your dimension table (field CITY). The tool will then help you with the aggregation of the values.
    I hope this helps.
    J.-
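
    The join-and-aggregate step described above can be sketched in a few lines of SQL. A sqlite3 example using the same sample data (the dimension row for Los Angeles is taken to be in California):

    ```python
    # Join the city-level fact to the geography dimension and roll the
    # sales up to the STATE level.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE fact_sales (city TEXT, sales_value INTEGER);
        CREATE TABLE dim_geo (nation TEXT, state TEXT, county TEXT, city TEXT);
        INSERT INTO fact_sales VALUES ('Miami',100), ('NYC',145), ('Los Angeles',135);
        INSERT INTO dim_geo VALUES
            ('USA','Florida','Miami','Miami'),
            ('USA','NY','NY','NYC'),
            ('USA','California','LA','Los Angeles');
    """)

    rows = con.execute("""
        SELECT g.state, SUM(f.sales_value) AS sales
        FROM fact_sales f
        JOIN dim_geo g ON g.city = f.city
        GROUP BY g.state
        ORDER BY g.state
    """).fetchall()
    # One row per state; grouping by g.nation instead gives the country total.
    ```

    Swapping the GROUP BY column between city, county, state, and nation is exactly the "different levels" of analysis the dimension enables.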

  • Fact and Dimension vs Normalization

    I have a huge table with some columns and a massive amount of data. I want to automate the entire system, hence I need to modify the arrangement of the table. I have 2 options: 1) arrange it into fact and dimension tables, or 2) normalize the table, which is already in 2NF, to 3NF.
    I need help working out which of the options is better, and some case studies supporting the decision. Thank you.

    I would say it is a matter of concept, mainly depending upon how the data is modified.
    In an OLTP environment you would use 3NF.
    In a DWH environment you would use denormalized dimension tables (a cube) + fact tables.
    Many single-row updates and parallel user DML = OLTP.
    Large bulk inserts/deletes, (almost) no updates, mostly select access = DWH.
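
    To make the contrast concrete, here is a small sqlite3 sketch of the same lookup data in both shapes (all table and column names are made up for illustration): 3NF keeps city and state in separate related tables, while the star-schema dimension denormalizes them into one wide row that joins to the fact in a single hop.

    ```python
    # Option 2 (3NF): normalized lookup tables.
    # Option 1 (star): one denormalized dimension built from them.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        -- 3NF: each attribute stored once, related by keys
        CREATE TABLE state (state_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE city  (city_id INTEGER PRIMARY KEY, name TEXT,
                            state_id INTEGER REFERENCES state);
        CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT,
                               city_id INTEGER REFERENCES city);
        INSERT INTO state VALUES (1, 'Florida');
        INSERT INTO city  VALUES (10, 'Miami', 1);
        INSERT INTO customer VALUES (100, 'Ann', 10);

        -- Star schema: flatten the chain into one wide dimension row
        CREATE TABLE dim_customer AS
            SELECT cu.cust_id, cu.name AS cust_name,
                   ci.name AS city, st.name AS state
            FROM customer cu
            JOIN city  ci ON ci.city_id  = cu.city_id
            JOIN state st ON st.state_id = ci.state_id;
    """)

    row = con.execute("SELECT * FROM dim_customer").fetchone()
    # One wide row: cust_id, cust_name, city, state
    ```

    The 3NF form makes single-row updates cheap and consistent (OLTP); the wide dimension makes large analytical scans and aggregations cheap (DWH), at the cost of redundancy.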

  • Need a document about how to move the fact and dimension tables to different servers

    Hello Experts,
    I need a detailed document on how to move the fact and dimension tables to different servers. Please help me out with this.
    Thanks in advance....

    You still haven't told anyone what products besides Essbase you are using, without which this is an impossible question to answer.
    https://forums.oracle.com/thread/2585515
    https://forums.oracle.com/thread/2585171
    Are you connecting to these tables from Essbase with a load rule / ODBC?  Using Studio?  Using Integration Services?  Any Drill-Through reporting set up?
    This may sound harsh, but if you truly don't know how to answer any of these questions you should probably not be anywhere near this task...

  • Many to many relationships between Fact and Dimension

    Hi All,
    I have to solve two kinds of many-to-many relationships between fact and dimension:
    1. A many-to-many relationship between fact and dimension using a bridge table, with FKs in the fact and the dimension to the bridge table's PK.
    2. A many-to-many relationship between fact and dimension using a bridge table and an intersection table, with an FK in the fact to the bridge table, and 2 FKs in the intersection table to the dimension and to the bridge table.
    I need help implementing these (what the mappings have to look like) in OWB 9.2.0.2.8.
    Thanks,
    Aurelian Cojocaru

    Aurelian,
    Unfortunately, you cannot implement this in the dimensional model. You would have to use relational tables and relationships between them in order to implement your scenario.
    Thanks,
    Mark.
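
    At the relational level, scenario 1 (the bridge table) looks like the following sqlite3 sketch. The names and the weight column are hypothetical; the weight is the usual Kimball-style allocation factor that keeps the measure from being double-counted when one fact row fans out to several dimension members:

    ```python
    # Bridge table resolving a many-to-many between fact and dimension:
    # the fact carries a group key into the bridge, and the bridge fans
    # out to dimension members with an allocation weight.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dim_customer (cust_id INTEGER, cust_name TEXT);
        CREATE TABLE bridge (group_id INTEGER, cust_id INTEGER, weight REAL);
        CREATE TABLE fact (group_id INTEGER, amount REAL);
        INSERT INTO dim_customer VALUES (1, 'Ann'), (2, 'Bob');
        -- group 10 is shared by Ann and Bob, 50/50; group 11 is Bob alone
        INSERT INTO bridge VALUES (10, 1, 0.5), (10, 2, 0.5), (11, 2, 1.0);
        INSERT INTO fact VALUES (10, 200.0), (11, 40.0);
    """)

    rows = con.execute("""
        SELECT c.cust_name, SUM(f.amount * b.weight) AS allocated
        FROM fact f
        JOIN bridge b ON b.group_id = f.group_id
        JOIN dim_customer c ON c.cust_id = b.cust_id
        GROUP BY c.cust_name
        ORDER BY c.cust_name
    """).fetchall()
    # The weights sum to 1.0 per group, so the grand total is preserved.
    ```

    Scenario 2 just inserts the intersection table between the bridge and the dimension; the join chain gains one more hop but the allocation logic is the same.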
