Star Schema Image-Urgent
Hi All
Can anyone tell me where in the BI system I can save a .jpeg image of an InfoCube star schema?
I have been going through the Metadata Repository. I can see the star schema of my cube there, but when I copy it into a Word file most of the dimension parts are missing, so I am not able to copy the complete star schema. I cannot reduce the zoom either, so is there any way to save it?
Is there any such Export button in the Metadata Repository window?
Points are assured.
Thanks
There is an option to export an HTML document for the object being displayed: menu Extras --> HTML Export. The export option is also available in the context menu of the object. Try this if you need it for a specific object only.
Helpful Links:
[http://help.sap.com/saphelp_nw70/helpdata/EN/4e/ea683c25e9096de10000000a114084/frameset.htm]
Hope this helps.
Cheers,
Sumit
Similar Messages
-
Help to design Star Schema facts and dimensions
Hi
I have the following summarized table that I need to convert to a star schema.
I am totally confused about how to create the dimensions. Any help getting started will be highly appreciated.
I am in urgent need of it.
Month
AccountId
Cust_id
City
State
Zip
Zip4
AccountMonthsOnBooks
APR
APY
AccountType
OriginationChannel
RewardType
BankProductType
BranchId
CycleEndDate
CycleStartDate
DaysInCycle
DaysInStatement
DirectDepositLastUseDate
FlagElectronicBilling
FlagOverdrawnThisMonth
FlagOverdrawnThisCycle
FlagPaidOverdraft
NumberofTimesOverdraft
OnlineBankingEnrolledIndicator
OnlineBankingLastUseDate
WebLoginCount
OpenDate
OverdraftAmount
OverdraftCount
OverdraftLastDate
OverdraftProtectionIndicator
OverDraftProtectionSourceType
PaidOverdraftAmount
PaidOverdraftCount
PopulationClassIndicator
ProductId
RewardAccruedAmount
RewardBalance
RewardCurrency
RewardRedeemedAmount
RewardsDateEnrolled
RewardsExpirationAmount
RewardTypeBank
RiskScore
RiskScorePlugged
StatementEndDate
StatementStartDate
SubProductId
BalanceADBAmount
BalanceADBNonSweepNonZBAAmount
CycleBeginningBalance
CycleEndingBalance
MonthBeginningBalance
MonthEndingBalance
StatementBeginningBalance
StatementEndingBalance
NonSystemCreditTotalAmount
NonSystemCreditTotalCount
NonSystemDebitTotalAmount
NonSystemDebitTotalCount
ACHCreditTotalAmount
ACHCreditTotalCount
ACHDebitTotalAmount
ACHDebitTotalCount
ACHPPDBillPayCreditAmount
ACHPPDBillPayCreditCount
ACHPPDBillPayDebitAmount
ACHPPDBillPayDebitCount
ACHPPDPayrollDirectDepositCreditAmount
ACHPPDPayrollDirectDepositCreditCount
ACHPPDPayrollDirectDepositDebitAmount
ACHPPDPayrollDirectDepositDebitCount
ATMCreditAmount
ATMCreditCount
ATMDebitAmount
ATMDebitCount
ATMOffUsCashDepositAmount
ATMOffUsCashDepositCount
ATMOffUsCashWithdrawalAmount
ATMOffUsCashWithdrawalCount
ATMOffUsCheckDepositAmount
ATMOffUsCheckDepositCount
ATMOffUsTransferCreditAmount
ATMOffUsTransferCreditCount
ATMOffUsTransferDebitAmount
ATMOffUsTransferDebitCount
ATMOnUsCashDepositAmount
ATMOnUsCashDepositCount
ATMOnUsCashWithdrawalAmount
ATMOnUsCashWithdrawalCount
ATMOnUsCheckDepositAmount
ATMOnUsCheckDepositCount
ATMOnUsTransferCreditAmount
ATMOnUsTransferCreditCount
ATMOnUsTransferDebitAmount
ATMOnUsTransferDebitCount
BranchCheckDepositAmount
BranchCheckDepositCount
BranchCheckWithdrawalAmount
BranchCheckWithdrawalCount
BranchCreditTotalAmount
BranchCreditTotalCount
BranchDepositAmount
BranchDepositCount
BranchDebitTotalAmount
BranchDebitTotalCount
BranchMiscellaneousCreditAmount
BranchMiscellaneousCreditCount
BranchMiscellaneousDebitAmount
BranchMiscellaneousDebitCount
BranchWithdrawalAmount
BranchWithdrawalCount
BranchTransferCreditAmount
BranchTransferCreditCount
BranchTransferDebitAmount
BranchTransferDebitCount
ChargeOffDebitAmount
ChargeOffDebitCount
CheckCausingOverdraftCreditAmount
CheckCausingOverdraftCreditCount
CheckCausingOverdraftDebitAmount
CheckCausingOverdraftDebitCount
CheckCreditTotalAmount
CheckCreditTotalCount
CheckDebitTotalAmount
CheckDebitTotalCount
CheckReturnedCreditAmount
CheckReturnedCreditCount
CheckReturnedDebitAmount
CheckReturnedDebitCount
CheckStopPaymentCreditAmount
CheckStopPaymentCreditCount
CheckStopPaymentDebitAmount
CheckStopPaymentDebitCount
CreditTotalAmount
CreditTotalCount
DebitTotalAmount
DebitTotalCount
FeeACHBillPayAmount
FeeACHBillPayCount
FeeAnnualAmortizedAmount
FeeAnnualAmount
FeeAnnualCount
FeeATMOffUsAmount
FeeATMOffUsBalanceInquiryAmount
FeeATMOffUsBalanceInquiryCount
FeeATMOffUsCount
FeeATMOffUsTransferAmount
FeeATMOffUsTransferCount
FeeATMOnUsCheckInquiryAmount
FeeATMOnUsCheckInquiryCount
FeeDormantAmount
FeeDormantCount
FeeEarlyWithdrawalAmount
FeeEarlyWithdrawalCount
FeeForeignTransactionAmount
FeeForeignTransactionCount
FeeMonthlyAmount
FeeMonthlyCount
FeeNSFAmount
FeeNSFCount
FeeODPTransferAmount
FeeODPTransferCount
FeeOtherAmount
FeeOtherCount
FeeOverdraftAmount
FeeExtendedOverdraftAmount
FeeOverdraftCount
FeeExtendedOverdraftCount
FeePOSPINPurchaseAmount
FeePOSPINPurchaseCount
FeeReturnedCheckAmount
FeeReturnedCheckCount
FeeStopPaymentAmount
FeeStopPaymentCount
FeeTotalAmount
FeeTotalATMCreditAmount
FeeTotalATMCreditCount
FeeTotalATMDebitAmount
FeeTotalATMDebitCount
FeeTotalNonPenaltyAmount
FeeTotalNonPenaltyCount
FeeTotalPenaltyAmount
FeeTotalPenaltyCount
FeeWaiverACHBillPayAmount
FeeWaiverAnnualAmount
FeeWaiverATMOffUsAmount
FeeWaiverATMOffUsBalanceInquiryAmount
FeeWaiverDormantAmount
FeeWaiverForeignTransactionAmount
FeeWaiverMonthlyAmount
FeeWaiverNSFAmount
FeeWaiverODPTransferAmount
FeeWaiverOtherAmount
FeeWaiverOverdraftAmount
FeeWaiverExtendedOverdraftAmount
FeeWaiverStopPaymentAmount
FeeWaiverTotalAmount
FeeWaiverTotalNonPenaltyAmount
FeeWaiverTotalPenaltyAmount
FeeWaiverWireTransferAmount
FeeWireTransferAmount
FeeWireTransferCount
FraudTransactionAmount
FraudTransactionCount
InterestPaymentCreditAmount
InterestPaymentDebitAmount
InternetTransferCreditAmount
InternetTransferCreditCount
InternetTransferDebitAmount
InternetTransferDebitCount
ODPTransferCreditAmount
ODPTransferCreditCount
ODPTransferDebitAmount
ODPTransferDebitCount
OtherCreditAmount
OtherCreditCount
OtherCreditReversalAmount
OtherDebitAmount
OtherDebitCount
OtherDebitReversalAmount
OtherTransferCreditAmount
OtherTransferCreditCount
OtherTransferDebitAmount
OtherTransferDebitCount
PhoneTransferCreditAmount
PhoneTransferCreditCount
PhoneTransferDebitAmount
PhoneTransferDebitCount
POSSIGForeignCreditAmount
POSSIGForeignCreditCount
POSSIGForeignDebitAmount
POSSIGForeignDebitCount
POSPINCreditTotalAmount
POSPINCreditTotalCount
POSPINDebitTotalAmount
POSPINDebitTotalCount
POSSIGCreditTotalAmount
POSSIGCreditTotalCount
POSSIGDebitTotalAmount
POSSIGDebitTotalCount
ReversalFeeACHBillPayAmount
ReversalFeeEarlyWithdrawalAmount
ReversalFeeForeignTransactionAmount
ReversalFeeMonthlyAmount
ReversalFeeNSFAmount
ReversalFeeODPTransferAmount
ReversalFeeOtherAmount
ReversalFeeOverdraftAmount
ReversalFeeExtendedOverdraftAmount
ReversalFeePOSPINPurchaseAmount
ReversalFeeReturnedCheckAmount
ReversalFeeStopPaymentAmount
ReversalFeeTotalAmount
ReversalFeeTotalNonPenaltyAmount
ReversalFeeTotalPenaltyAmount
ReversalFeeWireTransferAmount
SubstituteCheckDepositAmount
SubstituteCheckDepositCount
SubstituteCheckWithdrawalAmount
SubstituteCheckWithdrawalCount
SystemCreditTotalAmount
SystemCreditTotalCount
SystemDebitTotalAmount
SystemDebitTotalCount
TargetSweepCreditAmount
TargetSweepCreditCount
TargetSweepDebitAmount
TargetSweepDebitCount
WaiverFeeReturnedCheckCount
WireTransferFedCreditAmount
WireTransferFedCreditCount
WireTransferFedDebitAmount
WireTransferFedDebitCount
WireTransferInternalCreditAmount
WireTransferInternalCreditCount
WireTransferInternalDebitAmount
WireTransferInternalDebitCount
WireTransferInternationalCreditAmount
WireTransferInternationalCreditCount
WireTransferInternationalDebitAmount
WireTransferInternationalDebitCount
ZBACreditAmount
ZBACreditCount
ZBADebitAmount
ZBADebitCount
AllocatedEquityAmount
CapitalCharge
ContraExpenseCreditLossRecoveryAmount
ContraExpenseFraudRecoveryAmount
EquityCreditAmount
ExpenseCorporateTaxAmount
ExpenseCreditLossWriteOffAmount
ExpenseDepositInsuranceAmount
ExpenseFraudWriteOffAmount
ExpenseMarketingAmount
ExpenseNetworkFeesAmount
ExpenseOperatingAmount
ExpenseOriginationAmortizedAmount
ExpenseOtherAmount
ExpenseTotalAmount
RevenueInterchangeTotalAmount
RevenuePINInterchangeAmount
RevenueSIGInterchangeAmount
RevenueInterestAmount
RevenueOtherAmount
RevenueNetIncomeAmount
RevenuePreTaxIncomeAmount
RevenueTotalAmount
StatusCode
ClosedCode
ClosedDate
user13407709 wrote:
I have the following summarized table that I need to convert to the star schema.
I am totally confused how to create the dimensions. Any help to start with this will be highly appreciated.
The first step should be to tell whoever gave you this task that you have no idea what you are doing and set their expectations accordingly.
I am in an urgent need of it.
Then it would no longer be urgent, and you would have the time to read and learn this:
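In fairness, one can at least sketch a starting point. Purely as an illustration (the grouping, the helper function, and every name below are guesses, not a vetted design), the column list above suggests a monthly account-level fact surrounded by a handful of dimensions:

```python
# Hypothetical first-cut grouping of the columns listed above.
# All dimension names here are assumptions for illustration only.
dimensions = {
    "dim_date":     ["Month", "CycleStartDate", "CycleEndDate",
                     "StatementStartDate", "StatementEndDate"],
    "dim_customer": ["Cust_id", "City", "State", "Zip", "Zip4"],
    "dim_account":  ["AccountId", "AccountType", "OriginationChannel",
                     "OpenDate", "BranchId", "StatusCode", "ClosedCode"],
    "dim_product":  ["ProductId", "SubProductId", "BankProductType",
                     "RewardType"],
}

# The additive *Amount / *Count / *Balance columns stay in the fact table,
# at a grain of one row per account per month.
def is_measure(col: str) -> bool:
    return col.endswith(("Amount", "Count", "Balance")) or col.startswith("Fee")

sample_cols = ["CreditTotalAmount", "DebitTotalCount", "MonthEndingBalance",
               "City", "AccountType"]
measures = [c for c in sample_cols if is_measure(c)]
print(measures)   # ['CreditTotalAmount', 'DebitTotalCount', 'MonthEndingBalance']
```

Which columns are degenerate dimensions, which attributes need slowly-changing handling, and what the true grain is are exactly the decisions the reading below covers.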
http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/toc.htm -
Using two facts of two different star schemas and conformed dimensions
Hi,
I've been working as a developer and database designer for years, and I'm new to Business Objects. Some people say you cannot use the facts of two different star schemas in the same query because of conformed dimensions and loop problems in BO.
For example, I have a CUSTOMER_SALE_FACT table containing customer_id and date_id as FKs, plus some other business metrics about sales. And there is another fact table, CUSTOMER_CAMPAIGN_FACT, which also contains customer_id and date_id as FKs, plus some other business metrics about customer campaigns. So I have two stars like below:
DIM_TIME -- SALE_FACT -- DIM_CUSTOMER
DIM_TIME -- CAMPAIGN_FACT -- DIM_CUSTOMER
Business metrics are loaded into fact tables, and facts can be used together along conformed dimensions. This is one of the fundamentals of dimensional modeling. Is it really impossible to use SALE_FACT and CAMPAIGN_FACT together? If the answer is no, what is the solution?
Saying "you cannot do that because of loops" is very interesting.
Thank you.
When you join two facts together through a common dimension you have created what is called a "chasm trap", which leads to invalid results because of the way SQL is processed: the query rows are first retrieved and then aggregated. Since the sales fact and the campaign fact have no direct relationship, the rows coming from either side can end up in a product join.
Suppose a customer has 3 sales fact rows and 2 campaign fact rows. The result set will have six rows before any aggregation is performed. That would mean that sales measures are doubled and campaign measures are tripled.
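That fan-out is easy to reproduce outside BO. A minimal sketch (SQLite via Python, with invented table and column names) of the 3 x 2 case just described:

```python
import sqlite3

# Minimal "chasm trap" demo: two fact tables joined only through a shared
# customer key fan out into a product join. Names are made up for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sale_fact (customer_id INTEGER, sale_amt INTEGER);
    CREATE TABLE campaign_fact (customer_id INTEGER, contact_cnt INTEGER);
    -- customer 1: 3 sales rows and 2 campaign rows
    INSERT INTO sale_fact VALUES (1, 10), (1, 20), (1, 30);
    INSERT INTO campaign_fact VALUES (1, 1), (1, 1);
""")

# Joining both facts on the shared key yields 3 x 2 = 6 rows ...
rows = con.execute("""
    SELECT s.sale_amt, c.contact_cnt
    FROM sale_fact s JOIN campaign_fact c ON s.customer_id = c.customer_id
""").fetchall()
print(len(rows))   # 6 rows before any aggregation

# ... so the aggregates are inflated: sales doubled, contacts tripled.
total_sales, total_contacts = con.execute("""
    SELECT SUM(s.sale_amt), SUM(c.contact_cnt)
    FROM sale_fact s JOIN campaign_fact c ON s.customer_id = c.customer_id
""").fetchone()
print(total_sales, total_contacts)   # 120 (true total is 60) and 6 (true total is 2)
```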
You can report on them together, using multiple SQL passes, but you can't query them together. Does that distinction make sense? -
Injecting data into a star schema from a flat staging table
I'm trying to work out the best approach for getting data from a very flat staging table and loading it into a star schema. I take a row from a table with, for example, 50 different attributes about a person, and then load these into a host of different tables, including linking tables.
One of the attributes in the staging table is an instruction to either insert the person and their new data, update a person and some component of their data, or even terminate a person's records.
I plan to use PL/SQL, but I'm not sure of the best approach.
The staging table data will be loaded every 10 minutes and will contain about 300 updates.
I'm not sure if I should just select the staging records into a cursor and then insert into the various tables.
Has anyone got any working examples based on similar experience?
I can provide a working example if required.
The database has some elements that make the SQL a tad harder to write.
For example:
CREATE TABLE staging
(person_id NUMBER(10) NOT NULL ,
title VARCHAR2(15) NULL ,
initials VARCHAR2(5) NULL ,
forename VARCHAR2(30) NULL ,
middle_name VARCHAR2(30) NULL ,
surname VARCHAR2(50) NULL,
dial_number VARCHAR2(30) NULL,
Is_Contactable CHAR(1) NULL);
INSERT INTO staging
(person_id, title, initials, forename, middle_name, surname, dial_number, Is_Contactable)
VALUES (12345, 'Mr', NULL, 'Joe', NULL, 'Bloggs', '0117512345', 'Y');
CREATE TABLE person
(person_id NUMBER(10) NOT NULL ,
title VARCHAR2(15) NULL ,
initials VARCHAR2(5) NULL ,
forename VARCHAR2(30) NULL ,
middle_name VARCHAR2(30) NULL ,
surname VARCHAR2(50) NULL);
CREATE UNIQUE INDEX XPKPerson ON Person
(Person_ID ASC);
ALTER TABLE Person
ADD CONSTRAINT XPKPerson PRIMARY KEY (Person_ID);
CREATE TABLE person_comm
(person_id NUMBER(10) NOT NULL ,
comm_type_id NUMBER(10) NOT NULL ,
comm_id NUMBER(10) NOT NULL );
CREATE UNIQUE INDEX XPKPerson_Comm ON Person_Comm
(Person_ID ASC,Comm_Type_ID ASC,Comm_ID ASC);
ALTER TABLE Person_Comm
ADD CONSTRAINT XPKPerson_Comm PRIMARY KEY (Person_ID,Comm_Type_ID,Comm_ID);
CREATE TABLE person_comm_preference
(person_id NUMBER(10) NOT NULL ,
comm_type_id NUMBER(10) NOT NULL ,
Is_Contactable CHAR(1) NULL);
CREATE UNIQUE INDEX XPKPerson_Comm_Preference ON Person_Comm_Preference
(Person_ID ASC,Comm_Type_ID ASC);
ALTER TABLE Person_Comm_Preference
ADD CONSTRAINT XPKPerson_Comm_Preference PRIMARY KEY (Person_ID,Comm_Type_ID);
CREATE TABLE comm_type
(comm_type_id NUMBER(10) NOT NULL ,
NAME VARCHAR2(25) NULL ,
description VARCHAR2(100) NULL ,
comm_table_name VARCHAR2(50) NULL);
CREATE UNIQUE INDEX XPKComm_Type ON Comm_Type
(Comm_Type_ID ASC);
ALTER TABLE Comm_Type
ADD CONSTRAINT XPKComm_Type PRIMARY KEY (Comm_Type_ID);
insert into comm_type (comm_type_id, NAME, description, comm_table_name) values (23456, 'HOME PHONE', 'Home Phone Number', 'PHONE');
CREATE TABLE phone
(phone_id NUMBER(10) NOT NULL ,
dial_number VARCHAR2(30) NULL);
Take the record from Staging then update:
'person'
'Person_Comm_Preference' Based on a comm_type of 'HOME_PHONE'
'person_comm' Derived from 'Person' and 'Person_Comm_Preference'
Then update 'Phone' with the number, via the link from 'Person_Comm', whose composite primary key includes 'Comm_ID', which
relates to the Phone table's primary key, Phone_ID.
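As a sketch of just the first link in that chain (the person upsert), here is the set-based shape the load could take. This uses SQLite's ON CONFLICT in Python purely as a stand-in for an Oracle MERGE inside PL/SQL; the 'action' column is a hypothetical stand-in for the insert/update/terminate instruction attribute mentioned above:

```python
import sqlite3

# Sketch only: set-based staging -> person load, with SQLite upsert standing
# in for an Oracle MERGE. Column names follow the DDL above; 'action' is an
# assumed I/U/T instruction column.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE staging (person_id INTEGER, forename TEXT, surname TEXT, action TEXT);
    CREATE TABLE person  (person_id INTEGER PRIMARY KEY, forename TEXT, surname TEXT);
    INSERT INTO staging VALUES (12345, 'Joe', 'Bloggs', 'I');
""")

def load_batch(con):
    # One set-based statement per target table; for a 300-row batch this
    # usually beats a row-by-row cursor loop.
    con.execute("""
        INSERT INTO person (person_id, forename, surname)
        SELECT person_id, forename, surname FROM staging
        WHERE action IN ('I', 'U')
        ON CONFLICT(person_id) DO UPDATE SET
            forename = excluded.forename,
            surname  = excluded.surname
    """)
    con.execute("DELETE FROM staging")   # batch consumed

load_batch(con)                                                    # first batch: insert
con.execute("INSERT INTO staging VALUES (12345, 'Joseph', 'Bloggs', 'U')")
load_batch(con)                                                    # second batch: update
name = con.execute("SELECT forename FROM person WHERE person_id = 12345").fetchone()[0]
print(name)   # Joseph
```

The same pattern would repeat, in dependency order, for Person_Comm_Preference, Person_Comm and Phone.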
Does your head hurt as much as mine? -
Hi Experts,
Please tell me the places in OBIEE 11g where I can design the star schema. Is it only in the Physical layer, or in the BMM too?
Thanks - Bhaskar
Final point: for performance reasons you should also try to model the data into a star schema in the physical layer.
If the data is modelled as a true star, then there are database features which optimise query performance. These features are set by a DBA when the warehouse is configured (e.g. STAR_TRANSFORMATION_ENABLED in Oracle). The results with this parameter on/off can be staggering (query time reduced from minutes to seconds), showing the power of the star schema.
When snowflakes occur, these performance features will not work as designed, and performance will be degraded. There are certain criteria that have to be met by the data, e.g. bitmap indexes on all of the foreign key columns in the fact.
Please mark if helpful / correct,
Andy
www.project.eu.com -
Error with creating star schema using HsvStarSchemaACM
Hi,
I am trying to create a star schema using the HsvStarSchemaACM API. But when calling the Create function of the API, I get the exception below: "Exception from HRESULT: 0x80040251 (A general error occurred while trying to obtain a database Reader/Writer lock)". Is this something to do with the parameters passed and their connections? (The connection works fine, and CreateStarSchema runs fine through Workspace.)
Any solution to this is most welcome.
Thanks,
Logu
Can you provide details on the HsvStarSchemaACM API? How is it related to ODI?
-
Resolving loops in a star schema with 5 fact tables and 6 dimension tables
Hello
I have a star schema, i.e. 5 FACT tables and 7 dimension tables. All fact tables share the same dimension tables; some FACT tables share 3 dimensions, while others share 5 dimensions.
I did adopt the best practices and, as recommended in the book, I tried to resolve the loops using contexts, as that is the recommended option over aliases in a star schema setting. The contexts are resolved, but I still have loops. I also cleared the Multiple SQL Statements for Each Context option, but no luck. I need to get this resolved ASAP.
Hi Patil,
It is not clear what exactly is the problem. As a starting point you could set the context up so that it only covers the joins from fact to dimension.
Fact A, joins Dim 1, Dim 2, Dim 3, and Dim 4
Fact B, joins Dim 1, Dim 2, Dim 3, Dim 4 and Dim 5
Fact C, joins Dim 1, Dim 2, Dim 3, Dim 4 and Dim 6
Fact D, joins Dim 1, Dim 2, Dim 3, Dim 4 and Dim 7
Fact E, joins Dim 1, Dim 2, Dim 4 and Dim 6
If each of these contexts is set up to cover just the joins from fact to dimension, then you should not get loops.
If you could lay out your joins like above then it may be possible to specify the contexts/aliases that should work.
Regards
Alan -
Why do we need SSIS and star schema of Data Warehouse?
If SSAS in MOLAP mode stores the data, what is the purpose of SSIS, and why do we need a Data Warehouse and the SSIS ETL process?
I have a SQL Server OLTP database. I am using SSIS to transfer my SQL Server data from the OLTP database to a Data Warehouse database that contains fact and dimension tables.
After that I want to create cubes using SSAS from the Data Warehouse data.
I know that MOLAP stores data. Do I need a Data Warehouse with fact and dimension tables?
Is it not better to avoid creating a Data Warehouse and create cubes directly from the OLTP database?
Another thing to note: data stored in a transactional system may not always be in an end-user-consumable format. For example, we may use bit fields/flags to represent some details in OLTP, as the storage required is minimal, but presenting them as-is would not make sense to users, as they would not know what each bit value represents. In such cases we apply transformations and convert the data into information users can understand. This is also done in the warehouse, so that the information there can be used directly for reporting. Also, in many cases a report will merge data from multiple source systems; merging it on the fly in the report would be tedious and would hit the report server. In comparison, bringing the data onto a common layer (the warehouse) and prebuilding aggregates is beneficial for report performance.
I think (I'm not sure) we join tables in SSAS queries and calculate aggregations there.
I think SSAS stores these values and joined tables, so we do not need to evaluate those values again, and this behavior is like a Data Warehouse.
Is it not?
So if I do not need historical data, can I avoid creating a Data Warehouse?
On the backend, SSAS uses queries only to extract the data.
By the way, I was not explaining SSAS; I was explaining what happens inside the data warehouse, which is a relational database by itself. SSAS is used to build cubes (OLAP structures) on top of the data warehouse. A star schema is easier for defining relationships and building aggregations inside SSAS, as it is simple and requires minimal lookups. Also, data is held at the lowest granularity level, which can easily be aggregated to the required levels inside the OLAP cubes. Cube processing is very resource intensive, and using the OLTP system would have a huge impact on processing performance, as it is not denormalized, and doing transformations etc. on the fly adds complexity. Pre-creating a layer (the data warehouse) with data in the required format makes cube processing easier and simpler, as it just has to join tables and aggregate data based on the relationships defined and the level needed inside the cube.
Please mark this as answer if it helps to solve the issue. Visakh
http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs -
Star schema without a fact table?
Hi,
I'm preparing my warehouse for using with Discoverer and my question is about the star schema.
- Is a star schema directly associated with a data warehouse?
- Can I talk about a star schema if a) I do not have a fact table (no summarized values) and b) I do not have a time dimension?
The problem is, I'm thinking of using Discoverer, but should I use it if it's not connected to a data warehouse?
As I said, I'd like to model my data "like" a star schema, but my "centre table" will contain only the foreign keys of my dimensions: no time dimension, no aggregate data in the centre (fact) table.
Is there another word for the model I'd like to build?
Thanks in advance.
Hi,
Is a star schema directly associated with a data warehouse?
Not really; a star schema is just one where there is one large fact table joined to many smaller dimension tables using key fields. You usually see this in data warehouses.
Can I talk about a star schema if a) I do not have a fact table (no summarized values) and b) I do not have a time dimension?
A star schema must have a fact table, but it doesn't need to contain summarised values or a time dimension.
You can use Discoverer with any Oracle database, it doesn't have to be a data warehouse.
Rod West -
Regarding Extended star schema
Hi Friends,
In the extended star schema, master data is loaded separately and is connected through SIDs to the dimension tables.
My question is: can these master data tables be used by InfoCubes other than this one?
Please clarify; I am confused.
Thanks in advance,
Regards,
ramnaresh.
Hi
InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.
An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube, as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.
An InfoCube fact table only contains key figures, in contrast to a DataStore object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.
The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.
Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume. This is beneficial in terms of performance. This InfoCube structure is optimized for data analysis.
The fact table and dimension tables are both relational database tables.
Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.
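The dimension-ID linkage described above can be sketched with a toy relational example. To be clear, these are invented tables, not the actual BW table layouts (which also involve SID tables for the master data):

```python
import sqlite3

# Toy star schema: the fact table carries only key figures plus abstract
# dimension IDs; the characteristics live in the dimension table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_region (dimid INTEGER PRIMARY KEY, district TEXT, area TEXT);
    CREATE TABLE fact (dimid_region INTEGER, sales INTEGER);
    INSERT INTO dim_region VALUES (1, 'North', 'A'), (2, 'South', 'B');
    INSERT INTO fact VALUES (1, 100), (1, 50), (2, 70);
""")

# Key figures aggregate at the granularity the characteristics define.
rows = con.execute("""
    SELECT d.district, SUM(f.sales)
    FROM fact f JOIN dim_region d ON f.dimid_region = d.dimid
    GROUP BY d.district ORDER BY d.district
""").fetchall()
print(rows)   # [('North', 150), ('South', 70)]
```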
Integration
You can create aggregates to access data quickly. Here, the InfoCube data is stored redundantly and in an aggregated form.
You can either use an InfoCube directly as an InfoProvider for analysis and reporting, or use it with other InfoProviders as the basis of a MultiProvider or InfoSet.
See also:
Checking the Data Loaded in the InfoCube
If the above info is useful, please grant me points -
Hi All,
I have a star schema in place with 8 dimension and 1 fact table.
But due to a specific requirement, I need to denormalize the schema. I want to copy all fields from all dimension tables into the fact table.
I know this sounds bad, but I have to do it; please don't ask why.
Now, the same could be done using a materialized view, but the problem with an MV is that there are fields present in two or more tables with the same column name, due to which I can't create the MV.
Is there some other way to achieve this goal?
BRK.
I have too many records, which is affecting the performance of the database.
But now it's raising performance problems.
Your application had a performance problem. Somebody guessed that a star schema would solve the problem, but it hasn't. So now you intend to implement some spavined variant on the star schema because somebody has suggested that complete de-normalisation might solve the problem.
Is this a demonstrable fact (you have benchmarks and explain plans to justify it) or just a guess?
I know how difficult this sort of thing can be, because I've been working through a similar scenario for a while now. The important thing is to get some decent metrics on your application. Use Statspack. Use the wait interface. Find out where your application is spending its time and figure out what you need to do to reduce the waits. Benchmark some alternatives. This may result in you having to re-write your code, but at least you'll be doing so in the knowledge that it will actually fix the problem.
Cheers, APC
Blog : http://radiofreetooting.blogspot.com/ -
Design Fixed Assets Star Schema from OLTP DB
Hi,
Scope: design a Fixed Assets logical star schema, to be demonstrated against OLTP tables.
Our platform is Oracle 10g Forms and Reports deployed on Oracle 10g App Server. At the moment we don't have a data warehouse constructed. We are pumping OLTP data into a staging DB through jobs and then, using materialized views, getting the data into the DWH DB; this is in progress.
OBIEE 10g is installed and working fine for testing.
We are planning to implement OBIEE for reporting. As a starting point I would like to design a Fixed Assets star schema to demonstrate FA reports in OBIEE from the OLTP tables.
FA OLTP tables:
AC_UNIT_MASTR – Business Units
AC_ACNT_MASTR - Nominal Codes
AC_COST_CENTR_MASTR - Departments
AC_MANTN_ASTS - Assets transaction table
AC_AST_BAL_DETLS – Period wise Asset summary
AC_AST_BAL_DETLS_V - View
To achieve this:
1) Import the tables in the physical layer and set physical joins.
2) Identify and create dimensions, set complex joins, and then move to the presentation layer.
Please suggest the best approach to designing the repository.
Thanks
Regards,
Kulkarni
Edited by: hai_shailesh on Jan 25, 2012 11:36 PM
Edited by: hai_shailesh on Jan 25, 2012 11:38 PM
Edited by: hai_shailesh on Jan 25, 2012 11:39 PM
Hi Saichand,
Thanks for the response.
I already referred to that doc and completed it practically. Now I have to work on Finance data; as a starting point I am working with the Fixed Assets module. I have already designed a centralised fact with dimensions as below.
In the physical and BMM layers I defined the relationships as below:
AC_UNIT_MASTR_D --> AC_AST_BAL_DETLS_F
AC_ACNT_YEAR_MASTR_D --> AC_AST_BAL_DETLS_F
AC_ACNT_PERD_MASTR_D --> AC_AST_BAL_DETLS_F
AC_COST_CENTR_MASTR_D --> AC_AST_BAL_DETLS_F
AC_AST_GRP_MASTR_D --> AC_AST_BAL_DETLS_F
AC_AST_SUBGRP_MASTR_D --> AC_AST_BAL_DETLS_F
and
AC_ACNT_YEAR_MASTR_D --> AC_ACNT_PERD_MASTR_D
When I try to create a dimension for Periods (AC_ACNT_PERD_MASTR), the Periods dimension is created with two tables:
AC_ACNT_YEAR_MASTR_D, AC_ACNT_PERD_MASTR_D
Please advise.
Regards,
Kulkarni -
How to convert database in 3NF to Star schema
Hi
Can anyone help me convert a database in 3NF to a star schema so that it can be used for business intelligence?
Our application will populate the database in normal 3NF form, which may be difficult for the BI tool to generate queries against. From the documents given, I saw that only a star schema, containing dimension tables and a fact table, is simple enough to generate reports from. Is there any tool that would convert the schema from 3NF to a star schema?
Regards
S. Kokila
Hi,
I do not know any tools that would do this for you. You can apply a certain logic to create your dimensions and facts, but decisions will have to be taken along the way.
You need to know the basis of database design for datawarehousing. A few good books and articles are available on the subject.
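To illustrate the kind of logic involved (hand-rolled SQL over invented tables, not a tool): pull each set of descriptive attributes out into a dimension with a surrogate key, and leave the measures in a fact keyed by those surrogates. A minimal sketch in Python/SQLite:

```python
import sqlite3

# Hedged sketch of one step of a manual 3NF -> star conversion.
# All table and column names are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders_3nf (order_id INTEGER, cust_name TEXT, cust_city TEXT, amount INTEGER);
    INSERT INTO orders_3nf VALUES (1, 'Acme', 'Paris', 100),
                                  (2, 'Acme', 'Paris', 40),
                                  (3, 'Beta', 'Lyon', 70);

    -- dimension: one row per distinct customer, with a surrogate key
    CREATE TABLE dim_customer (cust_key INTEGER PRIMARY KEY, cust_name TEXT, cust_city TEXT);
    INSERT INTO dim_customer (cust_name, cust_city)
        SELECT DISTINCT cust_name, cust_city FROM orders_3nf;

    -- fact: surrogate key plus measures only
    CREATE TABLE fact_orders AS
        SELECT d.cust_key, o.amount
        FROM orders_3nf o JOIN dim_customer d
          ON o.cust_name = d.cust_name AND o.cust_city = d.cust_city;
""")
n_dim = con.execute("SELECT COUNT(*) FROM dim_customer").fetchone()[0]
n_fact = con.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
print(n_dim, n_fact)   # 2 distinct customers, 3 fact rows
```

The decisions the books cover are exactly the ones this glosses over: grain, history (slowly changing dimensions), and which tables become facts versus dimensions.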
Happy reading! -
Star schema for a uploaded data sheet
Hi all gurus,
I am new to this technology. I have a requirement like this: I have to prepare the star schema for the data sheet below.
REPORT_DATE PREPARED_BY Units On-time Units Late Non-Critical On-time Non-Critical Lates Non-Critical DK On-time Non-Critical DK Lates
2011-01 Team1 1
2011-02 Team1
2011-03 Team1
2011-01 Team2
2011-02 Team2 7 1
2011-03 Team2 4 5
2011-01 Team3
2011-02 Team3
2011-03 Team3 1 3
(Take blank fields as zeros)
Note: there are three report dates (2011-01, 2011-02, 2011-03) and three teams (Team1, Team2, Team3) as text data; all other columns contain numeric data.
I am given Time as a dimension table containing the Report Date, and the whole sheet as the Data table. So how do I define the relationships for this in the Physical layer and the BMM?
I am thinking of making Time a dimension table and the whole table (Data) a fact table in the physical layer. Then, in the BMM, I want to carve out a logical dimension called Group from the Data physical table, and make Group and Time dimension tables with Data as the fact table.
Is this approach correct? Please advise, and if you have a better idea, please note which tables should be taken as dimension and fact tables in both the physical layer and the BMM. You can also advise on any change to the number of physical tables in the physical schema design. Your suggestions are eagerly anticipated; thanks in advance.
-
Star schema design, metrics dimension or not.
Hello Guys,
I just heard from one of my colleagues that it's wise to have a "KPI" or "metrics" dimension in my DWH star schema (later used in OBIEE).
Now, we have quite a lot of data, 100,000 rows per day (bottom level, non-aggregated; the aggregations are obviously far less than that, say 200 rows per day), and we have built pre-aggregated data marts for each of the 5 very static reports (OBIEE Publisher).
The table structure is very simple
e.g.
Date,County,NumberofCars,RevenuePerCar, ExpensesPerCar, BreakEvenPerCar, CarType
One could exclude the metrics "NumberofCars","RevenuePerCar", "ExpensesPerCar", "BreakEvenPerCar"
and put them into a metrics dimension.
MetricID Metric
1 NumberofCars
2 RevenuePerCar
3 ExpensesPerCar
4 BreakEvenPerCar
and hence the fact table design would be simpler.
Date,County,MetricID,Metric, CarType
Disadvantages: a join is required;
we would have to redesign our tables;
tables are no longer aggregated for specific metric types;
if we notice performance is bad, we would need to go back to the old design.
Advantages: should new metrics appear, we don't have to change the design of the tables;
it's probably best practice.
Note: date, country and car type are already dimensions; we are just missing one to differentiate the metrics/KPIs.
So I struggle a bit: what should I do? Redesign, or stick with what I have done, keeping performance optimization in mind?
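For concreteness, here is the reshaping being debated, sketched in plain Python with the metric names from the example above (the wide row values and the IDs are made up):

```python
# One wide fact row pivoted into the tall metric-dimension layout.
# Sample values and metric IDs are invented for illustration.
wide_row = {"Date": "2023-01-01", "County": "Kent", "CarType": "SUV",
            "NumberofCars": 12, "RevenuePerCar": 300.0,
            "ExpensesPerCar": 120.0, "BreakEvenPerCar": 90.0}

metric_dim = {1: "NumberofCars", 2: "RevenuePerCar",
              3: "ExpensesPerCar", 4: "BreakEvenPerCar"}
metric_ids = {name: mid for mid, name in metric_dim.items()}

# One wide row becomes one tall row per metric: flexible (new metrics need
# no DDL change) but every query now pays an extra join and a 4x row count.
tall_rows = [
    {"Date": wide_row["Date"], "County": wide_row["County"],
     "CarType": wide_row["CarType"], "MetricID": metric_ids[m],
     "Value": wide_row[m]}
    for m in metric_dim.values()
]
print(len(tall_rows))   # 4
```

The row-count multiplier and the extra join are exactly the performance costs weighed in the disadvantages list above.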
Thanks
"Usually the date is stored in the sales table or product table. But here, why did they create a separate dimension table for date (Dim_date)?"
You should provide the link.
A good place to start with the basic concepts is :
http://www.ralphkimball.com/
Pick up some of his books and start going through them.
My recommendation would be
The Data Warehouse Toolkit, 2nd Edition: The Complete Guide to Dimensional Modeling
John Wiley & Sons, 2002 (436 pages).
Good luck.