Calculations during Data Modeling
Hey guys,
Is there a way to do calculations using key figures during data modeling, or does it have to be done at the query level?
Thank You
Dezi,
You can do them in the Transfer Rules or Update Rules.
The advantage of doing them in the queries is that if the specs for the formula change, you just need to modify the logic in the query. Whereas if you have the values "hardcoded" in the cube or ODS, you'll need to reload the data or come up with something else to fix them.
Of course, having too many calculations in the queries can create performance issues.
It's all a balance...
Regards,
Luis
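Luis's trade-off can be sketched in a few lines of hypothetical Python (not actual BW transfer-rule code; the record layout and field names are invented for illustration):

```python
# These records stand in for rows arriving through a transfer/update rule.
records = [
    {"material": "M1", "qty": 10, "price": 2.5},
    {"material": "M2", "qty": 4, "price": 7.0},
]

def load_with_calculation(rows):
    """'Update rule' style: compute revenue once and store it in the cube."""
    return [dict(r, revenue=r["qty"] * r["price"]) for r in rows]

def query_time_revenue(stored_rows):
    """'Query formula' style: derive revenue on the fly at report time."""
    return [r["qty"] * r["price"] for r in stored_rows]

cube = load_with_calculation(records)
# If the formula later changes (say, a discount is added), the stored
# values are stale and the cube must be reloaded -- Luis's point above;
# the query-time version just needs its formula edited.
```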
Similar Messages
-
M:N relationships within a dimension: Standard process vs. BI Data model
Hi,
I just completed a review of "Multi-Dimensional Modeling with BI" from this link:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84 and I have a quick question here:
On page 36 of this link, the author noted that
"According to the standard process, color should be in the master data table for material, like material type. But this is not possible because the material is the unique key of the master data table. We cannot have one material with multiple colors in the master data table."
i.e. my understanding is that, based on the Standard Process, it is NOT possible to place two characteristics with an M:N relationship in the SAME dimension; but with the BI Data Model, the author points out, this is possible
"due to the usage of surrogate keys (DIM-IDs) in the dimension tables allowing the same material several times in the dimension table", i.e. although material is the unique key of the dimension.
1.
What is being referred to here as "Standard Process", since the document is about "… modeling with BI"?
2.
It goes on to discuss "Designing M:N relationships using a compound attribute" as a solution to the M:N relationship in a dimension.
What is the need to address this problem with compound attributes if characteristics in M:N relationships within a dimension, such as material and color, are not a problem in BI Data Model?
3.
Can you help explain the underlined cautions of the following guidelines for compound attributes (with examples if possible please):
"If you can avoid compounding - do it!
Compound attributes always mean there is an overhead with respect to:
Reporting - you will always have to qualify the compound attributes within a query
Performance
Compounding always implies a heritage of source systems, and just because it makes sense within the source systems does not necessarily mean that it will also make sense in data warehousing."
Thanks
Hi Amanda,
In a dimension table, any number of semantically related dimension attributes are stored in a hierarchy (parent-child relationship as a 1:N relationship). If an M:N relationship exists between dimension attributes, they are normally stored in different dimension tables.
I checked the document.........
On page 36 of this link, the author noted that
"According to the standard process, color should be in the master data table for material, like material type. But this is not possible because the material is the unique key of the master data table. We cannot have one material with multiple colors in the master data table."
1.
What is being referred to here as "Standard Process", since the document is about "… modeling with BI"?
The first thing I want to point out is that the diagram shown here is the Classic Star Schema, since the Extended Star Schema never stores master data in dimension tables; it stores master data in separate master data tables. Nowadays the Classic Star Schema is obsolete: a dimension table only stores the DIM-ID and SIDs.
Now, the standard process is that anything which describes a master data object can be added as an attribute of that master data.
Suppose Employee is the master data; then phone number can be one of the attributes of this master data.
So this is the Standard Process, but it cannot be followed every time; the reason is already explained above.
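The surrogate-key point can be illustrated with a tiny hypothetical sketch (Python with invented identifiers): the master data table keys on material, while the dimension table keys on the DIM-ID, so the same material may repeat there.

```python
# The master data table keys on material, so each material occurs once...
master_data = {"MAT1": {"material_type": "FERT"}}

# ...but the dimension table keys on the surrogate DIM-ID, so the same
# material can appear several times, once per color it was sold in.
dimension_rows = [
    {"dim_id": 1, "material": "MAT1", "color": "RED"},
    {"dim_id": 2, "material": "MAT1", "color": "BLUE"},
]

materials_in_dimension = [r["material"] for r in dimension_rows]
# material is NOT unique in the dimension table -- which is exactly what
# makes the M:N relationship representable there.
```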
2.
It goes on to discuss "Designing M:N relationships using a compound attribute" as a solution to the M:N relationship in a dimension.
What is the need to address this problem with compound attributes if characteristics in M:N relationships within a dimension, such as material and color, are not a problem in BI Data Model?
Because we use the compounding characteristic to define the master data uniquely, and we load the compounding characteristic separately, independent of the master data; i.e. the compounding characteristic has a separate master data table. So the problem is resolved.
3.
Can you help explain the underlined cautions of the following guidelines for compound attributes (with examples if possible please):
"If you can avoid compounding - do it!
Compound attributes always mean there is an overhead with respect to:
Reporting - you will always have to qualify the compound attributes within a query
Performance
Compounding always implies a heritage of source systems, and just because it makes sense within the source systems does not necessarily mean that it will also make sense in data warehousing."
For a compounded characteristic you have to load the compounded master data separately, which is an overhead. Moreover, during query execution two tables are accessed, which may cause a performance issue. Performance can be affected when compounded characteristics are used extensively, particularly when a large number of characteristics are included in a compounding. In most cases, the need to compound is discovered during data modeling.
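A minimal sketch of why compounding qualifies master data uniquely (hypothetical Python; cost center / controlling area is the classic textbook example, not one from this thread):

```python
# The same cost center ID exists in two controlling areas, so only the
# compound key (controlling_area, cost_center) identifies a record uniquely.
cost_centers = {
    ("1000", "CC01"): {"manager": "Alice"},
    ("2000", "CC01"): {"manager": "Bob"},
}

def lookup(controlling_area, cost_center):
    # Every query must qualify the compounding parent as well -- the
    # "Reporting" overhead named in the quoted guideline.
    return cost_centers[(controlling_area, cost_center)]
```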
Regards,
Debjani -
Data modeling with calculated fields
Hi,
I have a requirement where I have to model the (tables?) for my huge warehouse database to reflect some pre-computed features for my client's business.
It would be easy for me to explain by giving some table structure along with some data.
My base table looks like following...
ARTICLE_NUMBER YEAR_AND_WEEK SALES_QUANTITY SALES_PRICE
10000001 201140 110 24.3
10000002 201140 120 13.5
10000003 201140 100 5.7
20000001 201140 80 57.1
20000002 201140 70 28.9
20000003 201140 60 56.3
Following are the things I have to model:
1) Need to have 'Previous year_and_week' values along with the current ones. I tried doing this in reporting rather than at the DB level, but it takes a long time to compute, so I decided to store the values physically instead of calculating them at run time. Like the following:
ARTICLE_NUMBER YEAR_AND_WEEK SALES_QUANTITY SALES_PRICE PREV_YEAR_AND_WEEK PREV_SALES_QUANTITY PREV_SALES_PRICE
10000001 201240 110 24.3 201140 110 24.3
10000002 201240 120 13.5 201140 120 13.5
10000003 201240 100 5.7 201140 100 5.7
10000004 201240 100 5.7 201140 NULL NULL
20000001 201240 NULL NULL 201140 80 57.1
20000002 201240 70 28.9 201140 70 28.9
20000003 201240 60 56.3
Things to note here: a few articles may not exist in the future year, and a few new articles may appear as well. This will affect the next aggregation level for article, where we have to show the sum of article values at Product level, the next level of the hierarchy. For the sake of example, take the 1000000* range as one Product and 2000000* as another; the aggregation will then be the summed values of these.
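The PREV_* columns behave like a full outer join of the current week's articles with the previous year's, which is what produces the NULLs for articles present on only one side. A hypothetical Python sketch over an invented mini-dataset:

```python
# Each mapping is article -> (sales_quantity, sales_price) for one YEAR_AND_WEEK.
current = {"10000001": (110, 24.3), "10000004": (100, 5.7)}   # 201240
previous = {"10000001": (110, 24.3), "20000001": (80, 57.1)}  # 201140

def with_previous(cur, prev):
    """Full-outer-join style merge: articles missing on either side get None."""
    rows = {}
    for article in sorted(set(cur) | set(prev)):
        qty, price = cur.get(article, (None, None))
        prev_qty, prev_price = prev.get(article, (None, None))
        rows[article] = (qty, price, prev_qty, prev_price)
    return rows

joined = with_previous(current, previous)
# 20000001 dropped out this year -> current side is None,
# like row 20000001 in the sample table above.
```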
2) Need to show additional columns alongside the currency columns with conversion values to Euro; e.g. for the SALES_PRICE column we need an additional column SALES_PRICE_EURO (converted from local currency to Euro). An additional challenge here is that the business wants a static EURO conversion rate each year, meaning all EURO column values need to be updated every year.
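The static yearly conversion can be held in a small per-year rate table; when the business publishes a new year's rate, all *_EURO columns are recomputed in one pass. A hedged sketch (the rates below are invented, not real exchange rates):

```python
# Hypothetical static local-currency -> EUR rates, one per year.
RATE_TO_EUR = {2011: 0.72, 2012: 0.78}

def sales_price_euro(year_and_week, local_price):
    """Derive SALES_PRICE_EURO from the year's fixed rate."""
    year = year_and_week // 100          # e.g. 201140 -> 2011
    return round(local_price * RATE_TO_EUR[year], 2)
```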
3) The third one is truly complicated to explain, but let me try. As this data's granularity is week level, the business does not want to compare results for two particular weeks, i.e. the first and last weeks of the year. The solution I have in mind is to use one degenerate dimension column to flag on every fact record whether that week is first or last, and to use the 11g virtual columns feature to compute the COMP columns.
So in the end the additional columns for just one SALES_PRICE will be like this:
SALES_PRICE SALES_PRICE_EURO PREV_SALES_PRICE PREV_SALES_PRICE_EURO SALES_PRICE_COMP
Looks like a very complex requirement, doesn't it?
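The degenerate-dimension flag for point 3 could be computed like the following sketch (hypothetical Python standing in for an 11g virtual-column expression; week numbering is assumed to run 1..52):

```python
# Flag the first and last week of a year so comparison reports can skip them.
def comp_flag(year_and_week, last_week=52):
    week = year_and_week % 100
    return "SKIP" if week in (1, last_week) else "COMPARE"
```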
Appreciate your inputs.
Thanks
Hesh
I don't think I understand it at all, but if you are looking for an example of how to use virtual columns to create a function-based index on a limited subset of the data, see http://www.oracle.com/technetwork/issue-archive/2012/12-jul/o42asktom-1653097.html
Trying to optimize or denormalize too early is always a danger, but sometimes DW's and aggregations make for odd bedfellows. -
Error during date calculation: Time entered not numerical
Hi All,
We are using SAP MI 7.0 SP18 Server and Client
xMAM 3.0 SR06
Now, when we create a Notification of type PV using the Notification Management link and save it, the synchronization completes, but no notification is triggered in the backend, and when we check the Error Logs
the following message is shown: Error during date calculation: Time entered not numerical
Kindly Suggest
Regards,
Kiran Joshua
Hi All,
I got to know how to edit the entries from merep_mon : Thanks to Chinna.
But unfortunately that did not solve my problem
I had to debug the function module of the respective SyncBO and found that the Profile type was not maintained in the backend.
In fact, the error "Error during date calculation: Time entered not numerical" was misleading.
Anyway, customizing the Profile for the PJ and PV type Notifications solved my issue.
Regards,
Kiran Joshua -
PowerPivot data model as Source on Cloud
Hi All,
We have an excel workbook with a heavy powerpivot model. Can we use any Cloud services to segregate the powerpivot model and use it as a data source?
We have already assessed the PowerPivot Gallery on SharePoint (On-Premise) and Tabular Cube solutions, but both of them are On-Premise solutions, and since we need a Cloud-based solution we have rejected both of them.
Also, regarding a Cloud solution (if there is any), will it answer the following problem statements:
Will it be possible to drive a light weight excel report by Powerpivot data model?
On Premise database based report refresh (using gateway)?
PowerPivot /Calculated Measures – will they be visible through data model?
Regards,
Amit
This is not currently possible, but I understand that it is in the works.
John -
I have created a PowerPivot data model using Excel 2013. It runs fine on my computer (the one where I built it), but will not open in other computers. All computers have the same 64 bit version of PowerPivot 2013 and run on Windows 8.
The primary data model table contains about 170,000 rows, 20 raw data columns, 2 calculated columns, 30 calculated fields (measures), and there are also a couple of small linked-back data tables with relationships to the primary table. The data model takes approximately 5 minutes to refresh on my computer when I click 'Update All' (pertains to linked-back tables) on the PowerPivot ribbon in Excel. 99% of that 5 minutes is spent reading the data. However, the refresh time is not really my concern, as the data model will not open at all on any other computer.
It might be worth mentioning that my data model utilizes the 'stdev.p' function (new to PowerPivot 2013). However, I would not expect this to matter since all computers I am trying to run on have PowerPivot 2013.
User, are you still looking for the answer?
Thanks!
Ed Price, Azure & Power BI Customer Program Manager (Blog,
Small Basic,
Wiki Ninjas,
Wiki)
Answer an interesting question?
Create a wiki article about it! -
How to get data model to recognize structure change
I have an extremely complex report that was built almost entirely manually -- except for using the Data Wizard to create the Data Model (multiple queries with multiple groups). The source view that the query Q-1 is based on has changed. It is a change in the precision of the column. How do I get the data model to notice that change? The Width in the Property Palette is still showing 8 when it should now be 16. This is causing the display on the report to be truncated to 8 characters.
I don't want to have to rebuild the data model. It took me 3 days to get it so that it works!
Thanks
Please open the query and then connect using your user. I think when you refresh the query and connect, the width is changed to match the column in the database (i.e. 16).
Thanks,
FAROOQ
Thanks for the suggestion. Unfortunately, the same thing happens when I do this as happens if I modify the query with a space (as before): the rows are removed from their groups, and I lose the Source of my calculated fields.
If anyone is interested, I finally found a way to do this:
1. Convert the report into a text file (File > Administration > Convert);
2. open the text file in a text editor;
3. change the offending WIDTH settings (make sure to look for all of them);
4. save the text file;
5. then convert it back to an RDF file using the converter.
A round-about way to perform what should be an automatic operation -- but there it is.
--V -
What are the steps needs to be perform to define a data model
a. Information Gathering
b. Hardware & Software
c. Structure the Information MDM
d. Transfer into a Physical Model BW data model
e. Explore use of existing model Business Content
Hi Siva,
The very first activity is the Proof of Concept (POC). Here we have to show how BW works for the client's reporting requirements; during the POC we as BW consultants need to create some sample BW back-end objects and also some reports based on the business requirements. This is a very crucial stage, as the client judges whether BW will meet his demands. If everything is fine, then the next step is bidding.
After the contract is won in bidding, the actual project initiates. From here we follow the ASAP methodology.
Project Preparation
Here a Senior Consultant goes to the client site for the business process transition and to determine how the architecture should be set up; this is where the Basis people come into the picture.
Blueprint
The transition is understood through BRDs (Business Requirement Documents). A BRD states the exact requirements and is provided by the end users.
Based on the BRDs, we as BW consultants must prepare Application Design Documents (ADDs). The Application Design Documents state all the technical aspects that need to be implemented on the BW back end as well as the front end.
Realization Phase
Here as BW consultants we must start configuring as specified in the ADDs, taking care of all performance aspects. Once all the configurations are done, we do unit testing. Unit testing is an activity where we check the whole design process to see whether it is running correctly or not. After unit testing we move the design objects to Quality for integration testing.
Testing
Here the end user will check all the objects and reports on an end-to-end basis. After integration testing is done, UAT (User Acceptance Testing) comes into the picture, where the user checks each process and signs off.
Go Live
Here all the objects are moved to Production, where end users and power users can start working on the system. Initially, for some days, the development team takes care of all support activities; thereafter it is transitioned to the support team.
Hope it helps you.
Assign points if it is useful.
Regards
Sujan -
How can we get the value of the key field in a custom data model using governance API?
Dear Team,
How can we get the value of the key field in a custom data model, to be used for manipulation of the change request fields using governance API?
Any kind of help would be sincerely appreciated.
Thanks & Regards,
Tushar.
Hi Michael,
Thanks for the direction. Let me give more context on this, as I'm interested in getting more details. One of the issues was to read cross-entity field values on the UI based on user action and to set another entity's field behaviour. It is similar to what is being posted here.
For example: reading MTART from the Basic Data UIBB in the MM MDG UI and setting the field properties in some other custom entity, say ZZETEST. This cannot be done using the UI BADI, as it only supports a single entity at a time and not cross-entity. So alternatively we found a solution where we can enhance the existing PLMB feeder class cl_mdg_bs_mat_feeder_form by reading the model and the entity as needed, as it has been proven that this supports cross-entity UI field behaviours and thus the business requirements.
This is a workaround for now.
So the question is: how do we achieve cross-entity field behaviours using the governance API? Or what is the right way of doing this?
Can we do that using the governance API and its methods?
The Governance API doc you provided refers to the external data model below as part of the governance API:
The active or inactive data (before or during the derivation or the check) can be read with the external data model interface IF_USMD_MODEL_EXT with the method READ_CHAR_VALUE and the corresponding READ_MODE parameter. To avoid unnecessary flushes (derivations), the NO_FLUSH parameter should be set to 'X'.
Thanks
Praveen -
Data modeling dilemma for EAV oriented problems in Data Modeler
Hello,
I am dealing with an EAV (entity-attribute-value model) oriented data structure.
It looks like this:
Entity( Entity_Id number , Entity_Name varchar2, Entity_Desc varchar2 )
An entity that lists attributes and some metadata on the characteristics of those attributes:
Type_Of_Attribute( Attr_Id varchar2, Type_Of_Value TOV_Domain, Unit_Of_Value varchar2 , Min_Value variant_type , Max_Value variant_Type )
Then we have the actual data. An entity is described by a set of attributes and their values. So in addition to the attributes in row form on Entity, there are additional attributes in columnar form.
Because of sparsity...
However, in the columnar form the challenge is the type of the values, i.e. the domains of the attributes.
For example:
weight_of_person is a number between min_number and max_number.
But another parameter, for example mood_of_person, is a string from a domain which consists of a set of strings/descriptions.
Another possibility could be a reference to some table of values (key-value) that could be modelled as a one-to-many relationship if put in the entity Entity in the row form of attributes.
But since this attribute relates only to a few instances, or is very dispersed, and to preserve the table form, it was put in columnar form:
Attribute_Of_Entity( Entity_Id, Attr_Id, value,
-- when not normalized, could also include a unit like kg or lbs or inch or piece ).
My question is about good/successful practice for modelling the VALUE in Attribute_Of_Entity.
I read somewhere that some databases have a feature of a so-called variant type.
I guess the objective is to model in such a way that implementation of this model is as easy as possible for issues like:
a) validating the column-oriented form when entering or updating values
b) consolidating queries when reporting
c) aggregating data when grouping, and preventing grouping of non-comparable data.
So, should the value be implemented as a structure/complex type with methods, or is there any other feature supporting variability of the data along the same column in the table? I.e. a logical design that would not cause too much complexity in the relational design and table implementation, with procedures handled as much as possible at the database level?
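As a sketch of point a) (validation on entry), the Type_Of_Attribute metadata can drive a single check routine. This is hypothetical Python over invented attributes, not a recommendation to keep the EAV design:

```python
# Hypothetical EAV validation driven by per-attribute metadata,
# mirroring Type_Of_Attribute in the question above.
TYPE_OF_ATTRIBUTE = {
    "weight_of_person": {"type": "number", "min": 0, "max": 500},
    "mood_of_person": {"type": "enum", "domain": {"happy", "sad", "neutral"}},
}

def is_valid(attr_id, value):
    """Check a candidate VALUE against its attribute's declared domain."""
    meta = TYPE_OF_ATTRIBUTE[attr_id]
    if meta["type"] == "number":
        return isinstance(value, (int, float)) and meta["min"] <= value <= meta["max"]
    if meta["type"] == "enum":
        return value in meta["domain"]
    return False
```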
Thank you in advance for comments, experiences, suggestions.
Hello,
EAV is rarely a good solution. Tell us about your business problem and we might be able to show you solutions that are performant/easier to maintain/...
https://www.simple-talk.com/opinion/opinion-pieces/bad-carma/
Regards
Marcus
BTW: this question should not be asked in the forum space for the tool SQL Developer Data Modeler. Instead, ask in SQL and PL/SQL or General Database Discussions.
How can I find out what is causing this error in SQL Developer Data Modeler
Friends,
I am trying to import entities into SQL Developer Data Modeler from Oracle Designer 10.1.2.3.
In case it is of help, these are the steps I perform for the import:
File --> Import --> Oracle Designer Model --> Select database connection --> Select work area --> select application system --> select one entity --> Click finish --> Import starts
During the import process I see an alert dialog box with the message:
There are errors in import - check Log file
Clicking OK dismisses the alert box and I see the following summary screen:
Oracle SQL Developer Data Modeler Version: 2.0.0 Build: 584
Oracle SQL Developer Data Modeler Import Log
Date and Time: 2010-08-09 14:27:26
Design Name: erdtest
RDBMS: Oracle Database 10g
All Statements: 32
Imported Statements: 32
Failed Statements: 0
Not Recognized Statements: 0
The Entity is then displayed in the Logical View within SQL Developer Data Modeler.
Upon checking the log file I see the following entry:
2010-08-09 13:50:34,025 [Thread-11] ERROR ODExtractionHandler - Error during import from Designer Repository
java.lang.NullPointerException
at oracle.dbtools.crest.imports.oracledesigner.logical.ODORelation.createArcs(Unknown Source)
at oracle.dbtools.crest.imports.oracledesigner.logical.ODORelation.generate(Unknown Source)
at oracle.dbtools.crest.imports.oracledesigner.ODExtractionHandler.generateDesign(Unknown Source)
at oracle.dbtools.crest.imports.oracledesigner.ODExtractionController$Runner.run(Unknown Source)
at java.lang.Thread.run(Thread.java:619)
Can anyone shed any light on this error?
Thanks in advance for any help you may be able to provide.
No, this helps a lot. It's not strange. Firstly, in a versioned repository you should see Private Workareas and Shared Workareas, so your workarea may be in either of these. It won't be in the Global Shared Workarea, as this is only for non-versioned repositories. (I like to open the RON by selecting the full Repository; that way I can see the private and shared workareas and the configuration and containers all in the same tree.)
Now, your workarea is defined by a set of rules, so when you expand the workarea in the RON and select the object, that's the workarea and object you'll see in the import dialog in the Data Modeler. So if you check it out and check it back in and can't see it in the RON, then the rule is not seeing this object. (Did you refresh the workarea in the RON?) If you can't see it in the RON, you can't see it in the Data Modeler. If you're working in a versioned repository, you need to work in the specific workarea, i.e. V27, and this is what you need to select in the Data Modeler.
It looks like you are selecting the wrong workarea in the Data Modeler.
Sue -
Domains usage in SQL Developer data MODELER
Hi,
I'm trying to understand how to use Domains in Oracle SQL Developer Data Modeler. We use version 3.1.3. Before, I used Toad Data Modeler, where domains are just part of your main design.
Oracle Data Modeler has a somewhat different concept.
Let's assume I'm working on two designs, DesignA and DesignB, that include relational models.
DesignA and DesignB should both use domains, but the list of domains in design A is very different from that in design B.
The default domain file is located on the C: drive where SQL Modeler is installed. This is obviously unacceptable, so I need to change the Default System Type directory in the preferences.
And of course I want to have different domain directories for DESIGN A and DESIGN B.
So when I open design A, I change the Default System Type directory to, let's say, x:\AAA. Then I close design A, open design B, and change the Default System Type directory to x:\BBB.
I checked folders AAA and BBB and they have necessary XML files there: defaultdomains.xml, defaultRFDBSSites and so on....
Now questions:
Can I rename defaultdomains.xml to something else, like AAAdomains.xml? Domain administration can edit any domain file with any name, but how can I associate a certain domain file with my design? My wish: when I open my design, the corresponding domain file is opened automatically. Is that possible?
If I open two designs in SQL Modeler and switch between them, the corresponding domain files should change automatically as well. Currently I must not forget to change the Default System Type directory every time I switch models. Is that the only way to handle it?
Thanks
Vitaliy
Hi Vitaliy,
We use version 3.1.3
I recommend always to use the latest version. If you don't want to use beta (DM 4.0 EA is out) you can use DM 3.3.
Otherwise Oracle SQL Developer Data Modeler supports two types of domains:
1) DM installation domains - those in file defaultdomains.xml
2) Design-level domains - they are stored in design directories and are visible to the particular design only. They can be created in the following ways:
2.1 Manually - there is a property "Domains file", and if it's not set to "defaultdomains" the domain will become a design-level domain and will be stored in a file with the provided name (without the .xml extension).
You can later change the file for design-level domains; however, you cannot change the file for a domain already in defaultdomains.xml.
2.2 Using the "types to domains" wizard you can generate design-level domains.
2.3 Design-level domains are created during import of DDL files (controlled in the preferences).
2.4 You can import domains from a specific file with domains using "File > Import > Domains" - you need to rename the source file if it's named defaultdomains.xml, otherwise you'll get the domains as installation domains.
If the list of domains is too long, you can define a list of preferred domains (and/or logical types) in "Preferences > Data Modeler > Model", and you can use the shorter list in the Table/Entity dialog if you check the "Preferred" check box next to the "Type:" combo box.
If I open 2 designs in Sql Modeler and switch between designs then corresponding domain files should be changed automatically as well
If you open two designs in one instance of DM, they will use the same file with default domains, i.e. you'll lose the domains in one of the designs depending on the setting for "system data type directory". You need to go with design-level domains.
Philip -
Sharepoint 2013 Reporting Services & OLAP Cubes for Data Modeling.
I've been using PowerPivot & PowerView in Excel 2013 Pro for some time now so am now eager to get set up with Sharepoint 2013 Reporting Services.
Before setting up Reporting Services I have just one question to resolve.
What are the benefits/differences of using a normal flat table set up, compared to an OLAP cube?
Should I base my Data Model on an OLAP Cube or just Connect to tables in my SQL 2012 database?
I realize that OLAP Cubes aggregate data making it faster to return results, but am unclear if this is needed with Data Modeling for Sharepoint 2013.
Many thanks,
Mike
So yes, PV is an in-memory cube. When data is loaded from the data source, it's cached in memory and stored (compressed) in the Excel file. (Also, same concept for SSAS Tabular mode: loads from source, cached in memory, but also stored (compressed) in data files, in the event that the server reboots, or something similar.)
As far as performance, tabular uses memory but has a shorter load process (no ETL, no cube processing), while OLAP/MDX uses less memory by requiring ETL and cube processing. Technically, tabular uses column compression, so the memory consumption will depend on the type of data (numeric data is GREAT, text not as much). But the decision to use OLAP (MDX) or Tabular (DAX) is just dependent on the type of load and your needs; both platforms CAN do realtime queries (ROLAP in multidimensional, or DirectQuery for tabular), or can use their processed/in-memory cache (MOLAP in multidimensional, xVelocity for tabular) to process queries.
If you have a cube, there's no need to reinvent the wheel (especially since there's no way to convert/import the BIDS/SSDT project from MDX to DAX). If you have SSAS 2012 SP1 CU4 or later, you can connect PV (from Excel OR from within SP) directly to the MDX cube.
Generally, the benefit of PP is for the power users who can build models quickly and easily (without needing to talk to the BI dept). SharePoint lets those people share the reports with a team. If it's worthy of including in an enterprise warehouse, it gets handed off to the BI folks who vet the process and calculations; but by that time, the business has received value from the self-service (Excel) and team (SharePoint) analytics, and the BI team has less effort, since the PP model includes data sources and calculations. Aside from verifying the sources and calculations, BI can just port the effort into the existing enterprise ETL / warehouse / cubes / reports: a shorter dev cycle.
I'll be speaking on this very topic (done so several times already) this weekend in Chicago at SharePoint Saturday!
http://www.spschicagosuburbs.com/Pages/Sessions.aspx
Scott Brickey
MCTS, MCPD, MCITP
www.sbrickey.com
Strategic Data Systems - for all your SharePoint needs -
Best practice on extending the SIEBEL data model
Can anyone point me to a reference document or provide from their experience a simple best practice on extending the SIEBEL data model for business unique data? Basically I am looking for some simple rules - based on either use case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) to tell me if I should extend the tables, leverage the 'x' tables, or do something else.
Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
Thanks
Accepting the given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
6) During upgrades, take the time to re-evalute your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
Good luck!
Jim -
Domains and Logical Types in Data Modeler
Been out of serious design work for a long time. Can someone provide a good pointer to descriptions of Domains and Logical Types as presented in the Data Modeler tool?
For instance I am having trouble tracking the following distinctions:
Domain Logical Type
LongText Varchar
ShortText Char
Text Char
NText NTEXT
NVarchar NVARCHAR
CHAR and VARCHAR are listed as Logical Types but not Domains. There is a TEXT logical type, but ironically it does not correspond to the Text Domain. Varchar2 appears in neither list. I believe I read that the N* domains/types are for international (multi-byte?) characters, but basically I see no pattern here, so I was hoping someone could straighten me out.
Thanks,
Robert Kuropkat
Hi Robert,
Logical types are an abstraction of the native data types in supported databases. You need logical types if you want to import from a database or DDL script (mapping of native to logical is important here) or want to generate a DDL script (mapping of logical to native). You can delete all logical types (only "unknown" is important) and create your own logical types; in that case you have to map them to native database types. If you use only Oracle Database, you can delete the types related to other databases. Of course, you can rename existing logical types if you don't like how they are named.
Domains are based on logical types - you need a logical type in order to have a valid domain definition. The provided domains are just samples. You can delete them; the only important one here is "unknown". You can create two types of domains (from a usage point of view): 1) per installation - common for all designs; 2) per design - they appear only for the design they are defined in. You can also import domains.
Also, domains are automatically created during import of a DDL script - it's a kind of data type aggregation: a domain is created for each used data type.
Best regards,
Philip