Oracle9i Clickstream Intelligence Data Model Reference

Oracle9i Clickstream Intelligence Data Model Reference      Jan 3, 2004 7:05 AM
Hi!
I'm interested in knowing a little more about Oracle9i Clickstream Intelligence.
I've seen in Oracle9i Clickstream Intelligence User Guide and in Oracle9i Clickstream Intelligence Administrator's Guide, a reference to
Oracle9i Clickstream Intelligence Data Model Reference, but I can't seem to find it.
Can anyone tell me where I can find this doc?
Thanks in advance,
Carla Lopes

Hi Carla,
Apologies for the delayed response.
Please try the following link for the Data model reference:
http://download.oracle.com/docs/pdf/A96129_01.pdf
Regards,
Les

Similar Messages

  • Oracle Business Analytics Fusion Edition Data Model Reference

    Dear All,
    I want to make a study of the pre-built warehouse data schema. There is a book named "Oracle Business Analytics Fusion Edition Data Model Reference" referenced from the installation and configuration guide, but I cannot find it on MetaLink or MetaLink3.
    Could anybody tell me how to access the reference or provide me a link to it?
    Any help would be highly appreciated.
    Best regards,
    Zigzag

    Oracle Business Analytics Warehouse Data Model Reference Version 7.9.6 (Doc ID 819373.1)
    Oracle Business Analytics Warehouse Data Model Reference Version 7.9.6 Applies to: Business Intelligence ... Warehouse. This data model reference document
    Doc Type: REFERENCE. Modified: 5/5/2009
    https://metalink3.oracle.com/od/faces/secure/km/DocumentDisplay.jspx?id=819373.1&h=Y

  • Business Intelligence data model?

    Is there a publicly available model for the Business Intelligence objects? My company would like to be able to read against these objects using JDBC calls.
    Dick Dawson


  • Fusion Intelligence (DBI) content or data model

    Hi all,
    At the moment we are implementing Fusion Intelligence (mainly financial). In order to determine to what extent the pre-built reports/dashboards match the reports described in the functional design, we need to know which eBS objects the objects in the presentation layer of OBI EE are mapped to.
    Can anyone help me get a technical model of Fusion Intelligence that is useful for the goal described above? Any suggestions are welcome!
    Thanks,
    job

    Hello
    It uses the same materialised views as DBI. Therefore, if you have the data model for DBI then you have the data model for Fusion BI. The materialised views are created by request sets in EBS, and I am not sure if documentation exists. Maybe try MetaLink.

  • Problem connecting PowerPivot's (Office 2013) Data Model deployed on SharePoint 2013 to data source.

    Hello:
    Our configuration is as follows: SharePoint 2013 is on Server A; SQL Server Analysis Services (SQL 2014) is on DB Server B;
    the SharePoint databases (sp_ ...) and our Data Mart (SQL 2014) are on Server B. All servers run Windows Server 2012.
    On my desktop I built a simple Excel 2013 workbook with a PowerPivot Data Model that imported several tables from our Data Mart (Server B above). Then I created a Power View report. Locally everything works fine.
    But when I uploaded this workbook to our SharePoint PowerPivot Gallery and tried to refresh the data, I got a connection error. It's very long, but the ErrorCode is "rsCannotRetriveModel". The end of the error message is:
    'TemporaryDataSource'.</Message><MoreInformation><Source>Microsoft.AnalysisServices.SPClient</Source>
    <Message>Call to Excel Services returned an error.</Message>
    <MoreInformation><Source></Source><Message>We were unable to refresh one or more data connections in this workbook.
    The following connections failed to refresh:ThisWorkbookDataModel</Message>
    <MoreInformation><Source>Microsoft.Office.Excel.Server.WebServices</Source><Message>
    We were unable to refresh one or more data connections in this workbook. The following connections failed to refresh:ThisWorkbookDataModel</Message></MoreInformation>
    </MoreInformation></MoreInformation></MoreInformation></MoreInformation><Warnings xmlns="http://www.microsoft.com/sql/reportingservices" /></detail>
    Excel Services on SharePoint works fine and refreshes data in other Excel workbooks (with no Data Model) just fine. We are using an unattended account for Excel Services to connect from the SharePoint server to our databases. I found a few references
    on the topic and tried them, but with no luck.
    Please advise!
    Regards
    -Jeff
    Jeff Gorvits

    Hi Jeff,
    Firstly, I need to confirm whether you are refreshing the data connection in the browser, since data refresh is not supported in Office Web Apps. Please refer to this article for more information:
    http://blogs.technet.com/b/excel_services__powerpivot_for_sharepoint_support_blog/archive/2013/01/31/powerpivot-for-sharepoint-browser-refresh-fails-data-refresh-not-supported-in-office-web-apps.aspx
    Given the error "Call to Excel Services returned an error", please verify that the location of the
    data source (for example, an Excel workbook) is registered as a trusted location with Excel Services:
    https://technet.microsoft.com/en-us/library/jj219699(v=office.15).aspx
    Since you are using an unattended account for Excel Services to connect from SharePoint to the databases, I wonder if the issue occurs with unattended refresh. If so, please refer to:
    http://social.technet.microsoft.com/wiki/contents/articles/3870.troubleshoot-powerpivot-data-refresh.aspx#Problems_using_the_Unattended_data_refresh_account
    Regards,
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected] .
    Rebecca Tu
    TechNet Community Support

  • Re: [iPlanet-JATO] Data model(Dataobject in Nd5)

    Sn,
    Computed columns need special attention.
    The migration tool creates a QueryFieldSchema for each Model (DataObject).
    The schema is populated with entries for each of the "data fields" needed by
    that query. For example:
    FIELD_SCHEMA.addFieldDescriptor(
        new QueryFieldDescriptor(
            FIELD_NDNWORDERS_ORDERID,
            COLUMN_NDNWORDERS_ORDERID,
            QUALIFIED_COLUMN_NDNWORDERS_ORDERID,
            Integer.class,
            false,  // isKey
            false,  // isComputedField
            QueryFieldDescriptor.APPLICATION_INSERT_VALUE_SOURCE,
            null,   // insertFormula (truncated in the original post; null assumed)
            QueryFieldDescriptor.ON_EMPTY_VALUE_EXCLUDE,
            null)); // emptyFormula (truncated in the original post; null assumed)
    The query field descriptor describes the "metadata" for a given field:
    QueryFieldDescriptor(
        java.lang.String logicalName, java.lang.String columnName,
        java.lang.String qualifiedColumnName, java.lang.Class fieldClass,
        boolean isKey, boolean isComputedField,
        int insertValueSource, java.lang.String insertFormula,
        int onEmptyValuePolicy, java.lang.String emptyFormula)
    There is also a SQL template generated for select statements:
    static final String SELECT_SQL_TEMPLATE =
        "SELECT ALL ndnwOrders.OrderID, ndnwOrders.CustomerID, ndnwOrders.EmployeeID, " +
        "ndnwOrders.OrderDate, ndnwOrders.RequiredDate, ndnwOrders.ShippedDate, ndnwOrders.ShipVia, " +
        "ndnwOrders.Freight, ndnwOrders.ShipName, ndnwOrders.ShipAddress, ndnwOrders.ShipCity, " +
        "ndnwOrders.ShipRegion, ndnwOrders.ShipPostalCode, ndnwOrders.ShipCountry " +
        "FROM ndnwOrders __WHERE__ ";
    So for computed columns you need to do two things:
    1. You need to adjust the SELECT_SQL_TEMPLATE to reflect your computed
    column instead of the "simplistic" column name that appears there by
    default.
    2. You have to "alias" that column and provide the value of the alias in the
    columnName position of the QueryFieldDescriptor. This is needed so that the
    model will be able to dereference the computed column in the result set.
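    To make those two steps concrete, here is a minimal sketch of an adjusted template for a hypothetical computed column; the expression and the alias TOTAL_CHARGE are invented for illustration:
        SELECT ALL ndnwOrders.OrderID,
                   ndnwOrders.CustomerID,
                   ndnwOrders.Freight,
                   ndnwOrders.Freight * 1.1 AS TOTAL_CHARGE  -- computed column, aliased
        FROM ndnwOrders __WHERE__
    The alias (TOTAL_CHARGE here) is then what you supply in the columnName position of that field's QueryFieldDescriptor, so the model can dereference the computed value in the result set.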
    ----- Original Message -----
    From: "SNR R" <snr@s...>
    Sent: Thursday, July 12, 2001 8:30 AM
    Subject: [iPlanet-JATO] Data model(Dataobject in Nd5)
    Hi
    We are migrating an ND5 application to iPlanet. When we execute the data model it gives an "invalid column name" error, but when we check the data model's SQL statement, it works fine in SQL*Plus. This data model has a computed column.
    Could you please tell me how to resolve this problem?
    Thanks
    Sn
    ----- Original Message -----
    From: "Todd Fast" <toddwork@c...>
    Date: Monday, July 9, 2001 10:42 pm
    Subject: Re: [iPlanet-JATO] COULD YOU PLS HELP ME TO FIX REPEATED OBJECT
    ISSUE.
    This code should generally be in the TiledView (it may appear in both the ViewBean and the TiledView after migration).
    > CSpRepeated repeated = (CSpRepeated) event.getSource();
    // Assuming the code is in the TiledView:
    TiledView repeated = this;
    > CSpStaticText stFieldName = (CSpStaticText) repeated.getDisplayField("stFieldName");
    StaticTextField stFieldName = (StaticTextField) getDisplayField("stFieldName");
    - or -
    StaticTextField stFieldName = getStFieldName();
    > int index = event.getRowIndex();
    In what event do you want to obtain the index? For the most part, you can just call TiledView.getTileIndex(). If this is a request event handling method, the row index is part of the event signature.
    IMPORTANT: If the original code cached references to display fields as page instance variables for use in the class's events, you should NOT do the same in JATO. This was an ND pattern that is unnecessary and inefficient in JATO. Instead, you should just use the generated accessors or the getDisplayField()/getChild() methods to obtain a reference to a display field as needed. Mike Frisino has elaborated on this point in the past; please refer to his emails.
    Todd
    [email protected]
    [email protected]


  • External Data Refresh Failed. We cannot locate a server to load the workbook Data Model. ThisWorkBookDataModel

    Hi All,
    I have been trying to fix this for days now. I have tried the solutions in many articles, but to no avail. So while the error message is something you may have seen many times, I just can't find a solution in my case.
    This is the error:
    And in text just in case the image isn't viewable:
    "External Data Refresh Failed. We cannot locate a server to load the workbook Data Model. We were unable to refresh one or more data connections in this workbook. The following connections failed to refresh: ThisWorkBookDataModel."
    What is worse is I have checked the ULS (SharePoint Trace Logs), the Event Viewer Logs and the OWA Logs and I cannot find a specific error that would help pin point the problem.
    Excel Workbook
    So what am I doing? I have an Excel 2013 workbook and I create an "SQL Server" connection to the AdventureWorksDW database, add a pivot table and a pivot chart, test it in Excel, and all works fine.
    I save the Excel workbook to SharePoint 2013 and then select "Data" then "Refresh All Connections" and then I get the error in the picture above.
    Even more puzzling: I have another Excel workbook that also has pivot tables and pivot charts against the AdventureWorksDW2012Multidimensional cube database in "SQL Analysis Services", and this works fine. Hmmm.
    My Environment
    My environment is Windows 2008 R2 Server, SharePoint 2013 with the April Service Pack1 and a separate server with OWA2013 SP1. It has an SQL Server 2008 R2 database which has been upgraded to SQL Server 2012.
    Data Model Settings
    In Excel Services this is set to my server name which is "server-name". As I do not have instances all I can enter is the server name. As this works everywhere else including the workbook outside of SharePoint I do not think this is the problem.
    But I could be wrong.
    Unattended Account
    I have set this up for the PowerPivot Services App and Excel Services App.
    ODC Connections in Excel
    I have tried all 3 authentication modes, Windows, Secure Store ID and "None" which is the unattended account. I have not tried the other connection types, should I?
    Not in WOPI
    I am not in WOPI mode.
    AD Accounts
    I have added permissions in the SharePoint Services and SQL Server, and as they work in Excel outside of SharePoint, I do not think it is a permissions issue. I could be wrong of course, but the problem is in one of SharePoint, OWA, AD,
    SQL Server, Excel, and Windows Server.
    Isolate the Error
    Below is a list of errors I think are relevant but they do not tell me much. The SharePoint logs are not really giving me an error that tells me what to do and where to do it, or even why it cannot refresh, (perhaps not noticeable by the untrained eye).
    Problem with SQL Server Not Analysis Services
    So my cube database in analysis services works fine in SharePoint/OWA but not the databases in sql server. This is my best clue but I have no idea what it means. Why would it work with an Analysis Services connection but not an "SQL Server" connection?
    It Works Outside of SharePoint
    If I run the excel worksheet outside of SharePoint all works fine. When inside OWA this is where the refresh error occurs.
    Errors from Event Viewer on SharePoint Server using ULS Viewer
    "Failed to create an external connection or execute a query. Provider message: There are no servers available or actively being initialized., ConnectionName: , Workbook:"
    "Refresh failed for 'ThisWorkbookDataModel' in the workbook 'http://server...'. [Session: 1.V22.26itT0lx8piNFeqtuGVhN214.5.en-US5.en-US36.98c0e158-9113-46e9-850e-edda81d9ed1c1.A1.N User: 0#.w|ad\testuser1]"
    And an error in the ULS under the "Data Model" category:
    "--> Check Deployment Mode (server-name): Fail (Expected: SharePoint, Actual: Multidimensional)."
    This last error, as it turned out, defined the problem concisely, although I was yet to work out what it meant in some detail.

    I finally solved this myself (or should I say with the help of several key articles).
    The refresh did not work because the database was not in "SharePoint Mode". Yes, SQL Server has modes, 3 of them in fact.
    If you installed SharePoint to the default SQL instance which would be called <servername> then you cannot use this default instance for Excel 2013 workbooks in OWA 2013 because the refresh only works if the database is in SharePoint mode.
    So what are these 3 modes? The Deployment Mode property in the msmdsrv.ini file has them as:
    0 = Multidimensional mode (the default whenever you install SQL Server normally)
    1 = PowerPivot for SharePoint mode
    2 = Tabular mode
    How do you know what mode you are in? That's easy: open SQL Server Management Studio and connect to all your SQL database engine instances (ignore Analysis Services or SSRS as they are not database engines). If you only have the default instance then that is almost
    definitely in Multidimensional mode, which is the default and what SharePoint installs its databases to.
    You must have an instance called <servername>\POWERPIVOT. This instance is the "SharePoint mode" needed, and the default instance name when you install a SQL instance in this mode.
    If you don't see <servername>\POWERPIVOT in SQL Server then you are not in "SharePoint mode". It is more accurate to say that you do not have an instance that is in SharePoint mode. This is because you cannot simply switch modes on a SQL server.
    You have to install a new instance in the required mode; that's the only way.
    That's easy enough. Load up the SQL Server setup CD and run setup. Install a brand new instance and select "SQL Server PowerPivot for SharePoint" when you get there in the wizard.
    Now you will have the default instance that stores all the SharePoint databases and that is in mode 0, and a new instance called <servername>\POWERPIVOT that is in mode 1. The "<servername>\POWERPIVOT" instance connection is what you
    will use for Excel 2013 when rendering in OWA 2013.
    You also need to ensure OWA 2013 is not in WOPI mode for Excel worksheets. See the last link below for more information about WOPI.
    Next you should go to the Excel Services App in Central Administration (CA), click Data Model Settings, and add the <servername>\POWERPIVOT instance.
    Then you have to either turn off the firewall on the SQL server machine, or create an inbound rule on the Windows firewall to open the TCP port for the <servername>\POWERPIVOT instance:
    1. Start Task Manager and then click Services to get the PID of the MSOLAP$InstanceName.
    2. Run netstat -ao -p TCP from the command line to view the TCP port information for that PID.
    Finally, you can now create Excel 2013 workbooks that run in OWA without refresh errors, as long as you are connecting to the <servername>\POWERPIVOT instance. Hooray.
    REFERENCES
    Look for the string "There are no servers available or actively being initialized" in this article:
    http://blogs.msdn.com/b/analysisservices/archive/2012/08/02/verifying-the-excel-services-configuration-for-powerpivot-in-sharepoint-2013.aspx
    Determine the server mode:
    http://msdn.microsoft.com/en-au/library/gg471594(v=sql.110).aspx
    Install the SharePoint PowerPivot instance (aka SharePoint mode)
    http://msdn.microsoft.com/en-au/library/eec38696-5e26-46fa-bc83-aa776f470ce8(v=sql.110)
    Open the port for the new SQL instance:
    http://msdn.microsoft.com/en-us/library/ms174937(v=sql.110).aspx
    Turn Off WOPI for Excel OWA
    http://blogs.technet.com/b/excel_services__powerpivot_for_sharepoint_support_blog/archive/2013/01/31/powerpivot-for-sharepoint-browser-refresh-fails-data-refresh-not-supported-in-office-web-apps.aspx

  • Basic questions on data modeling

    Hi experts,
    I have some basic questions regarding data modeling within MDM. I understand the available table types and the concept of lookup fields. I know that the MDM data modeling concept is different from the relational concept. But having a strong database background, my first step was to design a relational data model which I would like to transfer to an MDM repository. Unfortunately, I didn't find good information material on this. So here are some questions; maybe you can help me:
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    2) Has a record always be unique in MDM repositories (e.g. should we use Auto ID's in every table or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented with marking these fields as unique?
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    Thank you for your answers.
    Regards, bd

    Yes, you are correct. It is quite difficult to map a relational database to an MDM one. But then again, MDM is not 'just' a database; it holds much more 'master' information compared to any relational DB.
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    Yes. Here you need to use MV (multivalued) lookup tables, or you can also try Qualifier tables if it gets more complex.
    2) Has a record always be unique in MDM repositories (e.g. should we use Auto ID's in every table or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented with marking these fields as unique?
    The concept of uniqueness differs here in that you also have something called Display Fields (DF). A combination of DFs can also be treated as a unique key. For instance, while importing records, if you select these DFs as a combination, you will eliminate any possible duplicates based on that combination. Auto ID is one way to have a unique ID once a record is within MDM, while you use UFs or DFs to eliminate any possible duplicates at the import level.
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    Hmm... good one. Referential integrity. What I assume you are asking is that if you have relationships between tables, removing a record will not be possible as it is a foreign key for some other record. MDM does not work that way, as relationships within MDM are physical and not conceptual. For instance, a material can have components. If the material does not exist, then any relationship to its components is not worthwhile to maintain, so the relationship is eliminated. In the relational model, relationships are more conceptual. Hence, with MDM's usage of lookups and the main table, you do not need to maintain these kinds of relationships on your own.
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    No. It is not possible to convert main table. There is only one main table and it cannot be changed.
    I went for the same option but it did not work. What I suggest is to look at your legacy systems one by one and see which fields can in general be classified as Master, Reference, or Transactional; you will start getting answers immediately.

  • Data modeling dilemma for EAV oriented problems in Data Modeler

    Hello,
    We are dealing with an EAV (entity-attribute-value) oriented data structure.
    It looks like this:
    Entity( Entity_Id number, Entity_Name varchar2, Entity_Desc varchar2 )
    A table that lists the attributes and some metadata on their characteristics:
    Type_Of_Attribute( Attr_Id varchar2, Type_Of_Value TOV_Domain, Unit_Of_Value varchar2, Min_Value variant_type, Max_Value variant_type )
    Then we have the actual data. An entity is described by a set of attributes and their values, so in addition to the attributes in row form in Entity there are additional attributes in columnar form.
    Because of sparsity...
    However, in the columnar form the challenge is the type of the values, i.e. the domains of the attributes.
    For example:
    weight_of_person is a number between min_number and max_number.
    But another parameter, for example mood_of_person, is a string from a domain consisting of a set of strings/descriptions.
    Another possibility could be a reference to some table of values (key-value), which could be modelled as a one-to-many relationship if put in the Entity table in row form.
    But since this attribute relates only to a few instances, or is very sparse, and to preserve the table form, it was put in columnar form:
    Attribute_Of_Entity( Entity_Id, Attr_Id, Value
                         -- when not normalized, could also add a unit like kg or lbs or inch or piece )
    My question is about good/successful practice for modelling the VALUE column in Attribute_Of_Entity.
    I read somewhere that some databases have a feature of a so-called variant type.
    I guess the objective is to model in such a way that the implementation of this model is as easy as possible regarding issues like:
    a) validating the column-oriented form when entering or updating values
    b) consolidating queries when reporting
    c) aggregating data when grouping, and preventing aggregation of non-comparable data.
    So should the value be implemented as a structure/complex type with methods, or is there another feature supporting variability of the data along the same column of a table? In other words, a logical design that does not cause too much complexity in the relational design and table implementation, with procedures handled as much as possible at the database level?
    Thank you in advance for comments, experiences, and suggestions.

    Hello,
    EAV is rarely a good solution. Tell us about your business problem and we might be able to show you solutions that are performant/easier to maintain/...
    https://www.simple-talk.com/opinion/opinion-pieces/bad-carma/
    Regards
    Marcus
    BTW: this question should not be asked in the forum space for the tool SQL Data Modeler. Instead ask in SQL and PL/SQL or General Database Discussions
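    For reference, if the EAV form is kept despite the advice above, a common way to address points (a) and (c) from the question is to split VALUE into typed columns guarded by a check constraint. A minimal sketch, assuming Oracle; column names and sizes are illustrative:
        -- Typed value columns instead of a single variant VALUE column;
        -- the check constraint keeps exactly one of them populated per row.
        CREATE TABLE Attribute_Of_Entity (
          Entity_Id    NUMBER        NOT NULL REFERENCES Entity (Entity_Id),
          Attr_Id      VARCHAR2(30)  NOT NULL REFERENCES Type_Of_Attribute (Attr_Id),
          Value_Number NUMBER,
          Value_Text   VARCHAR2(400),
          Value_Date   DATE,
          Unit         VARCHAR2(20),
          CONSTRAINT aoe_pk PRIMARY KEY (Entity_Id, Attr_Id),
          CONSTRAINT aoe_one_type CHECK (
            (CASE WHEN Value_Number IS NOT NULL THEN 1 ELSE 0 END +
             CASE WHEN Value_Text   IS NOT NULL THEN 1 ELSE 0 END +
             CASE WHEN Value_Date   IS NOT NULL THEN 1 ELSE 0 END) = 1)
        );
    Reporting and aggregation queries then filter on Type_Of_Value and read only the matching typed column, which keeps non-comparable values out of the same aggregate.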

  • Reporting Services (SSRS) connecting to a Tabular data model: losing formatting

    Hi all,
    we are developing a SSAS Tabular data model to enable users to connect to it via:
    - excel pivot table
    - power view reports
    - reporting services reports
    We invested a lot of time in correctly setting the format of the different columns directly in the data model, in order to have a consistent format layout in all the reporting clients: for example, currency columns with no decimals and a thousands
    separator, percentages with just 1 decimal and the % symbol, dates in short date format, and so on.
    The same with KPIs: for every KPI we decided in the data model which representation to use.
    When using Excel pivot tables or Power View reports, everything works as expected.
    When using Reporting Services instead, all the formatting chosen in the data model is gone, and so are the KPI representations. Why? Is there a way to have them in SSRS as well?
    Thank you,
    Roberto

    Hi jdb09,
    Your issue can be caused by many factors; one possibility is that the connection string is not correct, so please check to make sure it is correct.
    Please check the detailed information below to make sure you have the correct connection and configuration first:
    Connecting and Configuring SQL Server Parallel Data Warehouse (PDW) Clients
    Similar threads for your reference:
    https://support.microsoft.com/kb/2750044/en-us?wa=wsignin1.0
    http://communities.bentley.com/products/assetwise/assetwise_platform/w/wiki/ssrs-report-server-mssqlserver-cannot-load-the-sqlpdw-extension
    If you still have any problem, please feel free to ask.
    Regards
    Vicky Liu
    Vicky Liu
    TechNet Community Support

  • Best practice on extending the SIEBEL data model

    Can anyone point me to a reference document or provide from their experience a simple best practice on extending the SIEBEL data model for business unique data? Basically I am looking for some simple rules - based on either use case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) to tell me if I should extend the tables, leverage the 'x' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Accepting the given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
    2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
    6) During upgrades, take the time to re-evaluate your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim

  • Dynamic table in Oracle Reports data model using a lexical parameter gives error ORA-00936: missing expression

    Hi ,
    I am using Oracle Reports 10g
    and trying to create a report with a dynamic table:
    SELECT &COL1, &COL2
    FROM &TAB
    If I put this in the data model it gives the error below:
    ORA-00936: missing expression
    ==> from
    Can anybody advise how to solve this issue?
    Regards,
    Brajesh

    Look in the Reports Builder Help:
    If you want to use lexical references in your SELECT clause, you should create a separate lexical reference for each column you will substitute. In addition, you should assign an alias to each lexical reference.
    Does adding the column alias solve the problem?
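    A minimal sketch of what the data model query looks like with the aliases added (the alias names col1 and col2 are illustrative):
        SELECT &COL1 col1, &COL2 col2
        FROM &TAB
    With one lexical reference per column and an alias on each, the report's columns keep stable names regardless of which physical columns the parameters resolve to at runtime.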

  • Mapping data model to java classes and Interface.

    Need help in mapping my data model into Java classes.
    Here are some of the details:
    Table, Poll:
    PollID int
    PollName varchar
    BusinessUnit varchar
    DisplaySchemeID int // reference to DisplayScheme table.
    Table, DisplayScheme
    DisplaySchemeID
    DisplaySchemeName
    etc
    Table, URL
    UrlID
    UrlName
    UrlDesc
    Table, PollURL
    PollURLID
    PollID
    UrlID
    PublishDate
    etc
    Table, Tag
    TagID
    TagName
    PollID
    So, a Poll is associated with a DisplayScheme.
    A Poll can have many URLs, and a URL can be associated with many Polls, mapped in the PollURL table.
    And the same is the situation with tags.
    Essentially I want to learn how to do one-to-many mapping and many-to-many mapping.
    Also, I plan to use iBATIS for OR mapping, so I need a parameter map for creating a Poll. The form submitted to create a poll will have:
    - PollName
    - URLs (multiple)
    - one display scheme
    - Tags (many)
    So if you can show how to write a DAO createPoll method, that would be great.
    Not sure if I am asking a lot, but this would be a great example to move forward with.

    Below is a sample DAO method.
    For your case, you have to create your own PollBean with getters and setters and just pass the object to the method:
    public int create(Connection con, ProjectBean projectBean) throws SQLException {
        int result = 0;
        // Three columns, so three bind placeholders
        String insertQuery = " INSERT INTO" +
                " M_PROJECT(" +
                " PROJECT_ID," +
                " PROJECT_NAME," +
                " PROJECT_DESCRIPTION)" +
                " VALUES(?,?,?)";
        initPrepareStmt(con, insertQuery);
        setString(projectBean.getProjectID());
        setString(projectBean.getProjectName());
        if (projectBean.getProjectDescription().equals("")) {
            setString(" ");
        } else {
            setString(projectBean.getProjectDescription());
        }
        result = executeUpdate();
        return result;
    }
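    For the many-to-many part of the original question, here is a minimal sketch of the SQL statements a createPoll DAO method would typically issue, using the tables from the question (the bind variable names and key generation are assumptions):
        -- 1. One row for the poll itself
        INSERT INTO Poll (PollID, PollName, BusinessUnit, DisplaySchemeID)
        VALUES (:pollId, :pollName, :businessUnit, :displaySchemeId);

        -- 2. One row per selected URL in the PollURL join table (many-to-many)
        INSERT INTO PollURL (PollURLID, PollID, UrlID, PublishDate)
        VALUES (:pollUrlId, :pollId, :urlId, :publishDate);

        -- 3. One row per tag; Tag references PollID directly (one-to-many)
        INSERT INTO Tag (TagID, TagName, PollID)
        VALUES (:tagId, :tagName, :pollId);
    In iBATIS these become three mapped insert statements; createPoll inserts the Poll row first, then loops over the submitted URLs and tags, reusing the generated PollID for each join/child row.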

  • Problem with Data Model and Analysis View

    I created an analysis and then created a BI Publisher data model using that object.
    When I try to generate sample XML with a number of rows, BI Publisher returns empty XML (only the DATA_DS tags, but no data). To bypass this problem I made an XML file by hand, which allowed me to create and design reports, but when I try to view the reports I get the message "No Data Found".
    So I checked the analysis, and everything appears to be fine; in the Results tab it shows me a complete table with the data I was looking to use.
    So I tried to reproduce the error, and when I try to create the XML for the data model I find these two errors in the logs:
    [root@server ~]# [2013-07-17T16:37:22.844-04:00] [bi_server1] [WARNING] [] [oracle.xdo] [tid: 2361] [userId: <anonymous>] [ecid: ad7bb40a72b553c0:-3e5f91c5:13ecd037992:-8000-00000000000e2b34,0] [APP: bipublisher#11.1.1] Incomplete xslt._XDONFSEPARATORS: decimal separator: null, grouping separator: null
    [2013-07-17T16:37:26.828-04:00] [bi_server1] [WARNING] [] [oracle.xdo] [tid: 2361] [userId: <anonymous>] [ecid: ad7bb40a72b553c0:-3e5f91c5:13ecd037992:-8000-00000000000e2b3a,0] [APP: bipublisher#11.1.1] Incomplete xslt._XDONFSEPARATORS: decimal separator: null, grouping separator: null
    [2013-07-17T16:37:26.865-04:00] [bi_server1] [WARNING] [] [oracle.xdo] [tid: 2361] [userId: <anonymous>] [ecid: ad7bb40a72b553c0:-3e5f91c5:13ecd037992:-8000-00000000000e2b3a,0] [APP: bipublisher#11.1.1] oracle.xdo.servlet.CreateException: Path: /FOLDER/MODEL.xdm is not pointing to a report. Actual type: ReportItem, sub-type: DataModel[[
            at oracle.xdo.servlet.ReportException.fillInStackTrace(ReportException.java:124)
            at java.lang.Throwable.<init>(Throwable.java:196)
            at java.lang.Exception.<init>(Exception.java:41)
            at oracle.xdo.servlet.ReportException.<init>(ReportException.java:36)
            at oracle.xdo.servlet.CreateException.<init>(CreateException.java:18)
            at oracle.xdo.servlet.ReportRepository.getReport(ReportRepository.java:104)
            at oracle.xdo.servlet.ReportRepository.getReport(ReportRepository.java:128)
            at oracle.xdo.servlet.dataengine.DataProcessorFactory.getDataModelPath(DataProcessorFactory.java:207)
            at oracle.xdo.servlet.dataengine.DataProcessorFactory.isSemanticLayerDataModel(DataProcessorFactory.java:99)
            at oracle.xdo.servlet.dataengine.DataProcessorFactory.isSemanticLayerDataModel(DataProcessorFactory.java:78)
            at oracle.xdo.servlet.ReportModelContextImpl.getReportXMLData(ReportModelContextImpl.java:157)
            at oracle.xdo.servlet.CoreProcessor.process(CoreProcessor.java:346)
            at oracle.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:101)
            at oracle.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:1074)
            at oracle.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:639)
            at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:492)
            at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:462)
            at oracle.xdo.servlet.XDOServlet.doGet(XDOServlet.java:280)
            at oracle.xdo.servlet.XDOServlet.doPost(XDOServlet.java:313)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
            at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
            at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
            at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.xdo.servlet.metadata.track.MostRecentFilter.doFilter(MostRecentFilter.java:64)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:125)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.xdo.servlet.init.InitCheckingFilter.doFilter(InitCheckingFilter.java:63)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
            at java.security.AccessController.doPrivileged(Native Method)
            at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:315)
            at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:442)
            at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
            at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
            at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:139)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
            at java.security.AccessController.doPrivileged(Native Method)
            at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:315)
            at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:442)
            at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
            at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
            at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3715)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
            at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
            at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
            at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
            at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    And when I try to view the report that uses the analysis, I get these two warnings in the logs:
    [2013-07-17T16:58:01.615-04:00] [bi_server1] [WARNING] [] [oracle.xdo] [tid: 57] [userId: <anonymous>] [ecid: ad7bb40a72b553c0:-3e5f91c5:13ecd037992:-8000-00000000000e2d7c,0] [APP: bipublisher#11.1.1] Incomplete xslt._XDONFSEPARATORS: decimal separator: null, grouping separator: null
    [2013-07-17T16:58:02.034-04:00] [bi_server1] [WARNING] [] [oracle.xdo] [tid: 57] [userId: <anonymous>] [ecid: ad7bb40a72b553c0:-3e5f91c5:13ecd037992:-8000-00000000000e2d84,0] [APP: bipublisher#11.1.1] Incomplete xslt._XDONFSEPARATORS: decimal separator: null, grouping separator: null
    As I understand it, there is a reference to a null value, but I can't find which column of the analysis has the problem.
    Any ideas on how to solve or debug this?
    Thanks

    I followed your instructions and it works fine. I can create the XML and the reports show data.
    So I now know that there is no problem with the data or with the BI Publisher installation, but I still don't know what the problem is with the analysis view that fails.
    Any idea how to debug it?
    Thanks.

  • SQL Data Modeler -- how to create a 1-to-1 relationship foreign key?

    Hi guys,
    I've got 2 tables:
    table 1 - CFR
    CFR_ID = primary key
    table 2 - USER_PLAN
    USER_ID = primary key
    PLAN_ID,
    CFR_ID = foreign key reference (table 1)
    The business flow goes like this:
    The CFR table contains all records/transactions of a particular user/plan. Every time a new transaction occurs for a plan/user, a new CFR_ID / row is generated.
    After that, the newly generated CFR_ID for the new row is written to the CFR_ID column in USER_PLAN.
    Thus, there is always a 1-to-1 relationship between the two tables regardless of how many CFR_IDs are generated for a particular USER_PLAN, as the CFR_ID in the USER_PLAN table is always the latest one generated inside the CFR table.
    However, in Data Modeler I am unable to create such a foreign key relationship. Any idea how I can create a 1-to-1 foreign key relationship, or is there no such way?
    Thanks and Best Regards,
    Noob

    Hi philips,
    Thanks for the wonderful reply.
    Just to double-confirm with you:
    even if I have set a unique constraint on CFR_ID (the foreign key column), in the relational model the relationship on the foreign key still shows as a 1:M relationship, right?
    Just that a character 'U' will appear beside the CFR_ID column.
    The diagram, however, still shows a 1:M relationship.
    Is this correct?
    Regards,
    Noob
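    For reference, a minimal DDL sketch of how this is usually enforced at the physical level (column types are assumptions; the key point is the UNIQUE constraint on the foreign key column):
        CREATE TABLE CFR (
          CFR_ID NUMBER PRIMARY KEY
        );

        CREATE TABLE USER_PLAN (
          USER_ID NUMBER PRIMARY KEY,
          PLAN_ID NUMBER,
          CFR_ID  NUMBER,
          CONSTRAINT user_plan_cfr_fk FOREIGN KEY (CFR_ID) REFERENCES CFR (CFR_ID),
          CONSTRAINT user_plan_cfr_uk UNIQUE (CFR_ID)  -- restricts the 1:M to at most 1:1
        );
    Data Modeler still draws the relationship as 1:M with a 'U' marker on CFR_ID, as described above, but the unique constraint is what prevents two USER_PLAN rows from pointing at the same CFR row.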
