Project universe measure as "Sum" instead of database delegated

All,
We are on BO XI 3.1. I have a cube universe which is created automatically on top of an MSAS 2008 cube.
What we find is that, by default, all measures are set to "Database Delegated" for aggregation projection. There is no way to change this during universe creation.
These are my questions.
1. I am wondering if there is an automatic way to change this option for all measures to "Sum"?
2. Could this be done using designer SDK?
3. Is there a performance difference for a cube universe between "Database Delegated" and "Sum"?
Regards

Hi,
There is an SDK application that does the job, including a lot of additional features. You can download it [here|https://wiki.wdf.sap.corp/wiki/download/attachments/530876999/Designer_SDK_Applis_XI3.1_SP2.zip].
There are also more SDK applications available [here|https://wiki.wdf.sap.corp/wiki/display/globalregions/AddvaluetoyourUniverseswithDesignerSDKapplications].
Having delegated measures means that the measures cannot be aggregated locally in Web Intelligence. Here are some examples of measures that MUST be delegated: ratio, weight, average, percentage, etc.
In fact, all non-additive measures MUST be delegated.
Concerning performance: each time the user needs an aggregate in the document that involves a delegated measure, an additional query is added to the existing one (grouping sets) and the whole query is sent to the database.
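As an illustration, here is a hedged sketch of the kind of SQL generated when a report needs an extra aggregation level for a delegated measure (table and column names are invented for the example):

SELECT year, product, SUM(revenue) AS revenue
FROM sales_fact
GROUP BY GROUPING SETS (
    (year, product),  -- the detail level of the original query
    (year)            -- the extra aggregate requested for the delegated measure
);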
That might have some performance impact in some cases.
But I say it again: delegated measures are mandatory for non-additive measures, and that is most of the time the case for OLAP sources.
Regards
Didier

Similar Messages

  • Measure aggregate sum not summing

    Hi
    I have many universes, and for all universes I have set measures to sum, where appropriate. However, this seems to work sometimes, and not others. I can't work out when or why it does or doesn't! Does anyone have experience of this?
    Thanks

    Hi Louise,
    Let me try to explain in a bit more detail...
    The a) aggregation is done by the group by clause in the actual select that is sent to the database,
    so it differs depending on what the user selects when building the query.
    But it will do this using the aggregation function you specified in the select.
    So if you have:
    - a measure object Sales defined as sum(sales.amount)
    - dimension objects Country (country.id) and Store (store.id)
    then the select for a query containing the two objects Sales and Country would have a group by on country.id.
    The select for a query containing all three objects would have a group by on country.id, store.id.
    You can verify the actual group by clause by looking at the generated SQL in the query builder.
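    To make that concrete, here is a sketch of the two generated selects; the join columns are assumptions, not taken from an actual universe:

    -- Query with the objects Sales and Country: aggregation a) collapses to the country level
    SELECT country.id, SUM(sales.amount) AS sales
    FROM sales
    JOIN country ON country.id = sales.country_id
    GROUP BY country.id;

    -- Query with Sales, Country and Store: the group by becomes finer
    SELECT country.id, store.id, SUM(sales.amount) AS sales
    FROM sales
    JOIN country ON country.id = sales.country_id
    JOIN store ON store.id = sales.store_id
    GROUP BY country.id, store.id;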
    The b) aggregation is done according to what is specified in the advanced tab.
    You see this tab when you are in Designer and open the object properties dialogue.
    It's the second tab in it (the first has the select and where boxes; the second has the aggregation in the top part and the list of values in the bottom part).
    So say you selected all three objects and then created a table with only country and sales, the sales figures (that were queried per store, per country) will be aggregated according to that 2nd tab to the country level.
    Note that when you use an aggregation function in the select, Designer will make a best guess as to the aggregation wanted and pre-fill it. So for the sum() function, it will also use sum as the aggregation in the 2nd tab.
    Hope this helps
    P.S. Looking at the actual generated SQL will also help you determine if more than one select is done to fulfill your data provider. If you have more than one, it is only acceptable if the differences are in measures only. If not, you will get more than one cube back from your query, without any control over how they are synchronized. In that case you can try multicube to get the results you would expect (i.e. seen as one big cube instead of more than one).

  • What are the Benefits of Synchronizing/Replicating Project Online with an On-Premise SQL Database

    Hi
    What are some of the reasons and/or benefits that one may want to consider with regard to setting up and synchronizing Project Online with an on-premise SQL database?
    Spiro Theopoulos PMP, MCITP. Montreal, QC (Canada)

    Spiro,
    Reporting via Project Online is limited to OData, and writing queries with OData starts getting complex at a certain point.
    So, if your organization is using third-party BI tools, or wants to use OLAP cubes, or in general wants to use straight SQL queries instead of OData, it is generally beneficial to have the data in house for reporting.
    So in general, using the SQL synchronization will bring the data into "your" organization, as opposed to it sitting with Microsoft, so you get a LOT more flexibility with reporting.
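    For example, once the data is synchronized to an on-premise database, a report can be a plain SQL query. The sketch below assumes the standard Project Server reporting view name, so verify it against your own schema:

    -- List projects still in flight, straight from the reporting schema
    SELECT ProjectName, ProjectStartDate, ProjectFinishDate
    FROM dbo.MSP_EpmProject_UserView
    WHERE ProjectFinishDate >= GETDATE()
    ORDER BY ProjectStartDate;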
    Hope this helps.
    Cheers,
    Prasanna Adavi, Project MVP

  • Create Project Web App Instance - Failed to find Project Site object. Are there any database mounted?

    Hi
    New Installation. Sharepoint 2013, Project Server 2013 and SQL Server 2008 SP1 (another box).
    When I try to create the Project Web App instance, I get:
    Provisioning 'PWA': Post provisioning setup failed. Exception 'Microsoft.SharePoint.SPException: Failed to find Project Site object. Are there any database mounted?
    Followed by:
    Failed to provision site PWA with error: Microsoft.SharePoint.SPException: Failed to find Project Site object. Are there any database mounted?
    Already tried reinstalling the whole farm to no avail. Farm Admin account is used to do everything within this process.
    Thanks in anticipation.

    Hi
    I got the same error when provisioning the PWA
    Log Name:      Application
    Source:        Microsoft-SharePoint Products-Project Server
    Date:          8/29/2013 11:58:12 AM
    Event ID:      6966
    Task Category: Provisioning
    Level:         Error
    Keywords:     
    User:          abc\administrator
    Computer:      mycomputer
    Description:
    Provisioning 'PWA': Post provisioning setup failed. Exception 'Microsoft.SharePoint.SPException: Failed to find Project Site object. Are there any database mounted?
       at Microsoft.Office.Project.Server.Administration.ProvisionFeatureEventHandler.FeatureActivated(SPFeatureReceiverProperties properties)
       at Microsoft.SharePoint.SPFeature.DoActivationCallout(Boolean fActivate, Boolean fForce)
       at Microsoft.SharePoint.SPFeature.Activate(SPSite siteParent, SPWeb webParent, SPFeaturePropertyCollection props, SPFeatureActivateFlags activateFlags, Boolean fForce)
       at Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean fMarkOnly)
       at Microsoft.SharePoint.SPFeatureCollection.AddInternalWithName(Guid featureId, Int32 compatibilityLevel, String featureName, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean
    fMarkOnly, SPFeatureDefinitionScope featdefScope)
       at Microsoft.SharePoint.SPFeatureCollection.AddInternal(Guid featureId, Version version, SPFeaturePropertyCollection properties, Boolean force, Boolean fMarkOnly, SPFeatureDefinitionScope featdefScope)
       at Microsoft.SharePoint.SPFeatureCollection.Add(Guid featureId)
       at Microsoft.Office.Project.Server.Administration.PsiServiceApplication.EnsureProvisioningFeature(SPSite site)
       at Microsoft.Office.Project.Server.Administration.PsiServiceApplication.CreateSite(ProjectProvisionSettings provset)'.
    Later I found that the service application where I was trying to provision the PWA was not associated with the web application.

  • Regular measures (measures with SUM function) are not working alongside distinct count measures

    Hi All,
    I am creating a cube that needs to have a distinct count measure and a sum measure. If I create only the sum measure, it works fine. If I create both measures and process the cube, only the distinct count measure is populated; the sum measure shows all blank values. I am using 2008 R2 and creating 2 different measure groups for the measures. After I include the distinct count measure, the sum measure becomes null. Can you please help me with this? I have been breaking my head on this for the last 2 days. Thank you

    Ramesh, measures are affected by the context of the queries that contain them. For example, in some cases you can get a different total count of something from two different queries, because the context of the first query is different from
    the context of the second one... keep this in mind.
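    A quick relational sketch of the same effect (table and column names are hypothetical): distinct counts are not additive, so their totals depend on the grouping context.

    -- Distinct customers per store: summing these per-store counts overstates
    -- the true total, because one customer can shop in several stores.
    SELECT store_id, COUNT(DISTINCT customer_id) AS customers
    FROM orders
    GROUP BY store_id;

    -- Distinct customers overall: usually smaller than the sum of the rows above.
    SELECT COUNT(DISTINCT customer_id) AS customers
    FROM orders;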
    Now, I've noticed that you are "creating 2 different measure GROUPS for both measures", and I guess that you are trying to view those two measures (which are from different measure groups) at the same time and in the same report.
    Considering the info in the first point, and as you created the measures in two different measure groups, I'm not sure but I guess that this is the problem. I suggest you create those two measures in the same measure group, then try to view them again and let's see.
    If the previous point didn't solve it, please post the expressions you are using to create the measures; maybe this will help in finding the problem.

  • When / why use XML to store data instead of a database table?

    Hi All,
    I still don't use XML much in applications and don't know much about how it is used.
    I read here and there about storing data as XML instead of in database tables.
    - Could anybody please tell me when / why to use XML to store data instead of a database table?
    e.g. storing inventory per warehouse in XML format?
    - What are the other cases or reasons for extracting database records into XML, or vice versa?
    - Is there any good PDF on this?
    Thank you for your help,
    xtanto

    It depends entirely on what you want to accomplish with the 'XML in the database'. There are basically 3 independent methods: as CLOB, as XMLType views, or as native XMLType 'columns'.
    Each method has advantages and disadvantages, especially in the performance vs purpose tradeoff.
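    As a minimal sketch of the native XMLType approach in Oracle (table and element names are invented for the inventory example):

    -- One XML document per warehouse instead of a wide relational table
    CREATE TABLE warehouse_inventory (
        warehouse_id NUMBER PRIMARY KEY,
        inventory    XMLTYPE
    );

    INSERT INTO warehouse_inventory
    VALUES (1, XMLTYPE('<inventory><item sku="A1" qty="10"/></inventory>'));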
    The Oracle Press book "Oracle Database 10g XML & SQL: Design, Build, & Manage XML Applications in Java, C, C++, & PL/SQL" is highly recommended for anyone interested in Oracle and XML. http://books.mcgraw-hill.com/getbook.php?isbn=0072229527&template=oraclepress

  • How do I start project on measure 1?

    I'd like to start my project on measure 1, not on measure 0.
    I see that I can drag to the left the white-outlined box on the very left,
    thereby having the project start a number of measures before 0, but
    I can't drag it to the right after 0.
    Hope I'm missing something obvious.
    Thanks.
    david

    Well, I was missing something obvious. 
    It often happens this way: post a question and suddenly
    the answer reveals itself.

  • Median function in SSAS is doing a sum instead

    Hi, I am new to SSAS and MDX. I'm trying to get the median; however, the function seems to return the sum of all records for that field/column group.
    For example, this is the fact table that i have:
    DEPT  QUARTER  GROUP  TIME
    1     2012Q1   A      1
    1     2012Q1   A      5
    1     2012Q1   A      20
    1     2012Q1   A      30
    1     2012Q1   B      10
    1     2012Q1   B      15
    1     2012Q1   B      20
    1     2012Q1   B      31
    1     2012Q2   A      8
    1     2012Q2   A      10
    The result should be:
    DEPT  QUARTER  GROUP  MEDIANTIME
    1     2012Q1   A      12.5
    1     2012Q1   B      17.5
    1     2012Q2   A      9
    instead it's showing:
    DEPT  QUARTER  GROUP  MEDIANTIME
    All   All      A      150
    All   All      B      150
    All   2012Q1   A      132
    All   2012Q1   B      132
    All   2012Q2   A      18
    etc.
    The MDX is:
    WITH MEMBER [Measures].[Median] AS
        Median([DIM GROUP].[GROUP].Members, [Measures].[TIME])
    SELECT {[Measures].[Median]} ON 0,
        NonEmpty([DIM DEPT].[DEPT].MEMBERS) * NonEmpty([Dim DATE].[Quarter].MEMBERS) * NonEmpty([DIM GROUP].[GROUP].Members) ON 1
    FROM [TIME CUBE]
    Can someone please advise how to fix this, so I get the median value instead of the sum for each group?
    2 more questions that arose:
    1. When I searched on the MDX syntax, some people recommend having [GROUP].[GROUP] repeated twice. Can someone please explain the difference between [DIM GROUP].[GROUP].[GROUP].Members and [DIM GROUP].[GROUP].Members?
    2. Why do the "All" members show up in the result, and can we exclude them?
    Thanks a lot in advance.
    --Elys

    Hi Elys,
    You should use the EXISTING function:
    WITH MEMBER [Measures].[Median] AS
        Median(EXISTING([DIM GROUP].[GROUP].Members), [Measures].[TIME])
    SELECT {[Measures].[Median]} ON 0,
        NonEmpty([DIM DEPT].[DEPT].MEMBERS) * NonEmpty([Dim DATE].[Quarter].MEMBERS) * NonEmpty([DIM GROUP].[GROUP].Members) ON 1
    FROM [TIME CUBE]
    And for the 'All' member: if the 'AttributeHierarchyEnabled' property of GROUP is set to true (the default), you will have an All member. Using [Dim Group].[Group].[Group] will exclude the 'All' member and present only the actual groups.
    Regards, David.

  • How to store measurement data in a single database row

    I have to store time-data series in a database and do some analysis using Matlab later. Therefore the question might be more of a database question than a DIAdem one. Nevertheless, I'm interested in whether anyone has best practices for this issue.
    I have a database which holds lifecycle records for certain components of the same nature and different variants. Depending on the variant, I have test setups which record certain data. There is a common set of component properties and a varying number of environmental measurements to be included. Also, the duration of data acquisition varies across the variants.
    Therefore tables appear to be a non-optimal solution for storing the data, because the needed number of columns is unknown. Additionally, I cannot create individual tables for each sample of a variant; this would produce too many tables.
    So there are different approaches I have thought of:
    Saving the TDM and TDX files as text or as BLOBs
    This makes it necessary to use intermediate files.
    Saving the data as XML text
    I don't know yet if I can process XML data in Matlab.
    Does anybody have advice on this problem?
    Regards
    Chris

    Chris
    Sorry for the lateness in replying to your post.
    I have done quite a bit of work using a database to store test results. (In my case this was an Oracle DB, using a system called ASAM-ODS.)
    My 2 Cents:
    My users needed three functions: 1) to search and find the tests, 2) to take the list of tests and process the data into a report/output summary, and 3) if the file size is large, to be able to import the data quickly into the analysis tool to speed up processing.
    1) Searching for test results. This all depends on what parameters are of value for searching. In practice this is a small list of values (usually under 20), but I have had great difficulty getting users to agree on what these parameters are; they tend to want to search for anything. The organization of the search parameters has a direct relationship to how you can search. The NI DataFinder does a nice job of searching for parameters, as long as the parameter data are stored in properties of channel groups or channels. It does not search or index values stored in channels.
    Another note: given these are searchable parameters, it helps greatly if they have controlled entry, so that the parameters are consistent over all tests and not dependent on free-form entry by each operator. Searching becomes impossible if the operators enter dates/names in wildly different formats.
    2) A similar issue exists if you put the values into a database. (I will use the database terms of table, column (parameter), and row (one instance of data, which would be one test record).)
    The SQL select statement can do fast finds if you store the searchable parameters in rows of a table, with one row for each test record. The files I worked with have more than 2000 parameters. Making a table that stores all of these, and is searchable on all of these, requires a very large number of columns. I did not like this approach, as it has substantial maintenance time, and when changes are made, things get ugly quickly.
    3) This is where having a file format that the analysis tool can quickly load is beneficial, especially if the data files are large. In DIAdem's case, it reads TDM/TDMS files very quickly into the Data Portal. It can also read MDF or HDF files, but these are hierarchical structures that require custom code to traverse the information and get it into the Data Portal for use in analysis/reporting. (It takes more time to read the data, but you have much more flexibility in the data structure than with the two-level TDM/TDMS format.)
    My personal preferences
    I would not want to put the test data into a table row: the columns would be fixed and the table would be very wide.
    I personally like to put the test data into a file, like TDMS, MDF, or HDF, and have the database hold a reference to the attachment. The values in the database are just the parameters used for test searching, either in DataFinder or in SQL commands in the user interface.
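    A minimal sketch of that layout, with invented column names, just to show the idea:

    -- Narrow table of searchable parameters; the bulk channel data stays in a file
    CREATE TABLE test_record (
        test_id    INTEGER PRIMARY KEY,
        component  VARCHAR(50),
        variant    VARCHAR(50),
        test_date  DATE,
        operator   VARCHAR(50),
        data_file  VARCHAR(260)  -- path or URI of the attached TDMS/MDF/HDF file
    );

    -- Searching stays a plain indexed query, no matter how many channels each file holds
    SELECT test_id, data_file
    FROM test_record
    WHERE variant = 'V2' AND test_date >= DATE '2013-01-01';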
    Hopefully these comments help your tasks some.
    Respectfully,
    Paul
    tdmsmith.com

  • How to use JNDI datasource instead of database connection settings JDev 10g

    Hi,
    In order to use a different database in each environment, we currently cannot use a JNDI datasource configuration; we have to change the database connection settings in JDeveloper for each environment separately, by editing the database adapter configuration. We need a solution for making the database connectivity uniform, using a JNDI datasource name for all environments and configuring the connectivity through the application server console rather than changing the database adapter configuration in JDeveloper.
    Please provide an update at the earliest. Your help is greatly appreciated. Thanks in advance.

    What are you not clear on?
    What you need to do is get your developers to conform to a database naming standard, as stated above. So if you have an Oracle database that is for eBusiness Suite, you get all developers to create a DB connection in JDev called, for example, ora_ebs.
    When the developer creates a DB adapter, this will create a JNDI name of eis/DB/ora_ebs. When the BPEL project is deployed, it looks for the JNDI name in the oc4j-ra.xml file to get its connection details. If they don't exist, then it uses the developer's connection details. The issue with this is that those generally always point to the development DB. It is best practice for the developers to remove the mcf settings in the DB adapter WSDL; this way, if the JNDI name has not been configured, the deployment will fail.
    So when you migrate from dev to test to prod, what you have is the JNDI name eis/DB/ora_ebs. In dev it points to the dev instance of EBS, in test to the test instance, and so on. This means that you don't need to adjust any code in the BPEL projects.
    cheers
    James

  • Universe on a SAP R/3 Database

    I know it is not possible to build a Universe directly on R/3, but this is possible with Crystal Reports.
    Let's imagine we have an R/3 system running on an Oracle database. Is it possible to create an ADBC connection to the SAP R/3 database of this system and build a Universe using this connection?
    I've already read that cluster tables maybe couldn't be read... but what happens if we build a view on top of such a cluster table?
    Thanks for your responses.

    Hi All,
    Looks like Cherry is on to something, but this is an additionally licensed product. It is released for 4.6 (DMS 2005 supports R/3 4.6C, 4.7/110, 4.7/200, and ERP04 (ECC 5.0)) - see the FAQ link on the page Cherry offered.
    I'm not sure what this costs - this is new information to me - thanks Cherry!
    ASRao, if you've only deleted the client-dependent data you stated - master and transaction - you could also get away with a client export of PRD, but I'm not aware of a way to delimit the time frame with the given SAP CC profiles. This would be faster than a system refresh from PRD to QA, but you need to be sure that all you want to do is refresh the client-dependent components.
    Good luck.
    Tim

  • Cannot connect to the database via query string (C#). SQL Server Management Studio is showing the full path of a location instead of the database name.

    I cannot connect to the database via C#. The database is showing the full path of the database file instead of the database name. See the pic: one database shows only the name 'emart' and the other shows the full path.
    Robby

    Hi,
    According to your post, I understand that the database name is showing the file path of the database rather than the database name in SQL Server Management Studio, and that you were not able to establish a connection to the database using C#.
    As Olaf said, was any error message thrown out when the connection failed? How did you create these databases?
    You can use the following T-SQL to attach the database and see if the issue persists.
    CREATE DATABASE databasename
        ON (FILENAME = 'filepath_Data.mdf'),
           (FILENAME = 'filepath_Log.ldf')
    FOR ATTACH;
    Thanks.
    Tracy Cai
    TechNet Community Support

  • Can I use PA period date for Project Functional currency exchange rate instead of EI Date?

    Hi,
    We are planning to enable project currency for our projects. However, while testing we have encountered one challenge. We want to use the PA date for all our currency conversions, but Oracle uses the EI date to convert project functional currency transactions.
    Does anybody know any setup or solution to convert all expenditure item transactions based upon the PA date instead of the EI date? Help / solutions in this regard are highly appreciated.
    Thanks & Regards,

    Hi,
    Do you mean you have already entered your transactions with the EI date and now you want to use the PA period ending date?
    Please correct me if I'm wrong: you want to update all the transactions from the EI date to the PA period ending date?
    If you update from the EI date to the PA period ending date, this will only be applicable to future transactions, not to the ones already incurred.
    Regards
    Ragahavender K

  • Steps to create a Universe from a Microsoft SQL or Oracle Database

    Hi All,
    I am looking for a document which describes the steps to create a Universe based on a Microsoft SQL or Oracle database.
    Thank you in advance
    Pat

    Hi,
    These links may be help for you:
    http://www.bidw.org/business-objects/universe-design/getting-started-with-universe-design/
    http://www.sap-businessobjects.info/universe-design.htm
    All the Best,
    Madhu...

  • Database Project Schema Compare Source Control to target database Not working

    I am trying to use the schema compare feature of a database project with TFS. I don't understand why it is trying to DELETE a stored procedure that exists in TFS.
    The Degenerate Dimension

    Hi MM,  
    Thanks for your post.
    If you open this solution from a local folder (not stored in TFS) and try to compare that same file, what's the result? We need to confirm first whether this issue relates to the VS Schema Compare tool or to TFS Source Control.
    http://blogs.msdn.com/b/ssdt/archive/2012/03/23/schema-compare-improvements.aspx
