Essbase RPD question - 2 columns

We built an Essbase BSO cube with historical data and linked it to OBI (nothing special done in the RPD).
Then we put a filter on a column: FILTER("FinHist"."FinHist - measure" USING ("Years#1"."Years - Default" = '2006'))
This works.
BUT when we add a second column with e.g. FILTER("FinHist"."FinHist - measure" USING ("Years#1"."Years - Default" = '2005')),
then we get no results at all, even though each FILTER alone gives a result.
(We are on OBIEE 11.1.1.6.7 and Essbase 11.1.2.1.)

In Oracle BI you'd typically fix this problem by creating a join between the two data sources using Process Key as the key. Can you do something similar with GRC?

Similar Messages

  • RPD questions

    Hi Friends,
    As I work with OBIEE and learn more about it, I keep running into questions. These might be small, but I need to get the concepts clear.
    1. When I pull just dimension columns in 'Answers', run the query, and look at the SQL in the log file, I see that the fact table is also joined. Does this always happen? Is this the way it is? In the log files for the first few queries I ran, I saw that only dimension tables were joined in the SQL. Not sure why. Could someone please clarify: is a fact table join a must when we select just dimensions in the report?
    2. Can we really not use a SUM function on a dimension column? Will it give incorrect results if we do? So the SUM function works correctly only for fact measures? Please clarify.

    Re question 1: At the RPD level, dimension tables are generally joined to a single fact table. So even when you pull columns from different dimension physical tables, the query has to satisfy the join conditions between those dimension tables and their common fact table; that is why the fact table appears in the SQL alongside the dimensions.
    Re question 2: You can use a SUM function on a dimension column as well; the only requirement is that the column's data type is numeric.

  • How to use different Essbase member combinations in columns in OBIEE

    Hello,
    I am trying to build a report in OBIEE which has an Essbase ASO cube in the background. We have a measure 'A' that exists in BegBalance and a measure 'B' that exists in months Jan through Dec. So in my criteria I set the dimensions I need along with "Periods - Default", and in the selection pane I set the condition to keep only the months and BegBalance. I also set the pivot so that the periods and the measures show up in columns.
    When I run the report, I get BegBalance and the 12 months in the columns for both measures.
    However, I only want measure A to show with BegBalance and measure B to show with the 12 months. How can I accomplish this?
    Any help will be much appreciated.
    Thanks.

    Hi,
    Yeah, that is there. I had not copied that code into my post...
    try {
                dataTable1Model.setObject(1, this.getSessionBean1().getTeamName());
                dataTable1Model.setObject(2, this.getSessionBean1().getFis_Year());
                dataTable1Model.execute();
            } catch (Exception x) {
                teamRowSet.close();
            }
    Thanks

  • Interview questions on column alias

    Hi
    Interviewer asked me this question.
    Why do we need an alias for a column, and does it improve query performance?
    I said:
    if we select columns with the same name from different tables, an alias is useful, but it is in no way related to query performance.
    Is that correct? Please explain the alias topic to me.

    925896 wrote:
    Is it correct? Please explain the alias topic to me.
    You are correct; an alias is not related to performance.
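    To make the disambiguation point concrete, here is a minimal JDBC sketch (the emp/dept tables and column names are hypothetical, purely for illustration). Both tables have a "name" column, so without aliases the code reading the result set could not tell them apart:

    import java.sql.*;

    public class AliasDemo {
        public static void main(String[] args) throws SQLException {
            // Aliases give each same-named column a unique label in the result set.
            String sql = "SELECT e.name AS emp_name, d.name AS dept_name "
                       + "FROM emp e JOIN dept d ON e.deptno = d.deptno";
            try (Connection con = DriverManager.getConnection(args[0]); // JDBC URL
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // Without aliases, rs.getString("name") would be ambiguous here.
                    System.out.println(rs.getString("emp_name") + " / " + rs.getString("dept_name"));
                }
            }
        }
    }

    The alias only renames the column in the result set; the optimizer executes the query exactly the same way, which is why aliases have no effect on performance.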

  • Essbase export question

    I have the strangest error.
    I am trying to export Level 0 data and import it into our new environment.
    Versions for target and source: 11.1.2.2
    My method of export:
    Right click [database name] --> Export
    Export to file [ name.txt]
    Export option: Level0 data blocks
    Export in column format
    Expected output:
    Header: BegBalance, Jan through Dec, Period
    Actual output:
    Header: "HSP_InputValue" "HSP_InputCurrency" "HSP_Rate_USD" "HSP_Rate_RMB" "HSP_Rate_CNY" "HSP_Rate_PLN" "HSP_Rate_EUR" "HSP_Rate_GBP" "HSP_Rate_MXP"
    Now, I have exported and imported other applications successfully, and all of them include the HSP_Rates dimension.
    Other info, if you need it:
    These are EPMA history applications from 11.1.1.3.
    I am creating new BSO applications in our OOD 11.1.2.2 and importing the data (yes, I am not migrating the Planning app itself, just migrating the data into this BSO).
    Please let me know what you think.
    I have even tried restructuring the outline, since HSP_Rates was the first dimension; I don't know why it was first, but either way I relocated it in Essbase. (I did not do this in Planning because the Planning app was deleted long before; only the Essbase app is left.)

    997328 wrote:
    Thanks for pointing out the "error" in the subject; I changed it.
    Yes, I have tried it.
    The thing is, in the output file there is no way to determine which member of the HSP_Rates dimension each value belongs to. That being said, there is no way the load rule will validate either. See below.
    HSP_InputValue     HSP_InputCurrency     HSP_Rate_USD     HSP_Rate_RMB     HSP_Rate_CNY     HSP_Rate_PLN     HSP_Rate_EUR     HSP_Rate_GBP     HSP_Rate_MXP     HSP_Rate_INR     HSP_Rate_THB     HSP_Rates
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx - CC10     4210C     100           
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4210M     1000          
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4250M     -100     
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4299M     -132          
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4501M     0          
    That's just the first few lines of the data. But how can it tell? You see what I mean?
    Actually, there is a way to know which member each value belongs to. The header line
    HSP_InputValue     HSP_InputCurrency     HSP_Rate_USD     HSP_Rate_RMB     HSP_Rate_CNY     HSP_Rate_PLN     HSP_Rate_EUR     HSP_Rate_GBP     HSP_Rate_MXP     HSP_Rate_INR     HSP_Rate_THB     HSP_Rates
    is a listing of the data values in order, starting with the first numeric column after the members. From the look of the sample data, these all appear to be HSP_InputValue. Scroll through the file, or import it into Excel, and check whether any of the other columns contain numeric values.
    To be on the safe side, create a dummy file that has the dimension names, like
    Period Years ???? Version Currency entity ????? ????? ??????
    followed by all of the HSP members, and use that to build your load rule. That way, if there is a row 20,000 rows down that has more than HSP_InputValue, you won't get an error when trying to load the file.

  • Essbase MDX question

    Hi there
    I am at a client where I have to do comparative store analysis. Basically, with MDX I need to do the following:
    When doing comparable store analysis on a daily level in my Time dimension, a store must have traded a year ago on the same comparable day, as per the company's trading calendar.
    When doing comparable store analysis on a weekly level, a store must have traded a year ago for the full same comparable previous year's week (i.e. all the days in the week), as per the company's trading calendar.
    If the criteria are not met, #missing must be returned; otherwise, the day or week value.
    On day level my MDX works fine:
    CASE
    WHEN
    IsLevel([Entity].CurrentMember, 0) AND
    IsLevel([Time].CurrentMember, 0) AND
    NOT IsEmpty([Periodicity].[LY])
    THEN
    ([LY], [Act], [Time].CurrentMember, [Entity].CurrentMember, [Account].CurrentMember)
    END
    The question is, on a week level, how do I check whether a store traded on every single day of the same week last year? Getting the week is no problem, but checking every child of that week is.
    Just another point on this: I need to do this on month, quarter, etc. levels as well, so hard-coding a check from day 1 to 7 will not suffice.
    Thanks
    Johan

    You are correct, currently Essbase does not support updates via MDX.

  • RPD Merging 'Decision' column help

    Hi,
    I’m new to RPD merging; I’m merging a very big RPD from two different repositories.
    While merging, we mark attributes as ‘Modified’ or ‘Current’ in the ‘Decision’ column. I’ve changed a few things in the modified RPD as part of the requirement; the rest should be ‘Current’. Now I’m in the position of marking thousands of attributes as ‘Current’ in the ‘Decision’ column :'( and it’s bugging me. Are there any shortcuts to make this simpler? Any guidance will be much appreciated.
    Thanks in advance.
    Viruu

    Unfortunately, this is a manual process. When you are merging and there are conflicts, you have to go through them one by one.

  • Essbase Newbie Question

    Hi all,
    I've downloaded/installed Essbase 9.3.1 (server, client, admin/shared/integration/provider services). Was able to get everything configured and running properly. Created a very simple BSO app/cube with 5 dimensions (Year/Accounts/Market/Product/Scenario). Was able to load some data via a rules file without any major hiccups. WHEW! (Roskey's book has been a big help.)
    My problem is that the data I loaded does not match the figures in the .txt file. Under my Scenario dimension, I have an Actual member that represents 'Revenue'. When previewing the data, or accessing it via Excel, the amounts are much smaller and appear to be unique counts instead of a sum of the revenue figures. I assume this has to be some type of setting/config in the outline, as the only calc I've run is the default.
    Would appreciate any advice someone can offer a newbie. Thanks.

    One of the biggest downsides of "add to existing values" is that if the load fails in the middle, or you need to reload data, you have to figure out how to get back to the state before the load (restoring a backup, loading data into a holding area that gets cleared first, etc.). That said, sometimes it is necessary to use "add to existing values", especially if you have an extract with a column that is not being used.

  • Db_stat output question (last column value?)

    Hi,
    Searched everywhere; cannot find this mentioned anywhere. What are the meanings of the columns in the db_stat -C output? For example:
    Locks grouped by lockers:
    Locker Mode Count Status --------------- Object ------------
    1 dd= 0 locks held 2 write locks 0 pid/thread 20910/1121282384
    1 READ 1 HELD 840055011:840055011 handle 2
    1 READ 1 HELD 840055011:840055011 handle 0
    4 dd= 0 locks held 0 write locks 0 pid/thread 20910/1121282384
    5 dd= 0 locks held 2 write locks 0 pid/thread 20910/1121282384
    5 READ 1 HELD 840055038:840055038 handle 2
    5 READ 1 HELD 840055038:840055038 handle 0
    I take this to indicate:
    Locker 1 has two READ locks on 840055011, which is an open handle
    Locker 4 had been declared, but is now empty, awaiting its call to duty
    Locker 5 has two READ locks on 840055038, again, an open handle
    The pid and thread information are self-describing I believe.
    My questions are: what would the difference be between the two different READ locks in, say, locker 1? One line ends with 2, the other with 0. Is this a problem in my code; could I possibly have opened the same handle twice?
    Additionally, I know the database handle is named 840055011, but it is listed as "840055011:840055011". Is this expected, or again, might I have an issue? I'm fairly sure this is simply naming and probably arbitrary.
    If there is a source for the db_stat output that I have missed, by all means, please let me know, I've tried my best to RTFM, but couldn't find many M's.
    FYI: transactional environment using the Java binding (not JE)
    Thank you

    Hi,
    brianmckenna wrote:
    Searched everywhere, can not find this mentioned anywhere. What are the meanings of the columns in the db_stat -C output...
    I think this should answer your questions:
    Deadlock debugging - http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/lock_deaddbg.html
    Thanks,
    Bogdan Coman
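    If you want to pull lock statistics from inside the application rather than parsing db_stat output, the BDB Java base API also exposes the locking subsystem's aggregate counters (the db_stat -c style summary; the per-locker listing above comes only from db_stat). A minimal sketch, assuming a transactional environment already exists at the path passed in (the class name here is illustrative):

    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.db.LockStats;
    import com.sleepycat.db.StatsConfig;

    public class LockStatDump {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setInitializeLocking(true); // join the existing locking subsystem
            Environment env = new Environment(new File(args[0]), cfg);
            // Snapshot of the lock subsystem's counters.
            LockStats stats = env.getLockStats(new StatsConfig());
            System.out.println(stats); // toString() lists the individual counters
            env.close();
        }
    }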

  • Questions on the Column Formatting example

    Hi Tech Gurus,
    I am trying to implement the "Column Formatting" example from the Oracle XML Publisher User Guide (Part No: B13817-03) PDF. It is working fine and I was able to get similar output.
    When all the columns are displayed, the table in the output occupies a certain width on the page. When I hide one column out of three, the width of the table shrinks; the table in the second output is not as big as in the first.
    My requirement is that the table size should not vary according to the number of columns. The table width should stay the same, and the remaining two columns should share the total table width equally, or as per the given settings.
    How can I achieve this?
    Need urgent help on this.
    Regards,
    Chandra.


  • Essbase Storage Questions

    Hi,
    I would like to know how Essbase reads and writes data to disk during a calculation:
    whether it uses synchronous or asynchronous IO,
    and also the block size of the data written to disk.
    I know that in BSO we can calculate the Essbase block size; however,
    as this question is coming from storage/infrastructure people,
    what they probably mean is: for each write to disk,
    what is the size of the data/block written? How do we measure that?
    This question came up because we would like to tune Essbase on Solaris with the
    ZFS file system. Any feedback is appreciated.
    Thanks,
    Lian

    Hi Lian,
    You can monitor these metrics in real time using Accelatis.
    The tool can provide load generation or profile calc scripts, and as it does,
    you can watch the effect your activity has on the values of these metrics,
    plotted on a graph that resembles an EKG.
    This information is stored for playback later, and for reporting and analysis.
    In addition to standard metrics, Accelatis can also track disk usage,
    cache utilization and performance, and many other elements that
    affect performance.
    Regards,
    Robb Salzmann

  • How to load data into Planning/Essbase from multiple data columns

    Dear All,
    I have an interface file which contains multiple data columns, as follows:
    Year,Department,Account,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
    FY10,Department1,Account1,1,2,3,4,5,6,7,8,9,10,11,12
    FY10,Department2,Account1,1,2,3,4,5,6,7,8,9,10,11,12
    FY10,Department3,Account1,1,2,3,4,5,6,7,8,9,10,11,12
    I created a data load rule to load this interface file.
    I want to use ODI to upload this interface. I tried specifying the rule name in ODI and running the interface,
    but it came out with the following error:
    2010-02-22 11:40:25,609 DEBUG [DwgCmdExecutionThread]: Error occured in sending record chunk...Cannot end dataload. Analytic Server Error(1003014): Unknown Member [FY09,032003,910201,99,0,0,0,0,0,0,0,0,0,0,0,0] in Data Load, [1] Records Completed
    Any idea how to fix the columns? I am sure the member names are correct, as I can load the data through the data load rule directly.
    Thanks

    Dear John,
    I updated the data load rule delimiter to "," and got a different error message, as follows:
    'A910201.99','HSP_InputValue','HKDepart','No_WT','032003','NO_Lease','FY09','Actual','Final','Local','0','0','0','0','0','0','0','0','0','0','0','0','Cannot end dataload. Analytic Server Error(1003014): Unknown Member [0] in Data Load, [1] Records Completed'
    It seems that the data load rule can recognize the members, but not the figures for Jan to Dec.
    Thanks for your help.

  • Design Question: huge column is slowing app, what's the best way to speed it up?

    Hello All,
    I have a DPL application that is getting a bit heavy due to one field and needs to be tweaked; I wanted to ask people's advice on the best way to speed it up. I have an entity that tracks versions of statistical data.
    @Entity
    public class ResourceVersion {
         @PrimaryKey(sequence = "resource_sequence")
         @XmlAttribute
         private Long id;           // internal primary key
         private byte[] data;       // the field slowing everything down
         @SecondaryKey(relate = Relationship.MANY_TO_ONE)
         private String resourceId; // externally visible key
         private String name;       // field that's actually needed
         // 2 more fields that aren't relevant to the example
         // some getters and setters
    }
    The byte[] data field can be several megabytes and is making the application slow, as my DAO layer processes megabytes of data and then discards them for the web layer.
    To abstract the problem:
    * I have a CRUD application with a single entity.
    * My page that lists the entities is getting slow.
    * I have a bean with 6 fields.
    * I need 3 of the fields for the page that's listing the entities. The other 3 are only used by a tiny subset of the application.
    I need to stop retrieving the large data field for pages that don't need it. I need a subset of my bean.
    The way I see it, I have 2 options:
    1. Figure out a way using the DPL to retrieve a subset of the fields. Is it even possible?
    2. Break the bean into 2, one containing the fields needed for listing and one bean that just stores the data.
    My assumption is that I have to break the byte[] field into its own entity. I thought I'd just quickly post to make sure there's not an easier option.
    Thanks,
    Steven

    Hi Steven,
    You're correct that BDB does not support partial record read/write, and therefore the best solution is to break the large blob out into a separate entity (or even store it in a file).
    See:
    http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#3
    Partial data is not supported at all for the DPL, but as noted in the FAQ the limited support for it in the base API provides very little performance advantage, if any at all.
    --mark
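    For what it's worth, here is a minimal sketch of the split Mark describes, using the same DPL annotations as the entity above (ResourceVersionData and versionId are illustrative names I've introduced, and each public class would live in its own source file):

    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;
    import com.sleepycat.persist.model.Relationship;
    import com.sleepycat.persist.model.SecondaryKey;

    // Lightweight entity: only what the listing page needs, no blob.
    @Entity
    public class ResourceVersion {
        @PrimaryKey(sequence = "resource_sequence")
        private Long id;                 // internal primary key
        @SecondaryKey(relate = Relationship.MANY_TO_ONE)
        private String resourceId;       // externally visible key
        private String name;             // field the listing actually needs
        // ... the other small fields ...
        private ResourceVersion() {}     // DPL requires a default constructor
    }

    // Heavy entity: holds only the blob, keyed by the owning version's id,
    // so it is read only on the detail pages that need it.
    @Entity
    public class ResourceVersionData {
        @PrimaryKey
        private Long versionId;          // same value as ResourceVersion.id
        private byte[] data;             // the multi-megabyte payload
        private ResourceVersionData() {}
    }

    The listing page then reads only ResourceVersion through its primary index (store.getPrimaryIndex(Long.class, ResourceVersion.class)), and the detail page does a second get() on ResourceVersionData with the same id, so the blob never flows through the DAO layer for list queries.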

  • Essbase Report question...

    Ok, I have something that I think should be simple to do, but I am driving myself nuts not being able to figure it out. How do you write a report script so that the first line in your output file has the dimension names as the column headers? I can get the report to dump the data in the format that I want, but I can't get it to label the dimensions defined in my <ROW; it only labels those defined in the <COL. If I could get it to label all columns, then I could use the "Record containing data load field names" option in the load rule to load the exported data back, but since it only labels some columns, I cannot.
    Basically, I'm getting:
    WK1 WK2 WK3 ...
    ABC 2002 XYZ "Inv Qty" 10 20 30 ...
    I want:
    Product Year Customer Measure WK1 WK2 WK3 ...
    ABC 2002 XYZ "Inv Qty" 10 20 30 ...
    What is the syntax to get it to label the dimensions that make up my row? Thanks in advance for the help!

    What is the syntax of your report script?

  • Essbase Studio question

    Hi, I am using Essbase Studio 11.1.2.
    I am finding that when I make changes to a hierarchy, I can refresh the cube schema and my hierarchy changes are brought into the cube schema.
    When I refresh the Essbase model, however, the hierarchy changes are not brought into the model, and it is necessary to regenerate it.
    Is this a known limitation of Studio, or am I doing something wrong with the tool?
    The change I made to the hierarchy was to add attributes.

    From the new features guide found at
    http://download.oracle.com/docs/cd/E17236_01/epm.1112/est_new_features/est_new_features.html
    recreating or rebuilding an Essbase model is not required when you perform these operations:
    Change the binding, filter, sort order, or alias set bindings of a dimension element
    Change the binding, range, or alias set bindings of a derived text measure
    Change the value binding or ID binding of a text list
    Change an overridden data load binding in a cube schema
    Note that recreating or rebuilding an Essbase model is required when you perform the following operations:
    Reorder, add, or remove members in a hierarchy or measure hierarchy
    Add or remove hierarchies from a cube schema
    Add or remove any loose measures in a cube schema
    Change the measure hierarchy in a cube schema
    Override the default data load bindings in a cube schema
    So it sounds like it is working as designed: adding attributes changes the hierarchy, which is one of the operations that requires rebuilding the Essbase model.
