Aggregation in OBIEE

I have a scenario where the Time dimension has levels 'Year' ---> 'Quarter' ---> 'Month' and the fact has a measure 'Begin Balance'. This is the balance at the start of the month. When we look at this measure at the 'Quarter' level, it should show the values corresponding to the first month in the quarter. Can you please let me know which aggregation function needs to be used in this case? Will 'First' work fine?

Create a column (QuarterlyBeginBalance) in the RPD which gives you only the first month's balance as that particular quarter's balance.
As mentioned, you have a hierarchy down to month, so I presume your fact table grain is at the month level.
Thus, the BeginBalance column for every row is assumed to be the beginning balance for that month.
Now, within the RPD, create a column: CASE WHEN Quarter IN (1,2,3,4) AND Month IN (1,4,7,10) THEN Amount ELSE 0 END -- same as Kishore's formula, but in the RPD.
Also, is the "First" function your native database function? I don't see it in OBIEE 10.1.3.4.1.
-bifacts
http://www.obinotes.com
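
To make the suggestion above concrete, here is a minimal sketch of such a logical column, assuming a month-grain fact and a numeric month-number column in the Time dimension (the table and column names below are placeholders, not objects from the original RPD):

    -- logical column "Quarterly Begin Balance"
    CASE WHEN "Time"."Month Number" IN (1, 4, 7, 10)
         THEN "Fact"."Begin Balance"
         ELSE 0
    END
    -- set the default aggregation rule of this column to SUM; at the Quarter level
    -- only the first month of each quarter contributes, so the quarter shows the
    -- opening balance of its first month

Whether a FIRST aggregation rule is available depends on the OBIEE version and on using "based on dimensions" aggregation against the Time dimension; the CASE approach above avoids relying on it.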

Similar Messages

  • Based on Dimension Aggregation method - OBIEE 11G

    Hi,
    I have a fact table with one measure value and 3 dimensions (Time, Issue, Status). The fact table is loaded with monthly data for different issues, each with a status and a corresponding value.
    I want my report to always show the issues that are the latest for that period, and then the status of the same.
    Let's say:
    Fact table has data like this:
    31/Jan/2011 - Issue1 - Open - 10
    28/Feb/2011 - Issue1 - WIP - 20
    31/Mar/2011 - Issue1 - WIP - 30
    30/Apr/2011 - Issue1 -WIP - 40
    31/May/2011 - Issue1 -Closed - 50
    31/May/2011 - Issue2 -Open - 60
    31/May/2011 - Issue3 - Open - 20
    Now I want to see the report by status and value and it should show me the latest of that time period.
    For Quarter1 :
    WIP - 30
    For Quarter2:
    Open - 80
    Closed - 50
    If I take Year then it should show me :
    2011 - Open - 80
    2011 - Closed - 50
    I tried using the "Based on dimensions" aggregation option in the BMM layer and put Others - Sum() and LAST against the time dimension.
    But in that case it is giving me, for each status, the value that is latest in the time period. So it is doing the sum by the other dimensions first and then applying the LAST rule.
    In that case the result for Quarter 1 will be:
    Open - 10
    WIP - 30
    If I could change the order of the aggregation rules, i.e. first apply LAST and then do the sum, then I think the desired result would come. But I am not able to change the order.
    Is there any other way of doing this?
    Any help is highly appreciated.
    Thanks
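
    For what it's worth, the semantics being asked for (pick the latest record per issue within the period first, then sum by status) can be written directly in database SQL. This is only an illustration with made-up table and column names, not a BMM aggregation setting:

        -- latest row per issue inside the chosen period, then summed by status
        SELECT status,
               SUM(measure_value) AS total_value
        FROM (
            SELECT issue_id,
                   status,
                   measure_value,
                   ROW_NUMBER() OVER (PARTITION BY issue_id
                                      ORDER BY snapshot_date DESC) AS rn
            FROM   fact_issue_status
            WHERE  snapshot_date BETWEEN :period_start AND :period_end
        )
        WHERE  rn = 1
        GROUP  BY status;

    For Quarter 1 in the example data this keeps only the 31/Mar/2011 row for Issue1 and returns WIP - 30.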

    Hi,
    There will be multiple records for an entire month, as there are many issues. For a particular issue there will be one record with one particular status for a particular month.
    The join is based on the date column of the fact table and the dimension table.
    Regards,
    SS

  • MIN aggregation in OBIEE

    CONTRACT_NUM   INVOICE DATE   INVOICE AMOUNT
    001            3/6/12         504.75
    001            4/22/13        504.75
    022            6/19/13        571.98
    022            7/1/13         571.98
    013            8/5/13         571.98
    I am having an issue with a report similar to the tabular one above. I am trying to get the report to look like the one below:
    CONTRACT_NUM   INVOICE DATE   INVOICE AMOUNT
    001            3/6/12         504.75
    022            6/19/13        571.98
    013            8/5/13         571.98
    We use OBIEE 10g and in the Invoice date (fx) I tried putting in MIN( INVOICE DATE BY CONTRACT_NUM)  and it gives the correct Invoice dates, but it doubles the invoice amount for Contract #s 001 and 022. Contract 001 was cancelled and was not reissued to the vendor. Contract 022 was cancelled on 7/1/13 and reissued under new terms to the vendor on 8/5/13. To cancel a contract we write a new invoice that counters the original invoice hence the two invoice dates for contracts 001 and 022.
    The users are requiring a report that ONLY shows the original invoice date and amount.
    Any help would be greatly appreciated,
    Thank you.
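
    Not an OBIEE-native fix, but to show the target logic: in plain SQL the desired report keeps only the earliest invoice per contract, which can be expressed with a window function (the table and column names below are assumptions based on the description above):

        -- keep only the first (original) invoice per contract
        SELECT contract_num, invoice_date, invoice_amount
        FROM (
            SELECT contract_num,
                   invoice_date,
                   invoice_amount,
                   ROW_NUMBER() OVER (PARTITION BY contract_num
                                      ORDER BY invoice_date) AS rn
            FROM   invoice_fact
        )
        WHERE rn = 1;

    In OBIEE this kind of logic usually has to be pushed below the aggregation, for example via a filtered logical table source or an opaque view/select in the physical layer, so the cancelling counter-invoice never reaches the amount measure.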

    Can you send the BI-generated query for my suggestion?
    It would be nice to follow up on the same post instead of opening a new one... otherwise it looks like YOU are ignoring everyone who ever responded to that post, and that may result in others ignoring YOU and your post getting no suggestions.
    Have fun

  • Functions & Aggregations in OBIEE

    Report - We have detail level and summary level
    At the detail level, we use functions to populate data, e.g. Function_name(Product_id, Date, Queue) with those columns as input parameters; this function runs for every product_id.
    At the summary level:
    1) How do we use the same function at the summary level, which aggregates on the Region or Director column and not at the product level?
    2) On the same column where my function is implemented at the detail level, how can I aggregate without using any function?
    Please provide inputs or workarounds.
    Thanks

    OBIEE doesn't have the CDF functionality just yet, but it would be a welcome change for the future. EVALUATE and connections to other apps seem to be the only way as of right now. If you do find a way to expand the list of functions, though, I'd be all ears.
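
    In case it helps, a rough sketch of what the EVALUATE route can look like in an Answers column formula. The function name and columns are placeholders, the database function must exist on the source, and the exact syntax and EVALUATE support settings depend on your OBIEE version, so treat this as a pointer rather than a recipe:

        -- detail level: push the database function down per row
        EVALUATE('FUNCTION_NAME(%1, %2, %3)', "Dim"."Product_Id", "Dim"."Date", "Dim"."Queue")

        -- summary level: an aggregate database function can similarly be pushed down
        -- with EVALUATE_AGGR, with Region or Director in the criteria driving the grouping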

  • Dynamically Aggregations Selection during runtime in OBIEE 11g

    Can anyone help me with how to choose aggregations dynamically at runtime in OBIEE 11g?
    Say I have SUM and AVG as two aggregations, and depending on which one is selected, my column total should vary.
    I achieved this using a variable prompt, but I want it as a drop-down, and just by toggling between the two my report should change, without using the Go / Apply button.
    I tried it and came to the conclusion that since these are static text values with no database values behind them, they can't be shown as a drop-down.
    Thanks,
    Swathi
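
    For reference, the variable-prompt approach mentioned above can be written as a single column formula like the sketch below, assuming a presentation variable named p_agg with a default of SUM (the variable and column names are made up). It still needs the prompt to refresh the analysis, which is why a view selector, as suggested below, is often the simpler route:

        CASE WHEN '@{p_agg}{SUM}' = 'SUM'
             THEN SUM("Fact"."Measure")
             ELSE AVG("Fact"."Measure")
        END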

    Should be simple... simply create two pivot views for the same criteria in a single report and apply SUM and AVG in each. Now use a View Selector to toggle between the two pivots to get the SUM and AVG reports.

  • Sum Aggregation Error in Physical & BMM Layer in OBIEE 11g with Essbase 11

    Hi everyone,
    I'm using OBIEE 11g with Essbase 11 as the data source, specifically the Sample Basic database. If I use the hierarchy for the measures (so I don't flatten the measures) and change the aggregation in both the physical and BMM layers from Aggregate_External to Sum, I can't create a report at all from Answers.
    Does anyone encounter the same thing? Any ideas/solution about this? Please help.
    Thanks a lot!

    Hi Deepak,
    When I picked the "Basic - measure" alone, I got this error.
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 96002] Essbase Error: Unknown Member Basic - measure used in query (HY000)
    SQL Issued: SELECT 0 s_0, "Sample Basic"."Basic"."Basic - measure" s_1 FROM "Sample Basic".
    When I picked the "Gen1,Measures" alone from the measure dimension, I got this error:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 46008] Internal error: File server\Query\Optimizer\ServiceInterfaceMgr\SIMDB\Src\SQOIMDXGeneratorGeneric.cpp, line 2610. (HY000)
    SQL Issued: SELECT 0 s_0, "Sample Basic"."Measures"."Gen1,Measures" s_1, SORTKEY("Sample Basic"."Measures"."Gen1,Measures") s_2 FROM "Sample Basic"
    But when I queried the dimensions one by one (only single dimension each), no error was shown.
    This only happens if I use Sum in the physical and BMM layer. If I use External_Aggregation, these errors do not happen. And if I flatten the measures, these errors also do not happen.

  • OBIEE Error[14041] Nested Aggregation measure are currently not supported

    Hi,
    Please provide a workaround for OBIEE Error [14041]: Error in measure definition. Nested aggregate measures are currently not supported.
    I have two logical columns - Current YTD Invoice Quantity and Prior YTD Invoice Price. I want to create a column which is -
    (Current YTD Invoice Quantity * Prior YTD Invoice Price)/ sum(Current YTD Invoice Quantity * Prior YTD Invoice Price)
    The sum in the denominator should be the sum over all rows returned by the report, so level-based measures cannot be used, as there are multiple reports and the dimensions used may vary.
    Columns Current YTD Invoice Quantity and Prior YTD Invoice Price are already aggregated to Sum.
    If I try to do the sum on the physical column while creating the logical column, the query runs for 6-7 hours without returning any result. The two columns belong to different alias tables in the physical layer.
    Please let me know if you guys know any workaround for this issue. Also let me know whether this type of nested aggregation is supported in 11g or not.
    Thanks.
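
    As a point of reference only, the calculation described above is a share-of-total, which in plain database SQL is a single window function rather than a nested aggregate (the table and column names below are made up):

        SELECT product,
               (cur_ytd_qty * prior_ytd_price)
                 / SUM(cur_ytd_qty * prior_ytd_price) OVER () AS weighted_share
        FROM   ytd_summary;

    In Answers a similar effect is often obtained by doing the division in the analysis itself (for example a pivot-table "% of column") instead of nesting SUM over already-aggregated logical columns in the RPD.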

    Hi Adil,
    Yes, I did create a hierarchy for the time dimension.
    No, I was not able to specify the aggregation rule, since the source of the logical column is another logical column.
    But the time hierarchy works for rows that are not based upon the SUM aggregate at the Answers level.
    Say I have 3 columns: Income,Expense, Bottom Line (formula : Income- Expense)
    Income is created based upon a case statement in the Logical Column and
    Expense is also created based upon a case statement in the Logical Column.
    Income 1000 Rupees
    Expense -300 Rupees
    Bottom Line 700 Rupees (where the Bottom line is a row based upon the Answers aggregate formula
    which is SUM(Income+Expense))
    When I add a column year, I get the following output
    Income 2009 500 Rs
    Income 2008 500 Rs
    Expense 2009 -150 Rs
    Expense 2008 -150 Rs
    Bottom Line 2009 700 Rs
    Bottom Line 2008 700 Rs
    The Bottom Line doesn't spread across the Year
    Hope this helps you to understand what my problem is.
    Thank you!

  • OBIEE BI Answers: Wrong Aggregation Measures on top level of hierarchy

    Hi to all,
    I have the following problem. I hope to be clear in my English because it's a bit complicated to explain.
    I have the following fact table:
    Drug Id   Ordered Quantity
    1         9
    2         4
    1         3
    2         2
    and the following drug table:
    Drug Brand Id   Brand Description   Drug Active Ingredient Id   Drug Active Ingredient Description
    1               Aulin               1                           Nimesulide
    2               Asprina             2                           Acetilsalicilico
    In AWM I've defined a Drug dimension based on the following hierarchy: Drug Active Ingredient (parent) - Drug Brand Description (leaf), mapped as:
    Drug Active Ingredient = Drug Active Ingredient Id of my drug table (LONG DESCRIPTION attribute = Drug Active Ingredient Description)
    Drug Brand Description = Drug Brand Id of my drug table (LONG DESCRIPTION attribute = Drug Brand Description)
    In my cube I've mapped the leaf level Drug Brand Description = Drug Id of my fact table. In AWM the Drug dimension is mapped with the Sum aggregation operator.
    If I select in Answers Drug Active Ingredient (the parent of my hierarchy) and Ordered Quantity, I see the following result:
    Drug Active Ingredient Description   Ordered Quantity
    Acetilsalicilico                     24
    Nimesulide                           12
    instead of the correct values:
    Drug Active Ingredient Description   Ordered Quantity
    Acetilsalicilico                     12
    Nimesulide                           6
    EXACTLY double! But if I drill down on Drug Active Ingredient Description Acetilsalicilico I correctly see:
    Drug Active Ingredient Description   Drug Brand Description   Ordered Quantity
    Acetilsalicilico                     Aspirina                 12
    Total                                                         12
    The wrong aggregation is only at the top level of the hierarchy; aggregation at the lower level of the hierarchy is correct. Maybe Answers also sums the Total row? Why?
    I'm frustrated. I beg your help, please!
    Giancarlo

    Hi,
    in NQSConfig.ini I can't find the Cache section. I am posting the whole file. Tell me what I must change. I know your patience is almost at its limit! But I'm a new user of OBIEE.
    # NQSConfig.INI
    # Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
    # INI file parser rules are:
    # If values are in literals, digits or _, they can be
    # given as such. If values contain characters other than
    # literals, digits or _, values must be given in quotes.
    # Repository Section
    # Repositories are defined as logical repository name - file name
    # pairs. ODBC drivers use logical repository name defined in this
    # section.
    # All repositories must reside in OracleBI\server\Repository
    # directory, where OracleBI is the directory in which the Oracle BI
    # Server software is installed.
    [ REPOSITORY ]
    #Star     =     samplesales.rpd, DEFAULT;
    Star = Step3.rpd, DEFAULT;
    # Query Result Cache Section
    [ CACHE ]
    ENABLE     =     YES;
    // A comma separated list of <directory maxSize> pair(s)
    // e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
    DATA_STORAGE_PATHS     =     "C:\OracleBIData\cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;
    // Cluster-aware cache
    // GLOBAL_CACHE_STORAGE_PATH = "<directory name>" SIZE;
    // MAX_GLOBAL_CACHE_ENTRIES = 1000;
    // CACHE_POLL_SECONDS = 300;
    // CLUSTER_AWARE_CACHE_LOGGING = NO;
    # General Section
    # Contains general server default parameters, including localization
    # and internationalization, temporary space and memory allocation,
    # and other default parameters used to determine how data is returned
    # from the server to a client.
    [ GENERAL ]
    // Localization/Internationalization parameters.
    LOCALE     =     "Italian";
    SORT_ORDER_LOCALE     =     "Italian";
    SORT_TYPE = "binary";
    // Case sensitivity should be set to match the remote
    // target database.
    CASE_SENSITIVE_CHARACTER_COMPARISON = OFF ;
    // SQLServer65 sorts nulls first, whereas Oracle sorts
    // nulls last. This ini file property should conform to
    // that of the remote target database, if there is a
    // single remote database. Otherwise, choose the order
    // that matches the predominant database (i.e. on the
    // basis of data volume, frequency of access, sort
    // performance, network bandwidth).
    NULL_VALUES_SORT_FIRST = OFF;
    DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss" ;
    DATE_DISPLAY_FORMAT = "yyyy/mm/dd" ;
    TIME_DISPLAY_FORMAT = "hh:mi:ss" ;
    // Temporary space, memory, and resource allocation
    // parameters.
    // You may use KB, MB for memory size.
    WORK_DIRECTORY_PATHS     =     "C:\OracleBIData\tmp";
    SORT_MEMORY_SIZE = 4 MB ;
    SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
    VIRTUAL_TABLE_PAGE_SIZE = 128 KB ;
    // Analytics Server will return all month and day names as three
    // letter abbreviations (e.g., "Jan", "Feb", "Sat", "Sun").
    // To use complete names, set the following values to YES.
    USE_LONG_MONTH_NAMES = NO;
    USE_LONG_DAY_NAMES = NO;
    UPPERCASE_USERNAME_FOR_INITBLOCK = NO ; // default is no
    // Aggregate Persistence defaults
    // The prefix must be between 1 and 8 characters long
    // and should not have any special characters ('_' is allowed).
    AGGREGATE_PREFIX = "SA_" ;
    # Security Section
    # Legal value for DEFAULT_PRIVILEGES are:
    # NONE READ
    [ SECURITY ]
    DEFAULT_PRIVILEGES = READ;
    PROJECT_INACCESSIBLE_COLUMN_AS_NULL     =     NO;
    MINIMUM_PASSWORD_LENGTH     =     0;
    #IGNORE_LDAP_PWD_EXPIRY_WARNING = NO; // default is no.
    #SSL=NO;
    #SSL_CERTIFICATE_FILE="servercert.pem";
    #SSL_PRIVATE_KEY_FILE="serverkey.pem";
    #SSL_PK_PASSPHRASE_FILE="serverpwd.txt";
    #SSL_PK_PASSPHRASE_PROGRAM="sitepwd.exe";
    #SSL_VERIFY_PEER=NO;
    #SSL_CA_CERTIFICATE_DIR="CACertDIR";
    #SSL_CA_CERTIFICATE_FILE="CACertFile";
    #SSL_TRUSTED_PEER_DNS="";
    #SSL_CERT_VERIFICATION_DEPTH=9;
    #SSL_CIPHER_LIST="";
    # There are 3 types of authentication. The default is NQS
    # You can select only one of them
    #----- 1 -----
    #AUTHENTICATION_TYPE = NQS; // optional and default
    #----- 2 -----
    #AUTHENTICATION_TYPE = DATABASE;
    # [ DATABASE ]
    # DATABASE = "some_data_base";
    #----- 3 -----
    #AUTHENTICATION_TYPE = BYPASS_NQS;
    # Server Section
    [ SERVER ]
    SERVER_NAME = Oracle_BI_Server ;
    READ_ONLY_MODE = NO;     // default is "NO". That is, repositories can be edited online.
    MAX_SESSION_LIMIT = 2000 ;
    MAX_REQUEST_PER_SESSION_LIMIT = 500 ;
    SERVER_THREAD_RANGE = 40-100;
    SERVER_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    DB_GATEWAY_THREAD_RANGE = 40-200;
    DB_GATEWAY_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    MAX_EXPANDED_SUBQUERY_PREDICATES = 8192; // default is 8192
    MAX_QUERY_PLAN_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_INFO_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_QUERY_CACHE_ENTRIES = 1024; // default is 1024
    INIT_BLOCK_CACHE_ENTRIES = 20; // default is 20
    CLIENT_MGMT_THREADS_MAX = 5; // default is 5
    # The port number specified with RPC_SERVICE_OR_PORT will NOT be considered if
    # a port number is specified in SERVER_HOSTNAME_OR_IP_ADDRESSES.
    RPC_SERVICE_OR_PORT = 9703; // default is 9703
    # If port is not specified with a host name or IP in the following option, the port
    # number specified at RPC_SERVICE_OR_PORT will be considered.
    # When port number is specified, it will override the one specified with
    # RPC_SERVICE_OR_PORT.
    SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS"; # Example: "hostname" or "hostname":port
    # or "IP1","IP2":port or
    # "hostname":port,"IP":port2.
    # Note: When this option is active,
    # CLUSTER_PARTICIPANT should be set to NO.
    ENABLE_DB_HINTS = YES; // default is yes
    PREVENT_DIVIDE_BY_ZERO = YES;
    CLUSTER_PARTICIPANT = NO; # If this is set to "YES", comment out
    # SERVER_HOSTNAME_OR_IP_ADDRESSES. No specific NIC support
    # for the cluster participant yet.
    // Following required if CLUSTER_PARTICIPANT = YES
    #REPOSITORY_PUBLISHING_DIRECTORY = "<dirname>";
    #REQUIRE_PUBLISHING_DIRECTORY = YES; // Don't join cluster if directory not accessible
    DISCONNECTED = NO;
    AUTOMATIC_RESTART = YES;
    # Dynamic Library Section
    # The dynamic libraries specified in this section
    # are categorized by the CLI they support.
    [ DB_DYNAMIC_LIBRARY ]
    ODBC200 = nqsdbgatewayodbc;
    ODBC350 = nqsdbgatewayodbc35;
    OCI7 = nqsdbgatewayoci7;
    OCI8 = nqsdbgatewayoci8;
    OCI8i = nqsdbgatewayoci8i;
    OCI10g = nqsdbgatewayoci10g;
    DB2CLI = nqsdbgatewaydb2cli;
    DB2CLI35 = nqsdbgatewaydb2cli35;
    NQSXML = nqsdbgatewayxml;
    XMLA = nqsdbgatewayxmla;
    ESSBASE = nqsdbgatewayessbasecapi;
    # User Log Section
    # The user log NQQuery.log is kept in the server\log directory. It logs
    # activity about queries when enabled for a user. Entries can be
    # viewed using a text editor or the nQLogViewer executable.
    [ USER_LOG ]
    USER_LOG_FILE_SIZE = 10 MB; // default size
    CODE_PAGE = "UTF8"; // ANSI, UTF8, 1252, etc.
    # Usage Tracking Section
    # Collect usage statistics on each logical query submitted to the
    # server.
    [ USAGE_TRACKING ]
    ENABLE = NO;
    //==============================================================================
    // Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
    STORAGE_DIRECTORY = "<full directory path>";
    CHECKPOINT_INTERVAL_MINUTES = 5;
    FILE_ROLLOVER_INTERVAL_MINUTES = 30;
    CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
    //==============================================================================
    DIRECT_INSERT = YES;
    //==============================================================================
    // Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
    PHYSICAL_TABLE_NAME = "<Database>"."<Catalog>"."<Schema>"."<Table>" ; // Or "<Database>"."<Schema>"."<Table>" ;
    CONNECTION_POOL = "<Database>"."<Connection Pool>" ;
    BUFFER_SIZE = 10 MB ;
    BUFFER_TIME_LIMIT_SECONDS = 5 ;
    NUM_INSERT_THREADS = 5 ;
    MAX_INSERTS_PER_TRANSACTION = 1 ;
    //==============================================================================
    # Query Optimization Flags
    [ OPTIMIZATION_FLAGS ]
    STRONG_DATETIME_TYPE_CHECKING = ON ;
    # CubeViews Section
    [ CUBE_VIEWS ]
    DISTINCT_COUNT_SUPPORTED = NO ;
    STATISTICAL_FUNCTIONS_SUPPORTED = NO ;
    USE_SCHEMA_NAME = YES ;
    USE_SCHEMA_NAME_FROM_RPD = YES ;
    DEFAULT_SCHEMA_NAME = "ORACLE";
    CUBE_VIEWS_SCHEMA_NAME = "ORACLE";
    LOG_FAILURES = YES ;
    LOG_SUCCESS = NO ;
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\CubeViews.Log";
    # MDX Member Name Cache Section
    # Cache subsystem for mapping between unique name and caption of
    # members for all SAP/BW cubes in the repository.
    [ MDX_MEMBER_CACHE ]
    // The entry to indicate if the feature is enabled or not, by default it is NO since this only applies to SAP/BW cubes
    ENABLE = NO ;
    // The path to the location where cache will be persisted, only applied to a single location,
    // the number at the end indicates the capacity of the storage. When the feature is enabled,
    // administrator needs to replace the "<full directory path>" with a valid path,
    // e.g. DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB ;
    DATA_STORAGE_PATH     =     "C:\OracleBIData\cache" 500 MB;
    // Maximum disk space allowed for each user;
    MAX_SIZE_PER_USER = 100 MB ;
    // Maximum number of members in a level will be able to be persisted to disk
    MAX_MEMBER_PER_LEVEL = 1000 ;
    // Maximum size for each individual cache entry size
    MAX_CACHE_SIZE = 100 MB ;
    # Oracle Dimension Export Section
    [ ORA_DIM_EXPORT ]
    USE_SCHEMA_NAME_FROM_RPD = YES ; # NO
    DEFAULT_SCHEMA_NAME = "ORACLE";
    ORA_DIM_SCHEMA_NAME = "ORACLE";
    LOGGING = ON ; # OFF, DEBUG
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\OraDimExp.Log";
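
    For what it's worth, the cache settings referred to above are the [ CACHE ] block near the top of the posted file. If the intention is simply to rule the query cache out while testing (an assumption on my part about the earlier suggestion), the minimal change would be:

        [ CACHE ]
        ENABLE = NO;

    followed by a restart of the Oracle BI Server.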

  • Measure aggregation not used when OBIEE hits MV

    Hi,
    We are facing an issue with the materialized view (MV) implementation in OBIEE: the aggregation is not used for a measure when OBIEE hits the MV, but it is used when OBIEE hits the base table. For example, A is a column with aggregation SUM; when we pull measure A against the time dimension and the query hits the MV, the OBIEE query does not apply SUM, but when it hits the base table it does use SUM. Can anyone tell me how we can overcome this issue?
    Thanks,
    Amrit

    Hi Rajagopal,
    I am not sure if that is exactly accurate as I have other tables where the LTS includes a second or even third table but both tables are not included in the query.
    Example 1 - Table 1 joined to Table 2. Table 2 is setup as a LTS on Table 1. When a report is created off of Table 2, Table 1 is not included in the query.
    Example 2 - Table 1 joined to Table 2. Table 1 is setup as a LTS on Table 2. When a report is created off of Table 2, Table 1 is not included in the query.
    I have also verified that there is no other LTS on the Customer table but the issue still exists.
    Any other thoughts you can provide would be appreciated.
    Thanks

  • OBIEE 11g - Aggregation With filter

    Hi all ,
    I have a requirement where the condition to be applied in ONE COLUMN of a report is as follows:
    -- count of POs that have received something but are not yet fully received
    SELECT COUNT(*)
    FROM (SELECT SUM(quantity) qty_ord,
                 SUM(quantity_received) qty_rec,
                 SUM(quantity) - SUM(quantity_received) qty,
                 po_header_id
          FROM F_ERP_PO
          GROUP BY po_header_id)
    WHERE qty > 0
      AND qty_rec <> 0
    .. I have multiple columns in the report with varying conditions.
    I tried it in different ways but could not get the answer, since this condition applies to just one column and the next column has other conditions.
    Kindly help me simplify this report; I have been stuck with it for a long time.
    Thanks in advance,
    Regards,
    Niv d
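
    Not an Answers-level answer, but for comparison: when several report columns need different conditions over the same per-PO aggregates, they can be folded into one pass of the database query with CASE inside the outer aggregates. The sketch below reuses the F_ERP_PO columns from the query above; the second measure is only a made-up example of a "varying condition":

        SELECT COUNT(CASE WHEN ord_qty - rcv_qty > 0 AND rcv_qty <> 0 THEN 1 END) AS partly_received_count,
               COUNT(CASE WHEN rcv_qty = 0 THEN 1 END)                            AS nothing_received_count
        FROM (
            SELECT po_header_id,
                   SUM(quantity)          AS ord_qty,
                   SUM(quantity_received) AS rcv_qty
            FROM   F_ERP_PO
            GROUP  BY po_header_id
        );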

    Hi Chris,
    Thanks for your response.
    I tried using the FILTER() function in different ways. But I had a comparison like
    FILTER("F1 Facts"."Total Revenue" USING (sum("PO"."Quantity") = sum("PO"."Quantity_Recieved"))).
    Here the error is "Cannot use aggregation with the USING clause".
    I even tried the "Filter based on another request" feature; it throws an error saying "cannot use multiple select clause".
    Any suggestions would be very helpful.
    Regards,
    Niv D

  • Siebel to OBIEE migration - Nested Aggregation Issue

    Hi
    I recently migrated from Siebel Analytics to OBIEE, and after migrating I found that I am having consistency errors at the RPD level.
    Error:
    [14041] Error in measure definition for column X. Nested aggregate measure definitions are currently not supported.
    Formula for column X:
    X = Sum(A*B)
    where A and B are two columns from 2 different logical tables, and both A and B have aggregates like Max, Min or Avg...
    I need someone to guide me on how to fix the error and get the right data, because sometimes when we fix the error we still don't get correct values.
    regards

    Hi,
    Yes, you can't have a sum of a sum, as in:
    sum(sum(a) * sum(b))
    I suppose that you want:
    sum(a * b)
    You need to create a Calculation Measure Using Physical Columns to get the calculation a*b.
    Check here how to do it:
    http://www.oracle.com/technology/obe/obe_bi/bi_ee_1013/bi_admin/biadmin.html#t5s5
    And then apply the SUM aggregation rule on this column.
    Success
    Nico
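
    A tiny worked example (made-up numbers) of why the per-row product matters here:

        -- rows:            a = 2, b = 3   and   a = 4, b = 5
        -- sum(a*b)         = 2*3 + 4*5            = 26
        -- sum(a)*sum(b)    = (2+4) * (3+5)        = 48

    So the multiplication has to happen in the physical expression (per row), and only the result is then aggregated with SUM, as described above.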

  • OBIEE Aggregation Error

    Hi All,
    I am getting the following error when I try to see my results in a pivot table view.
    "Aggregation necessary but was not expected. This can occur when a level-based aggregate is missing its associated column. For instance, 'Country Dollars' may sometimes require the 'Country' column to be present. To better diagnose this problem look at the table view of this data and look for duplicate records"
    Error Codes: M7DK9XLX
    I have set the aggregation rule for the column at the RPD level to SUM. Can anyone provide a solution for this?

    All, I found the solution for this issue. I set the aggregation end level (Fact Table -> Logical Table Source -> Content -> select the end level of the logical content for the dimension) to the fact. Now it works fine; this happens when we have a larger number of drill-down levels.

  • OBIEE 11g Admin Tool aggregating measure columns

    I have a few measure columns in my logical table. I would like to sum the measure columns but I can't figure out how. When I create another logical table, only the physical columns show up, but no measure/calculated columns. What do I need to do?
    Thank you in advance

    Start >> All Programs >> OBI >> Administrator
    You will get the Admin tool.
    Assign points and mark the question as answered.

  • Setting aggregation content for logical level in 11g

    Hi Guys,
    When working with horizontal and vertical federation in OBIEE 11g with multiple data sources (in my case Essbase and an RDBMS):
    1) I pulled the columns and dragged them into the concerned table.
    2) The related hierarchies have been defined.
    3) When I go to one of the LTSs and try to set the logical level aggregation, I am not able to see the corresponding level columns, nor do I get the Get Levels option to fetch them. Where am I going wrong?
    When I try to join a fact by pulling it onto the fact, I can see the levels in the Content tab, but when I try to define the levels and check them, it gives me the error "There are no levels matching the BI algorithm".
    Any answers would be appreciated.
    TIA,
    KK
    Edited by: Kranthi.K on Sep 5, 2011 2:52 AM

    It is auto-created, I didn't customize it... I'm dropping the RDBMS table onto the Essbase cube dimension table and I'm not getting the RDBMS content levels that should be defined in the LTS of the table, and the RDBMS table has a level-based hierarchy, but still no success.
    Any more ideas?
    UPDATED POST
    Deepak, it was not helpful, as I had gone through that document before... I'm trying it in all scenarios to figure out where it is actually going wrong.
    If I don't find the path, I will let you know what I'm trying to do so you can help me out.
    UPDATED POST 2
    Any more pointers from the experts?
    Edited by: Kranthi.K on Sep 6, 2011 7:01 AM

  • Newbie questions regarding pulling data from hyperion planning cube w/OBIEE

    Hello there,
    we've recently implemented Essbase and are currently pumping it full of revenue/expense data from our source systems to calculate NOI. This data is stored in a staging table at the detail level, from where it is sourced into Hyperion and aggregated. We also have OBIEE 10g (we plan to upgrade to 11g later this year) and we would like to connect to and report out of Essbase. Our ultimate goal is to be able to report on the NOI numbers in the planning cube but have the ability to drill down to the detail level, which is not stored in Essbase. We've heard it is possible, albeit not native, for OBIEE 10g to do this. We've also heard that it is not best practice to use our transactional cube for this type of reporting, but to actually create a second "reporting cube".
    What are best practices for getting this NOI data out of Hyperion and merging it with our relational detail reporting? Can we somehow export the data from the cube and store it in a relational database? Should we clone the cube (if even possible) and configure both it and our relational source in the BI repository and setup all drill-throughs there?
    Any info is GREATLY appreciated. Thank you.
    Edited by: cisGuy on Sep 20, 2010 5:31 PM

    I have found information on how to use ODI to extract data from the cube. What I'm really trying to find out, though, is best practices for reporting off summary-level data in Essbase with the ability to drill through to the detail.
    We've heard that reporting off the same cube that users are writing back to and transacting on is bad. Do we need to make a "reporting cube" and then bring that into OBIEE and merge it with a relational source, or is it better to extract the data from Essbase into flat files and join it to detail tables in our relational source?
