Aggregation & Analysis

I'm attempting to use Discoverer to create a rolling 12-month attrition report. It works fine up to the point of trying to create an average headcount for each month in the current 12-month period. The problem I'm encountering involves the use of the MIN() function in selecting active employees in each month, and it stems from a data-cleansing issue I'd hoped to bypass. Some individuals have two "data conversion" records - i.e., they were converted to the new database and an additional, subsequent record re-used what should have been a unique action reason - so I need to test their MIN(position start date): against that first record I use their actual start, whereas I can go on to use their position start to capture their FTE for any subsequent active records.
So I created a calculation, Min Start, to hold the earliest start date for each employee:
MIN(Position.Start.Date) OVER (PARTITION BY Employee.Number ORDER BY Position.Start.Date RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
Then I test the status of each position for each month in the report and return the FTE where it tests as active through month-end.
CASE
  WHEN Position.Action.Reason = 'CNV' THEN
    CASE
      WHEN Position.Start.Date = Min Start THEN
        CASE WHEN Hire.Date <= Month.End THEN
          CASE WHEN Termination.Date IS NULL THEN FTE
               WHEN Termination.Date >= Month.End THEN FTE END END
      WHEN Position.Start.Date <= Month.End THEN
        CASE WHEN Hire.Date <= Month.End THEN
          CASE WHEN Termination.Date IS NULL THEN FTE
               WHEN Termination.Date >= Month.End THEN FTE END END
    END
  WHEN Position.Start.Date <= Month.End THEN
    CASE WHEN Termination.Date IS NULL THEN FTE
         WHEN Termination.Date >= Month.End THEN FTE END
END
However, I can't SUM this because nesting isn't permitted, and I can't average the sum because aggregation of analytic functions isn't allowed either. I need a different approach. Data is always going to be dirty, so coding to account for such problems means I can meet my reporting requirements without interruption for clean-ups. (I know that keeping the data clean is best, and that highlighting such problems brings them to the attention of the managers and staff who can rectify and avoid them, but I still need to get the results out.)
I'm at the end of every Google search and textbook I can locate, so any suggestions would be very welcome.
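My understanding is that the textbook SQL workaround is to compute the analytic expression in an inline view and aggregate it one level up, since an analytic function can't be an argument to SUM or AVG in the same query block. A minimal sketch of that shape, with made-up table and column names and simplified status logic:
SELECT month_end,
       SUM(active_fte) AS month_headcount
FROM (
    -- hypothetical sources: POSITIONS holds one row per position record,
    -- REPORT_MONTHS one row per month-end in the rolling window
    SELECT m.month_end,
           CASE
             -- keep only the earliest record per employee where the
             -- 'CNV' action reason was re-used; normal tests otherwise
             WHEN (p.action_reason <> 'CNV'
                   OR p.start_date = MIN(p.start_date)
                        OVER (PARTITION BY p.employee_number))
              AND p.hire_date <= m.month_end
              AND (p.termination_date IS NULL
                   OR p.termination_date >= m.month_end)
             THEN p.fte
           END AS active_fte
    FROM positions p
    CROSS JOIN report_months m
)
GROUP BY month_end
Is there a way to express that two-level structure within Discoverer - a custom folder, perhaps - or some other way around the restriction?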
Edited by: 814208 on 21/11/2010 19:14

Similar Messages

  • Error in .oci.GetQuery(conn, statement, ...) :    ORA-29400: data cartridge error ORA-24323: ????? ORA-06512: at "RQSYS.RQTABLEEVALIMPL", line 24 ORA-06512: at line 4

    Hi, everyone,
    I have installed R Enterprise on my Oracle 11.2.0.1 database on Windows 7, using R 2.13.2 and ORE 1.1. I can use part of the functionality, like:
    library(ORE)
    options(STERM='iESS', str.dendrogram.last="'", editor='emacsclient.exe', show.error.locations=TRUE)
    > ore.connect(user = "RQUSER",password = "RQUSERpsw",conn_string = "", all = TRUE)
    > ore.is.connected()
    [1] TRUE
    > ore.ls()
    [1] "IRIS_TABLE"
    > demo(package = "ORE")
    Demos in package 'ORE':
    aggregate               Aggregation
    analysis                Basic analysis & data processing operations
    basic                   Basic connectivity to database
    binning                 Binning logic
    columnfns               Column functions
    cor                     Correlation matrix
    crosstab                Frequency cross tabulations
    derived                 Handling of derived columns
    distributions           Distribution, density, and quantile functions
    do_eval                 Embedded R processing
    freqanalysis            Frequency cross tabulations
    graphics                Demonstrates visual analysis
    group_apply             Embedded R processing by group
    hypothesis              Hyphothesis testing functions
    matrix                  Matrix related operations
    nulls                   Handling of NULL in SQL vs. NA in R
    push_pull               RDBMS <-> R data transfer
    rank                    Attributed-based ranking of observations
    reg                     Ordinary least squares linear regression
    row_apply               Embedded R processing by row chunks
    sql_like                Mapping of R to SQL commands
    stepwise                Stepwise OLS linear regression
    summary                 Summary functionality
    table_apply             Embedded R processing of entire table
    > demo("aggregate",package = "ORE")
      demo(aggregate)
      ---- ~~~~~~~~~
    Type  <Return> to start : Return
    > #
    > #     O R A C L E  R  E N T E R P R I S E  S A M P L E   L I B R A R Y
    > #
    > #     Name: aggregate.R
    > #     Description: Demonstrates aggregations
    > #     See also summary.R
    > #
    > #
    > #
    >
    > ## Set page width
    > options(width = 80)
    > # List all accessible tables and views in the Oracle database
    > ore.ls()
    [1] "IRIS_TABLE"
    > # Create a new table called IRIS_TABLE in the Oracle database
    > # using the built-in iris data.frame
    >
    > # First remove previously created IRIS_TABLE objects from the
    > # global environment and the database
    > if (exists("IRIS_TABLE", globalenv(), inherits = FALSE))
    +     rm("IRIS_TABLE", envir = globalenv())
    > ore.drop(table = "IRIS_TABLE")
    > # Create the table
    > ore.create(iris, table = "IRIS_TABLE")
    > # Show the updated list of accessible table and views
    > ore.ls()
    [1] "IRIS_TABLE"
    > # Display the class of IRIS_TABLE and where it can be found in
    > # the search path
    > class(IRIS_TABLE)
    [1] "ore.frame"
    attr(,"package")
    [1] "OREbase"
    > search()
    [1] ".GlobalEnv"          "ore:RQUSER"          "ESSR"              
    [4] "package:ORE"         "package:ORExml"      "package:OREeda"    
    [7] "package:OREgraphics" "package:OREstats"    "package:MASS"      
    [10] "package:OREbase"     "package:ROracle"     "package:DBI"       
    [13] "package:stats"       "package:graphics"    "package:grDevices" 
    [16] "package:utils"       "package:datasets"    "package:methods"   
    [19] "Autoloads"           "package:base"      
    > find("IRIS_TABLE")
    [1] "ore:RQUSER"
    > # Select count(Petal.Length) group by species
    > x = aggregate(IRIS_TABLE$Petal.Length,
    +               by = list(species = IRIS_TABLE$Species),
    +               FUN = length)
    > class(x)
    [1] "ore.frame"
    attr(,"package")
    [1] "OREbase"
    > x
         species  x
    1     setosa 50
    2 versicolor 50
    3  virginica 50
    > # Repeat FUN = summary, mean, min, max, sd, median, IQR
    > aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
    +           FUN = summary)
         species Min. 1st Qu. Median  Mean 3rd Qu. Max. NA's
    1     setosa  1.0     1.4   1.50 1.462   1.575  1.9    0
    2 versicolor  3.0     4.0   4.35 4.260   4.600  5.1    0
    3  virginica  4.5     5.1   5.55 5.552   5.875  6.9    0
    > aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
    +           FUN = mean)
         species     x
    1     setosa 1.462
    2 versicolor 4.260
    3  virginica 5.552
    > aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
    +           FUN = min)
         species   x
    1     setosa 1.0
    2 versicolor 3.0
    3  virginica 4.5
    > aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
    +           FUN = max)
         species   x
    1     setosa 1.9
    2 versicolor 5.1
    3  virginica 6.9
    > aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
    +           FUN = sd)
         species         x
    1     setosa 0.1736640
    2 versicolor 0.4699110
    3  virginica 0.5518947
    > aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
    +           FUN = median)
         species    x
    1     setosa 1.50
    2 versicolor 4.35
    3  virginica 5.55
    > aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
    +           FUN = IQR)
         species     x
    1     setosa 0.175
    2 versicolor 0.600
    3  virginica 0.775
    > # More than one grouping column
    > x = aggregate(IRIS_TABLE$Petal.Length,
    +               by = list(species = IRIS_TABLE$Species,
    +                         width = IRIS_TABLE$Petal.Width),
    +               FUN = length)
    > x
          species width  x
    1      setosa   0.1  5
    2      setosa   0.2 29
    3      setosa   0.3  7
    4      setosa   0.4  7
    5      setosa   0.5  1
    6      setosa   0.6  1
    7  versicolor   1.0  7
    8  versicolor   1.1  3
    9  versicolor   1.2  5
    10 versicolor   1.3 13
    11 versicolor   1.4  7
    12  virginica   1.4  1
    13 versicolor   1.5 10
    14  virginica   1.5  2
    15 versicolor   1.6  3
    16  virginica   1.6  1
    17 versicolor   1.7  1
    18  virginica   1.7  1
    19 versicolor   1.8  1
    20  virginica   1.8 11
    21  virginica   1.9  5
    22  virginica   2.0  6
    23  virginica   2.1  6
    24  virginica   2.2  3
    25  virginica   2.3  8
    26  virginica   2.4  3
    27  virginica   2.5  3
    > # Sort the result by ascending value of count
    > ore.sort(data = x, by = "x")
          species width  x
    1   virginica   1.4  1
    2   virginica   1.7  1
    3  versicolor   1.7  1
    4   virginica   1.6  1
    5      setosa   0.5  1
    6      setosa   0.6  1
    7  versicolor   1.8  1
    8   virginica   1.5  2
    9  versicolor   1.1  3
    10  virginica   2.4  3
    11  virginica   2.5  3
    12  virginica   2.2  3
    13 versicolor   1.6  3
    14     setosa   0.1  5
    15  virginica   1.9  5
    16 versicolor   1.2  5
    17  virginica   2.0  6
    18  virginica   2.1  6
    19     setosa   0.3  7
    20 versicolor   1.4  7
    21     setosa   0.4  7
    22 versicolor   1.0  7
    23  virginica   2.3  8
    24 versicolor   1.5 10
    25  virginica   1.8 11
    26 versicolor   1.3 13
    27     setosa   0.2 29
    > # by descending value
    > ore.sort(data = x, by = "x", reverse = TRUE)
          species width  x
    1      setosa   0.2 29
    2  versicolor   1.3 13
    3   virginica   1.8 11
    4  versicolor   1.5 10
    5   virginica   2.3  8
    6      setosa   0.4  7
    7      setosa   0.3  7
    8  versicolor   1.0  7
    9  versicolor   1.4  7
    10  virginica   2.1  6
    11  virginica   2.0  6
    12  virginica   1.9  5
    13 versicolor   1.2  5
    14     setosa   0.1  5
    15 versicolor   1.6  3
    16 versicolor   1.1  3
    17  virginica   2.4  3
    18  virginica   2.5  3
    19  virginica   2.2  3
    20  virginica   1.5  2
    21  virginica   1.6  1
    22  virginica   1.4  1
    23     setosa   0.6  1
    24     setosa   0.5  1
    25 versicolor   1.8  1
    26  virginica   1.7  1
    27 versicolor   1.7  1
    > # Preserve just 1 row for duplicate x's
    > ore.sort(data = x, by = "x", unique.keys = TRUE)
          species width  x
    1      setosa   0.5  1
    2   virginica   1.5  2
    3  versicolor   1.1  3
    4      setosa   0.1  5
    5   virginica   2.0  6
    6      setosa   0.3  7
    7   virginica   2.3  8
    8  versicolor   1.5 10
    9   virginica   1.8 11
    10 versicolor   1.3 13
    11     setosa   0.2 29
    > ore.sort(data = x, by = "x", unique.keys = TRUE, unique.data = TRUE)
          species width  x
    1      setosa   0.5  1
    2   virginica   1.5  2
    3  versicolor   1.1  3
    4      setosa   0.1  5
    5   virginica   2.0  6
    6      setosa   0.3  7
    7   virginica   2.3  8
    8  versicolor   1.5 10
    9   virginica   1.8 11
    10 versicolor   1.3 13
    11     setosa   0.2 29
    But when I use the ore.doEval command, I get the following errors:
    > ore.doEval(function() { 123 })
    Error in .oci.GetQuery(conn, statement, ...) :
      ORA-29400: data cartridge error
    ORA-24323: ?????
    ORA-06512: at "RQSYS.RQEVALIMPL", line 23
    ORA-06512: at line 4
    And when I try to run demo("row_apply", package = "ORE"), I get the same errors:
    demo("row_apply",package = "ORE")
      demo(row_apply)
      ---- ~~~~~~~~~
    Type  <Return> to start : Return
    > #
    > #     O R A C L E  R  E N T E R P R I S E  S A M P L E   L I B R A R Y
    > #
    > #     Name: row_apply.R
    > #     Description: Execute R code on each row
    > #
    > #
    >
    > ## Set page width
    > options(width = 80)
    > # List all accessible tables and views in the Oracle database
    > ore.ls()
    [1] "IRIS_TABLE"
    > # Create a new table called IRIS_TABLE in the Oracle database
    > # using the built-in iris data.frame
    >
    > # First remove previously created IRIS_TABLE objects from the
    > # global environment and the database
    > if (exists("IRIS_TABLE", globalenv(), inherits = FALSE))
    +     rm("IRIS_TABLE", envir = globalenv())
    > ore.drop(table = "IRIS_TABLE")
    > # Create the table
    > ore.create(iris, table = "IRIS_TABLE")
    > # Show the updated list of accessible table and views
    > ore.ls()
    [1] "IRIS_TABLE"
    > # Display the class of IRIS_TABLE and where it can be found in
    > # the search path
    > class(IRIS_TABLE)
    [1] "ore.frame"
    attr(,"package")
    [1] "OREbase"
    > search()
    [1] ".GlobalEnv"          "ore:RQUSER"          "ESSR"              
    [4] "package:ORE"         "package:ORExml"      "package:OREeda"    
    [7] "package:OREgraphics" "package:OREstats"    "package:MASS"      
    [10] "package:OREbase"     "package:ROracle"     "package:DBI"       
    [13] "package:stats"       "package:graphics"    "package:grDevices" 
    [16] "package:utils"       "package:datasets"    "package:methods"   
    [19] "Autoloads"           "package:base"      
    > find("IRIS_TABLE")
    [1] "ore:RQUSER"
    > # The table should now appear in your R environment automatically
    > # since you have access to the table now
    > ore.ls()
    [1] "IRIS_TABLE"
    > # This is a database resident table with just metadata on the R side.
    > # You will see this below
    > class(IRIS_TABLE)
    [1] "ore.frame"
    attr(,"package")
    [1] "OREbase"
    > # Apply given R function to each row
    > ore.rowApply(IRIS_TABLE,
    +              function(dat) {
    +                  # Any R code goes here. Operates on one row of IRIS_TABLE at
    +                  # a time
    +                  cbind(dat, dat$Petal.Length)
    +              })
    Error in .oci.GetQuery(conn, statement, ...) :
      ORA-29400: data cartridge error
    ORA-24323: ?????
    ORA-06512: at "RQSYS.RQROWEVALIMPL", line 26
    ORA-06512: at line 4
    >
    Could it be that my Oracle version, 11.2.0.1, is missing the required RDBMS bug fix, or is there some other problem? Thanks

    Oracle R Enterprise 1.1 requires Oracle Database 11.2.0.3 or 11.2.0.4, on Linux and Windows. Oracle R Enterprise can also work with an 11.2.0.1 or 11.2.0.2 database if it is properly patched.
    Embedded R execution will not work without a patched database.  Follow this procedure to patch the database:
    1. Go to My Oracle Support: http://support.oracle.com
    2. Log in and supply your Customer Support ID (CSI).
    3. Choose the Patches & Updates tab.
    4. In the Patch Search box, type 11678127 and click Search.
    5. Select the patch for your version of Oracle Database, 11.2.0.1.
    6. Click Download to download the patch.
    7. Install the patch using OPatch. Ensure that you are using the latest version of OPatch.
    Sherry

  • Doubt about uses of OBIEE

    I have some doubts about the possible uses of OBIEE. Sometimes users demand reports of an "analytical" type, that is, aggregated analysis through OBIEE's Answers, selecting attributes from dimension tables and measures from fact tables. That's the ordinary purpose of business intelligence tools!
    Other times, though, users want to perform analyses of an "operating" type through Answers, that is, simple extractions of some fields belonging to dimension tables, linked to each other through joins (hence without querying fact tables). That happens because some of the tables brought into the data warehouse are not directly linked to any fact table. Users want to use Answers to visualize data even for this kind of extraction (or operating report).
    Is this a correct use of the tool, or is it just a "twisted" way of using it that will eventually lead to incorrect extractions? If it's the latter, is it possible to use BI Publisher instead, extracting the dataset through the "SQL Query" mode in a visual manner? The problem with that solution, in my case, lies in the fact that the users are not sufficiently skilled technically: they would prefer to use Answers for every extraction, of both the first type (aggregations) and the second (extractions) that I just described. Can you suggest a methodology to clarify this situation?

    Hi,
    I understand your point... but I think OBIEE doesn't allow dimensions "on their own"; they must be joined to a fact table somehow. This way, when you run a query in Answers using fields from two dimension tables, a fact table is always involved. When dimensions are conformed, several fact tables may be usable, and OBIEE uses the "best" one in terms of performance. However, there are some tricks you can use to make sure a particular fact table is chosen, like the "implicit fact column" in the presentation layer.
    So, back to your point: using OBIEE for "operational" reporting, as you call it, is a valid option in my experience, but you have to make sure that the underlying star schema supports the logic that your end users expect when they use just dimension fields.
    Regards,

  • Noetix Views

    Hi
    Could you please explain Noetix Views (or Noetix Generator)?
    NoetixViews works with Noetix Generator to power popular business intelligence (BI) reporting tools, such as Oracle Business Intelligence Suite Enterprise Edition, Oracle Discoverer, and other leading BI tools, to accelerate access to Oracle EBusiness Suite application data.
    I would like to know how it works exactly... has any one of you worked on Noetix Views? An explanation with a real-time example would be really helpful.

    I have not directly worked with Noetix, but it is basically a company that creates reporting via materialized views off the EBS system tables. It is a popular product, and the views are essentially built from a deep understanding of the EBS system, tables, flexfields, etc. In general, this is OLTP-based reporting, similar to DBI and also Oracle Fusion OTBI, which is likewise OLTP-based table reporting.
    The difference between Noetix and OBIA (and other DW solutions) is that Noetix involves no ETL process, nor does it rely on a denormalized dimensional model. In essence it is good for transactional reports, but not for aggregated analysis. For example, Noetix may be a good fit for an Invoice Details report, but if you are looking for month-over-month AR Aging reports, it is better to have a DW solution like OBIA, since that requires a lot of aggregation; even with views it will run very slowly, which is not suitable for OLTP systems.
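    To make that concrete, here is the flavor of reporting view such a generator builds over the OLTP tables (illustrative names only - not actual Noetix or E-Business Suite objects):
    -- Flattens normalized OLTP tables into one report-ready view
    CREATE OR REPLACE VIEW invoice_detail_v AS
    SELECT i.invoice_num,
           i.invoice_date,
           s.vendor_name,
           l.line_amount
    FROM ap_invoices i
    JOIN ap_suppliers s ON s.vendor_id = i.vendor_id
    JOIN ap_invoice_lines l ON l.invoice_id = i.invoice_id;
    A BI tool then queries invoice_detail_v directly - fine for transactional detail, slow for heavy aggregation, as noted above.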
    If helpful, pls mark as correct or helpful

  • Analysis Authorizations - Aggregation Level ( ':' )

    Dear BW Gurus,
    Greetings!!!
    I have a scenario of migrating the Authorizations from BW 3.5 to BI 7.0 Analysis Authorizations.
    There is a report based on an InfoSet. There are about 8 authorization-relevant objects, among which the user is authorized for 3 authorization-relevant characteristic fields. In the previous version, it was working fine when tested.
    In the BI 7.0 Analysis Authorization concept, I created the authorization with these 3 auth-relevant fields, assigned it to the role using S_RS_AUTH, and then assigned the role to a user. When I test the query with that particular user ID, the result was NO SUFFICIENT AUTHORIZATION.
    When I checked the log, it displayed the other authorization-relevant fields for the aggregation level. So my question is whether I must include the other authorization-relevant fields and restrict them to the aggregation level (value ':'), and is this mandatory?
    Please guide me in this regard as early as possible.
    Best Regards,
    Priya

    Hello,
    Check ST01 and SU53 for missing objects, then assign the objects accordingly in the role or in the analysis authorization.
    Thanks.
    With regards,
    Anand Kumar

  • Scope of Analysis & Aggregation

    Hi All,
    I am new to Business Objects Edge and I am struggling to do something quite simple. I have the following tables:
    Table 1
    Sales Rep | Target
    A         | $100
    Table 2
    State | Sales Rep | Clinic | Revenue
    NSW   | A         | X      | $5
    NSW   | A         | Y      | $8
    In Web Intelligence, I want a report that returns the target at the state level first, and then, using the drill-down functionality, at the sales rep and clinic level. When using the State/Sales Rep scope of analysis I get the right result, but as soon as I add the clinic to the scope, the result is wrong.
    Results
    Scope of analysis = State/Sales Rep
    State | Target | Revenue
    NSW   | $100   | $13
    Scope of analysis = State/Sales Rep/Clinic
    State | Target | Revenue
    NSW   | $200   | $13
    How should I define the target (in the universe or in WebI) so that I can get the right result (i.e., $100) with the clinic in the scope of analysis?
    Thanks in advance.
    Regards,
    Emilie

    Hi
    BO Edge works as follows:
    since you are displaying the data of two queries in one table it builds the product of the two tables:
    State | Sales Rep | Clinic |Revenue| Target
    NSW |A |X |$5 | $100
    NSW |A |Y |$8 | $100
    If you assign only the State and the Sales Rep dimensions in your table then WebI aggregates the Target and Revenue key figures. I assume that you have assigned your key figures in the universe the sum operator.
    Table 3
    State | Sales Rep | Revenue| Target
    NSW |A |X |($5 + $8) | ($100 + $100)
    Keep in mind that first the aggregated table is build internally (Table 3) and then the drill is done.
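    In plain SQL the effect is the classic fan trap: the rep-level target joins to two clinic-level revenue rows and is summed twice. A minimal, self-contained illustration (hypothetical tables, Oracle syntax):
    WITH targets AS (
      SELECT 'A' AS sales_rep, 100 AS target FROM dual
    ), revenue AS (
      SELECT 'NSW' AS state, 'A' AS sales_rep, 'X' AS clinic, 5 AS revenue FROM dual
      UNION ALL
      SELECT 'NSW', 'A', 'Y', 8 FROM dual
    )
    SELECT r.state,
           SUM(t.target)  AS target,  -- 200: the $100 repeats per clinic row
           SUM(r.revenue) AS revenue  -- 13: correct
    FROM revenue r
    JOIN targets t ON t.sales_rep = r.sales_rep
    GROUP BY r.state;
    The usual fix is to keep Target at the Sales Rep grain in its own query block (or as a measure whose aggregation is delegated to the database), so it is never fanned out to the clinic level.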
    Regards,
    Stratos

  • Aggregation authorization within analysis authorizations

    Hello,
    The BW developers have created several queries that use aggregated data. When testing the queries with the end-user ID (assigned to an end-user role and analysis authorization), they fail due to missing authorizations, particularly because of the following:
    Supplementation of Selection for Aggregated Characteristics
      Check Added for Aggregation Authorization:     0COMP_CODE  
      Check Added for Aggregation Authorization:     0PLANT  
    All Authorizations Tested
      Message EYE007: You Do Not Have Sufficient Authorization  
      No Sufficient Authorization for This Subselection (SUBNR)  
    Following CHANMIDs Are Affected:
    142 ( 0COMP_CODE )
    189 ( 0PLANT )
    192 ( 0SALESORG )
    In the BI documentation, it specifies that the colon (:) allows only aggregated access to data (e.g., it allows information on all sales areas only at an aggregated level, not on particular countries). Is this (:) specified as a value within the particular characteristics? I also tried selecting the "aggregation authorization" icon within transaction code RSECADMIN, and this did not help.
    Any help would be GREATLY appreciated.

    Updated: For the analysis authorization, for these characteristics, in addition to the specific values, there is an entry for ":" and this did not resolve the issue.
    Authorization Check  
      Detail Check for InfoProvider ZBL_13IT  
      Preprocessing:  
    Selection Checked for Consistency, Preprocessed and Supplemented As Needed
    Subselection (Technical SUBNR) 1
    Check Node Definitions and Value Authorizations...
    Node- and Value Authorizations Are OK
    End of Preprocessing
      Main Check:  
      Subselection (Technical SUBNR) 1  
    Supplementation of Selection for Aggregated Characteristics
      Check Added for Aggregation Authorization:     0COMP_CODE  
      Check Added for Aggregation Authorization:     0PLANT  
    Following Set Is Checked  Comparison with Following Authorized Set  Result  Remaining Set 
    Characteristic  Contents 
    0PLANT
    0SALESORG
    0TCAACTVT
    0COMP_CODE
    SQL Format:
    COMP_CODE = ':'
    AND PLANT = ':'
    AND SALESORG = '0403'
    AND TCAACTVT = '03'
    Characteristic  Contents 
    0PLANT  I EQ IC01
    I EQ IC02
    I EQ IC03
    I EQ IE02
    I EQ IE04
    I EQ IZ01
    I EQ MC01
    I EQ ME01
    I EQ :
    0SALESORG  I EQ 0301
    I EQ 0341
    I EQ :
    0TCAACTVT  I EQ 03
    I EQ :
    0COMP_CODE  I EQ 0327
    I EQ 0341
    I EQ :
    Not Authorized   
    All Authorizations Tested
      Message EYE007: You Do Not Have Sufficient Authorization  
      No Sufficient Authorization for This Subselection (SUBNR)  
    Following CHANMIDs Are Affected:
    142 ( 0COMP_CODE )
    189 ( 0PLANT )
    192 ( 0SALESORG )
      Authorization Check Complete

  • EYE 007 Aggregated Value for Analysis Authorisations

    Hi there,
    I'm attempting to unit test a new report in our development environment via RSECADMIN. Having created the role and assigned it to the test user, I get an error saying that aggregated values for particular characteristics are empty. However, I've already added these to an analysis authorisation and used it for another report, where it finds the characteristics.
    I'm stumped as to why this report doesn't find the same values. I've generated the role and run a user master compare, but it still fails. Any help is appreciated.
    Thanks.

    1. Take the InfoProvider on which you have created your query and find which characteristics are authorization-relevant for that MultiProvider/InfoProvider.
    2. Make sure all of these characteristics are added to the analysis authorizations assigned to the user: detailed field values for the ones your report is about, the aggregated value (:) for the others, and all the relevant 0TCA* content as well.
    The report should then work. However, in your case it seems you are assigning the characteristics using separate analysis authorizations; in that case, make sure the concerned InfoProvider is mentioned in each analysis authorization under 0TCAIPROV for the analysis authorizations to combine.

  • WebI issue with hierarchy display and aggregation

    Trying to wrangle what looks like a defect in WebI's handling of hierarchy display and aggregation. We just completed an update cycle and are running BOBJ 4.1 SP4.
    The hierarchy is a standard FM Commitment Item hierarchy in which both the nodes and leaves are Commitment Items (i.e. it uses InfoObject nodes, not text nodes). An example of one of these nodes looks like this in BW:
    Cmmt_Item A - Node
        Cmmt_Item B - Leaf
        Cmmt_Item C - Leaf
    Let's pretend Commitment Item A has $50 posted to it, B has $20 and C has $30. Analysis for OLAP handles this by adding a virtual leaf line to distinguish postings that are on the parent node like so:
    Cmmt_Item A - Node       $100
        Cmmt_Item A - Leaf    $50
        Cmmt_Item B - Leaf    $20
        Cmmt_Item C - Leaf    $30
    So you see both the total for the node ($100) and a line for each Commitment Items with KFs posted to them. Our users like this. They can easily see the aggregation and the breakdown.
    WebI, on the other hand, will display it like this:
    Cmmt_Item A - Node       $150
        Cmmt_Item B - Leaf    $20
        Cmmt_Item C - Leaf    $30
    It doesn't create a separate line for the value of the parent node, but it does add its value into the aggregate. Twice. Modifying the table with the 'avoid duplicate row aggregation' checkbox yields output like this:
    Cmmt_Item A - Node       $100
    Cmmt_Item A - Node        $50
        Cmmt_Item B - Leaf    $20
        Cmmt_Item C - Leaf    $30
    We're about halfway there. While the top row now shows the correct aggregation and it creates a new line to show the distinct amount on the parent node, that new line appears on the same level as the parent. It's no longer clear that there's an aggregate and a breakdown. And attempting to expand or contract a node will now crash the report with one of those 'Error 16' messages.
    Has anyone encountered this issue with hierarchies in WebI? This report was built from scratch in 4.1, so I'm not sure if this affects older versions or not. Or if it would affect any hierarchy that uses InfoObject nodes instead of text nodes.

    Without a fix, the simplest workaround I can think of would be to restructure the hierarchy. It can't use postable nodes, so Cmmt_Item A  - Node from my example would need to be converted into a text node and the postable characteristic added as a child on the same level as the B and C leaves.
    This looks like it would affect anyone using hierarchies with postable nodes in a WebI report.
    Another oddity in WebI's behavior here: even though the postable nodes show incorrect sums, the sum at the root node is correct. So, extending my examples from the original post:
    Root Node                    $100
        Cmmt_Item A - Node       $150
            Cmmt_Item B - Leaf    $20
            Cmmt_Item C - Leaf    $30

  • Hyperion Web Analysis Studio Release 11.1.2

    Hi
    I have an issue regarding Hyperion Web Analysis Studio Release 11.1.2. I am using Essbase as a data source.
    I want to make a report that aggregates rows from two dimensions (LOB and Accounts).
    Instead of adding the two dimensions in the row section, I want one cell that aggregates the two dimensions.
    The value of the cell is the result of the intersection between the two dimensions.
    For example, instead of having
    Account | LOB
    I want a cell that calculates (LOB->Account) for each row.
    Thanks
    Edited by: 840149 on Feb 27, 2011 8:45 AM

    You may have more success posting in the Hyperion reporting forum: Hyperion Query and Reporting
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How to get Query Results based on Analysis Authorization Ranges????

    Hi Experts,
    I have gone through a lot of the SDN links; however, I was not able to find the answer to my question.
    I have an authorization issue: "No Authorization".
    Error: EYE 007 (Insufficient Authorizations)
    Here is the issue:
    I need to see the complete query result when I give the range 001-005 in the Analysis Authorization for Controlling Area. Controlling Area is auth-relevant, and right now a variable is inserted in the query for it. If I select Controlling Area 001, the result for Controlling Area 001 is displayed in the query; if 002, it is displayed as well. If I do not enter anything, I get the EYE 007 error message.
    I am not sure how to display/authorize the entire result in the query for all the Controlling Areas the user is authorized to see.
    It's really urgent, please help!
    Here are the logs:
    Authorization Check Log
    Date and Execution Time (Local Server)
    Execution Date: 06.09.2007
    Execution Time: 14:48:41
    Executed Query: 0CCA_C11/GBCCA_MP01_Q0002_AP
    Executed by User ZBI_TEST_001
    Executed with Analysis Authorizations of Another User ZBI_TEST_001
      InfoProvider Check  
    Building the Buffer...
    ...Buffer Built
    Are there authorizations for accessing InfoProvider 0CCA_C11 with activity 03?
    Authorization exists for general access to InfoProvider 0CCA_C11 with activity 03 
      InfoProvider Check  
    Authorization exists for general access to InfoProvider 0CCA_C11 with activity 03 
      Relevant Characteristics for Detailed Authorization Check  
    (Characteristics with Full Authorization Are Not Listed!)
      List of Effective Authorization-Relevant Characteristics for InfoProvider 0CCA_C11:  
    0CO_AREA 
    0TCAACTVT 
      Relevant Characteristics for Detailed Authorization Check  
    (Characteristics with Full Authorization Are Not Listed!)
      List of Effective Authorization-Relevant Characteristics for InfoProvider :  
    List Is Empty:
      There Are No Characteristics That Have to Be Checked in Detail  
      Authorization Check  
      Detail Check for InfoProvider 0CCA_C11  
      Preprocessing:  
    Selection Checked for Consistency, Preprocessed and Supplemented As Needed
    Subselection (Technical SUBNR) 1
    Check Node Definitions and Value Authorizations...
    Node- and Value Authorizations Are OK
    End of Preprocessing
    Filling the Buffer...
    ...Buffer Filled
      Main Check:  
      Subselection (Technical SUBNR) 1  
    Supplementation of Selection for Aggregated Characteristics
      No Check for Aggregation Authorization Required  
    Following Set Is Checked  Comparison with Following Authorized Set  Result  Remaining Set 
    Characteristic  Contents 
    0CO_AREA
    0TCAACTVT
    SQL Format:
    CO_AREA = '0003'
    AND TCAACTVT = '03'
    Characteristic  Contents 
    0CO_AREA  I BT 0001 0005
    0TCAACTVT  I EQ 03
    I EQ 16
    Authorized   
      Subselection (SUBNR) Is Authorized  
      Authorization Check Complete  
      Authorization Check  
      Detail Check for InfoProvider 0CCA_C11  
      Preprocessing:  
    Selection Checked for Consistency, Preprocessed and Supplemented As Needed
    Subselection (Technical SUBNR) 1
    Check Node Definitions and Value Authorizations...
    Node- and Value Authorizations Are OK
    End of Preprocessing
    Filling the Buffer...
    ...Buffer Filled
      Main Check:  
      Subselection (Technical SUBNR) 1  
    Supplementation of Selection for Aggregated Characteristics
      No Check for Aggregation Authorization Required  
    Following Set Is Checked  Comparison with Following Authorized Set  Result  Remaining Set 
    Characteristic  Contents 
    0CO_AREA
    0TCAACTVT
    SQL Format:
    TCAACTVT = '03'
    Characteristic  Contents 
    0CO_AREA  I BT 0001 0005
    0TCAACTVT  I EQ 03
    I EQ 16
    Partially or Fully Authorized (Intersection)   Characteristic  Contents 
    0CO_AREA
    0TCAACTVT
    SQL Format:
    ( CO_AREA < '0001'
    OR CO_AREA > '0005' )
    AND TCAACTVT = '03'
    Value selection partially authorized. Check of remainder at end
    Following Set Is Checked  Comparison with Following Authorized Set  Result  Remaining Set 
    Characteristic  Contents 
    0CO_AREA
    0TCAACTVT
    SQL Format:
    ( CO_AREA < '0001'
    OR CO_AREA > '0005' )
    AND TCAACTVT = '03'
    Characteristic  Contents 
    0CO_AREA  I BT 0001 0005
    0TCAACTVT  I EQ 03
    I EQ 16
    Not Authorized   
    All Authorizations Tested
      Message EYE007: You do not have sufficient authorization  
      No Sufficient Authorization for This Subselection (SUBNR)  
    Following CHANMIDs Are Affected:
    184 ( 0CO_AREA )
      Authorization Check Complete  

    Hi,
    Have you defined the value for 0CO_AREA as BT 0001-0005 in your authorization for 0CO_AREA? Also, how have you defined your authorization variable on the query - as select options or as an interval? I think you need to define it as an interval or as select options.
    Hope it helps,
    Cheers,
    Balaji

  • Analysis Authorization with SEM-BPS

    Hi,
    We have performed a technical upgrade from BW 3.5 to BI 7.0. We want to migrate to BI 7.0 functionality phase-wise.
    We have SEM-BPS, and now we want to migrate to the Analysis Authorizations of BI 7.0.
    Once we have migrated to Analysis Authorizations, will there be any impact on SEM-BPS? Can we still use SEM-BPS with the new Analysis Authorizations? We do not want to move to BI-IP in the near future.
    Please advise.
    Best Regards,
    UR

    Dear UR,
    I'm going to try to help you.
    Unlike reporting functionality, in planning the data of an InfoCube is not just read; it is also changed or created.
    There are two planning tools in BI: BW-BPS (Business Planning and Simulation) and BI Integrated Planning.
    There are two main tcodes: BPS0 and RSPLAN.
    There are three authorization objects to manage Integrated Planning:
    S_RS_PL_ADMIN - Planning Administrator
    S_RS_PL_PLANNER - Planner
    S_RS_PL_PLANMOD_D - Planning Modeler (Development System)
    The main object in a planning scenario is a real-time InfoCube, which supports writing data in small packets that arrive in parallel. In some cases the security requirements for reporting and planning merge; in that case you need the planning authorization objects above for checking planning, and you also need the authorization object S_RS_COMP for using a query for planning.
    In addition to authorizations for displaying data, for changing data you need an analysis authorization (the analysis authorization is based on the InfoProvider, not on the aggregation level).
    In your analysis authorization design, for reporting you should use value 03 in the 0TCAACTVT characteristic. For planning, you should use values 03 and 02 in 0TCAACTVT, as explained here:
    Using the characteristics 0TCAACTVT (activity), you can restrict the authorization to different activities. Read (03) is set as the default activity; you must also assign the activity Change (02) for integrated planning.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/b1/0c9441b8972e7be10000000a1550b0/frameset.htm
    I hope this suggestion helps answer your question.
    Luis

  • Analysis Office vs Webi report ---Bex query Data mismatch

    I have one Bex query.
    I inserted the query in Analysis Office for Microsoft Office edition and it pulls 985 records, but when I inserted the BEx query in a Webi report it pulls only 27 records. Has anybody faced a similar problem? Please explain the solution.
    Using a BICS connection for Webi report development.
    Environment: BI 4.1 SP5
    Analysis Office version: 1.4 SP10

    Hi Ingo,
    If we make it database delegated, then the Webi query returns no results (blank results). Moreover, there is no exception aggregation.
    We have tried the following and are now getting the correct result:
    In the BEx query we had a time variable that was based on a SAP exit. We removed the SAP exit variable from the BEx query and restricted the period to a time range.
    Then we created a prompt in the Webi report for the time.
    The results match.
    But we still don't understand this, because other queries that have SAP exit variables work fine.
    Still working on a convincing solution for this. Will let you know.
    Thanks and Regards
    Rajesh

  • How to carry over values differently (aggregated or not), or is it possible

    Dear all:
    In our Account Dimension, we have Beginning Inventory Qty, Ending Inventory Qty, Purchased Amt, Sold Amt, etc.
    When we run reports with calculated time members, such as 2008.Total, 2008.H2, 2008.Q4, etc., most of our accounts display correctly with proper aggregation (e.g., Purchased Qty and Sold Qty are the summation of all child months'). But Beginning Inventory Qty and Beginning Inventory Amount in particular are where I am having trouble.
    For instance, for Ending Inventory Qty, we assigned ACCTYPE AST, so the report ignores aggregation and carries the last month's value to the calculated time member. However, for Beginning Inventory Qty and Amount, I wish to assign the values of the first month. If it is 2008.H2, then 2008.H2 Beginning Inventory Qty should be the same as in 2008.07. Another example would be 2008.Q4's Beginning Inventory Amount being the same as 2008.10's.
    Is this doable in BPC?
    Thank you!
    Brian

    Hi Brian,
    It may be possible to design the opening balances in the way that you propose here. To get the correct results in your approach, you will be relying in large part on the cube and Analysis Services to figure out -- for certain intersections of accounts & time levels -- that it should look to the opening period rather than the standard time dim aggregation. I assume that would require some fancy MDX programming of a custom measure.
    I would recommend a different approach -- in particular if you plan to use the consolidation engine business rules -- that you store the opening balances in each of the 12 months. This makes it easy for the time-dim aggregation: for Q3, it looks at Sep; for H2, it looks at Dec, and so on -- and this is exactly what the cube does naturally. Sure, it may feel like some data duplication for no valid business purposes, but in some cases we master the technology, and in other cases the technology masters us. This is one of the latter cases.
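    A hypothetical SQL sketch of that idea (illustrative table and member names, not BPC's actual schema): seed the opening-balance account for every month of the new year from the prior year-end closing balance, so whichever child month a time aggregate resolves to, it finds the same, correct value.
    -- Copy the 2007.DEC closing inventory into BegInventoryQty for all
    -- twelve months of 2008 (fact_finance and dim_time are made up).
    INSERT INTO fact_finance (account, time_id, signeddata)
    SELECT 'BegInventoryQty', t.time_id, f.signeddata
    FROM fact_finance f
    CROSS JOIN dim_time t
    WHERE f.account = 'EndInventoryQty'
      AND f.time_id = '2007.DEC'
      AND t.year = 2008;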
    So then the question is, which opening balances? This can be tricky. It comes down to the question, are opening balances for each month = the closing balance from prior year December (or whatever your final period of the fiscal year is), or are they = the closing balance of the prior month?
    BPC can handle it either way, and for a legal consol I would always recommend the first approach (prior year December). For a planning app it depends on the business requirements for B/S planning. The copy opening balances business rule is designed for the former. The account transformation rules can be used to handle the latter (and also support redirection across the flow-type dimension), with a source period = -1.
    It also depends in part on your application's data storage type (YTD or periodic), particularly if you expect the system to create a cash flow statement for you, where the P&L data comes into play too.
    As Petar mentioned, you should also carefully consider using a flow / movement / accdetail dimension. It's not always helpful, but if there are lots of accounts that you're tracking movements on, it can be worthwhile -- even if it means a lot of mapping of the ERP COA to the different movements. It's not intuitive for people who haven't spent a lot of time with their heads in the OLAP cube, and are used to a flat COA concept. But it can be very powerful, since the consolidation engine takes enormous advantage of this. It makes the setup of the "copy opening balances" logic rules very, very simple. And it could either complicate, or simplify, some report layouts.
    Usually it's one of those dimensions (like datasrc) that customers only really start to appreciate after 6 months on the project.
    Hope that helps.... and if you do succeed in your initial proposed approach, I'd be very curious to hear how you did it.
    Regards,
    Tim

  • Permanently change default error configuration in Analysis Services 2005

    Hi,
    Currently, I am working on a BPC 5.1 application.  The data for this application is loaded(inserted via SQL statement) right to the FACT table and then a full process is run for that cube via an SSIS package using the Analysis Services Processing Task.  Often records are loaded this way where a dimension member for some of the records has not been added to the Account dimension yet.  These records after loading are considered 'orphan records' until the accounts are added to the account dimension.
    This loading process is used because of the volume of records loaded(over 2 million at a time) and the timing of the company's business process.  They will receive data sometimes weeks before the account dimension is updated in BPC with the new dimension members.
    If I try and process the application from the BPC Administration area with these orphan records in the FACT table, the processing stops and an error displays.  Then when I process the cube from Analysis services, an error is displayed telling me that orphan data was found.
    A temporary work-around is to go into the cube properties in Analysis Services 2005, click on Error Configuration, uncheck 'Use default error configuration' and select 'Ignore errors'. Then you can process the application from BPC's Administration page successfully.  But, the problem is that after processing the application successfully, the Analysis Services Error Configuration automatically switches back from 'Ignore errors' to 'Use default error configuration'.
    Does anyone have any suggestions on how to permanently keep the 'Ignore errors' configuration selected so it does not automatically switch back to 'Use default error configuration'?  Prior to BPC 5.0 this was not occurring.
    Also, does anyone know why this was changed in BPC 5.0/5.1?
    Thanks,
    Glenn

    Hi Glenn,
    I understand the problem, but I can say that it comes from a bad migration of the appset from 4.2 to 5.0.
    Anyway, they are using a DTS package to import data into the fact table. That means they have to add another step to that package that verifies the records before inserting them into the fact table (sketched below). The verification can be done using the same mechanism as our standard import: just edit that package and add similar steps to the customer's package.
    Note that you need somebody with experience developing DTS packages for BPC to avoid other problems.
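    As a sketch of what that verification step could look like (illustrative T-SQL; the staging, reject, and dimension table names are made up, and the reject table is assumed to have the same columns as the staging table):
    -- Quarantine staging rows whose account is not yet in the account
    -- dimension, then load only clean rows into the fact table.
    INSERT INTO staging_fact_rejects
    SELECT s.*
    FROM staging_fact s
    WHERE NOT EXISTS (SELECT 1 FROM dim_account d
                      WHERE d.account_id = s.account_id);
    DELETE FROM staging_fact
    WHERE NOT EXISTS (SELECT 1 FROM dim_account d
                      WHERE d.account_id = staging_fact.account_id);
    With the orphans held back in a reject table, the cube can process under the default error configuration, and the rejects can be re-loaded once the dimension members arrive.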
    One of the big benefits of 5.X compared with 4.2 is that we are able to use the optimization schema and aggregations for cubes.
    With those orphan records present, it is not possible to use the optimization schema, and you are not able to create good aggregations in your cube.
    So my advice is to give all of this information to the customer and try to modify that package, instead of enabling an option that can cause many other issues.
    Sorin
