Sliced metrics with non-additive measures

Post Author: jmdupreez
CA Forum: Performance Management and Dashboards
I have a sliced metric for an actual, say X, with slice dimensions A, B, and C. This is an additive metric, so A + B + C rolls up into the global slice. Using the CI_PROBE universe, I took the aggregated values of X and computed a 7-day rolling average, say Y, for each day and each slice of X (A, B, C), so that I can define thresholds against these values. The issue is that these measures do not roll up, since they are averages. Does anyone know a better way of building this scenario without the global slice rolling up incorrectly? Thanks
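A common workaround, sketched below with made-up slice values, is to store the rolling numerator and denominator as two separate additive measures and derive the average only at query time; sums and counts roll up correctly into the global slice, while a stored average never would.

```python
# Sketch: keep the rolling aggregate additive by storing sum and count
# per slice, deriving the average only at display time. Slice names and
# daily values are illustrative, not from the original universe.

def rolling_parts(daily_values, window=7):
    """Return (sum, count) of the trailing window for each day."""
    out = []
    for i in range(len(daily_values)):
        win = daily_values[max(0, i - window + 1): i + 1]
        out.append((sum(win), len(win)))
    return out

# Per-slice daily actuals for X (illustrative numbers)
slices = {
    "A": [10, 12, 11, 13, 12, 14, 15],
    "B": [20, 19, 21, 22, 20, 23, 24],
    "C": [5, 6, 5, 7, 6, 8, 7],
}

parts = {k: rolling_parts(v) for k, v in slices.items()}

# The global slice rolls up correctly because sums and counts are additive:
day = -1  # last day
global_sum = sum(parts[k][day][0] for k in slices)
global_cnt = sum(parts[k][day][1] for k in slices)
global_avg = global_sum / global_cnt  # derived at query time, never stored
```

The stored measures (sum, count) stay additive across A, B, and C; only the final division is non-additive, and it is deferred to the report layer.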

Thanks for replying. Since the requirement needed different values at different levels, in the calc script I fixed on the dimensions I want to include, and it works great for a really small set of data. When I load the actual data file and try to calc, it comes up with the following Essbase error. I tried to increase the data cache size, but the error does not go away. Does anyone have any suggestion on why it works for a subset of data and what needs to be done to overcome the error? Thanks a lot.
-Nithya
[Fri Apr  5 09:59:18 2002]Local/CPTest2/Cptest2/nvenka/Serious Error(1008110) Unable to Allocate Memory. Refer to the Application Log for details.
[Fri Apr  5 09:59:18 2002]Local/CPTest2/Cptest2/nvenka/Info(1008112) Memory Allocation error codes: O/S error = [12], O/S return code = [0].
[Fri Apr  5 09:59:18 2002]Local/CPTest2/Cptest2/nvenka/Info(1008114) Allocation request for [1481760000] bytes of virtual memory. Percentage of memory in use is [28%].
[Fri Apr  5 09:59:18 2002]Local/CPTest2/Cptest2/nvenka/Info(1008116) Total swap space is [25652363264] bytes. Available swap space is [18546425856] bytes.
[Fri Apr  5 09:59:18 2002]Local/CPTest2/Cptest2/nvenka/Info(1008119) Memory page size is [4096] bytes.
[Fri Apr  5 09:59:18 2002]Local/CPTest2/Cptest2/nvenka/Error(1019003) Unable To Read Information From [db2olap/app/CPTest2/Cptest2/Cptest2.otn]
[Fri Apr  5 09:59:18 2002]Local/CPTest2/Cptest2/nvenka/Error(1007119) Loading New Outline for Database [Cptest2] Failed
[Fri Apr  5 09:59:19 2002]Local/CPTest2/Cptest2/nvenka/Error(1042012) Network error [32]: Cannot Send Data

Similar Messages

  • Cube with non-additive measures

    Dimension "Dim_Geography" has levels ZIP_CODE -> CITY -> STATE
    e.g. 12345 -> Schenectady -> NY
    Dimension "Dim_Store" has levels STORE -> STORE_BRAND -> STORE_LINE
    e.g. "Cathy's Fashions #1234" -> "Cathy's Fashions" -> "Clothing, Women's"
    Dimension "Dim_Calendar" has levels DAY -> WEEK -> MONTH -> QUARTER -> YEAR (assume 4-4-5 calendar to make roll-up work nicely)
    (No example given, it's just a calendar)
    Fact "Fact_Visits" has dimensionality <Dim_Geography, Dim_Store, Dim_Calendar> and measures Visitor_Count and Visitor_Value.
    The fact is based on a table called "TBL_VISITS" having columns (Geog_ID, Store_ID, Cal_ID, Person_ID, Visitor_Value, Visit_Timestamp), all of which are integers except the last which is a timestamp.
    Visitor Value is a number that was calculated earlier and represents a unit value of the visitor. Visitor Value changes daily for each Person_ID. When querying for any level of Calendar other than DAY, the correct Visitor Value is the last one seen for the Person_ID within a given week/month/quarter/year. If no level of Calendar is queried, the last Visitor Value is the correct one.
    A row in this table represents a single visit of the person to the store in that zip code. A given Person_ID can appear many times in a day in any given store in any given zip code. Assume we can tell visitors apart so that we can populate "Person_ID" properly.
    I can create a series of views that collapses TBL_VISITS to obtain the correct Visitor_Count and Visitor_Value at any combination of dimensionality. (Obviously, I can't create a view that collapses them all.)
    Example:
    CREATE VIEW VW_FACT_BY_DAY_STORE AS
      SELECT cal_id, store_id, SUM(v_visitor_value) AS visitor_value, SUM(v_visitor_count) AS visitor_count
        FROM (SELECT cal_id, store_id, person_id, SUM(visitor_value) / SUM(1) AS v_visitor_value, SUM(1) AS v_visitor_count
                    FROM tbl_visits
                  GROUP BY cal_id, store_id, person_id
                 ) v
    The more complicated views (i.e. views omitting or having higher calendar dimensionality) I can compose with analytic functions. The number of views is big (essentially, all possible combinations of dimension levels), but not unmanageable.
    A long winded setup for a bunch of short questions:
    1. How do I specify all that in AWM?
    2. Can AWM handle the outer joins for views that drop out one or more dimensions?
    3. Is there anything special I should do?
    4. Am I crazy for wanting to do this???
    THANKS !!!
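As a side note, the "last Visitor_Value seen for a Person_ID within a period" rule can be expressed with a window function instead of nested views. A minimal sketch against an in-memory SQLite table (illustrative rows; requires SQLite 3.25+ for window functions):

```python
import sqlite3

# Sketch of the "last Visitor_Value per Person_ID" logic using
# ROW_NUMBER(). Table and column names follow the post; rows are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl_visits (
  geog_id INT, store_id INT, cal_id INT,
  person_id INT, visitor_value INT, visit_timestamp TEXT
);
INSERT INTO tbl_visits VALUES
  (1, 10, 20080101, 7, 100, '2008-01-01 09:00'),
  (1, 10, 20080102, 7, 120, '2008-01-02 09:00'),
  (1, 10, 20080103, 7, 130, '2008-01-03 09:00'),
  (1, 10, 20080101, 8, 200, '2008-01-01 10:00');
""")

# Last value seen per person across the whole table (no calendar level):
rows = con.execute("""
SELECT person_id, visitor_value FROM (
  SELECT person_id, visitor_value,
         ROW_NUMBER() OVER (PARTITION BY person_id
                            ORDER BY visit_timestamp DESC) AS rn
  FROM tbl_visits
) WHERE rn = 1
ORDER BY person_id
""").fetchall()
```

Partitioning by the period key (week, month, quarter) instead of the whole table gives the per-period variant the same way.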


  • Non-additive measure

    Hi,
    I'm just learning OBIEE and have this logical modelling question.
    Let's say that I have a fact table whose grain is day/product/price: one row per day per product, with the price of the product on that day. Price is obviously a non-additive measure, and the nature of this measure is actually the problem: it can't be summed across time. Let's call this the price_history table.
    In BMM, I have a time dimension and product dimension and the above described fact table.
    Here's what I want to achieve:
    An answers request which will show me the price at the end of the period. For example, if I select Year from the time dimension, Product Name from the product dimension, and then price from the price_history table, I should get:
    2008 Product A $1.00
    2008 Product B $2.00
    2009 Product A $3.00
    2009 Product B $4.00
    where $1.00 is the price of Product A on 12/31/2008, $2.00 is the price of product B on 12/31/2008, $3 is the price of product A on 9/22/2009 (9/22/2009 is the latest row for Product A in price_history table), $4 is the price of product B on 9/22/2009 (9/22/2009 is the latest row for Product B in price_history table)
    Subsequently, I want to click on the year, and have OBIEE drill down into the data, and split it by Quarters:
    2008 Q1 Product A $1.25 (the price at the end of Q1 2008)
    2008 Q2 Product A $1.27 (the price at the end of Q2 2008)
    2009 Q4 Product B $8.00 (the most current price)
    I know this is a lot.....but does anybody have an idea on how one would go about modelling this situation???

    the problem is I want the measure to be additive across total product/prod category/sub prod category (as long as the sum involves prices for only ONE day). But across time I want to apply the LAST aggregation method. In other words, the SUM of all product prices with rollup to PROD CATEGORY/SUBCAT is a valid business query as long as the TIME dimension grain is a day. When the time dimension grain goes up month/quarter/year then I want the LAST price to be picked up for the time frame involved and then summed up to total product/prod category/subcategory.
    Going back to what you suggested: if I create a level-based measure (with LAST aggregation) at the product subcategory level, it will only pick up the PRICE for the LAST product in that subcategory, which is not what I want. I want subcategory to be the sum of all prices for the LAST day in the time period selected (the last day of the selected month, quarter, or year).
    I'm beginning to suspect that this is not possible because there's a flaw in data model.....Any help is appreciated.
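For what it's worth, the intended "LAST along time, then SUM along product" semantics can be sketched outside OBIEE like this (the products, subcategory, dates, and prices are all made up for illustration):

```python
# Sketch of "LAST along time, then SUM along product": for the chosen
# time grain, keep each product's price on its last available day, then
# sum those prices up to the subcategory. Data is illustrative.
from collections import defaultdict

# (date, product, subcategory, price)
price_history = [
    ("2008-03-30", "A", "Shoes", 1.00),
    ("2008-12-31", "A", "Shoes", 1.50),
    ("2008-12-31", "B", "Shoes", 2.00),
    ("2008-06-15", "B", "Shoes", 9.99),  # superseded by the 12-31 row
]

def subcat_price(rows, year):
    # product -> (last date seen in the year, price on that date, subcategory)
    last = {}
    for date, prod, subcat, price in rows:
        if date.startswith(year) and (prod not in last or date > last[prod][0]):
            last[prod] = (date, price, subcat)
    totals = defaultdict(float)
    for date, price, subcat in last.values():
        totals[subcat] += price  # SUM across products after LAST across time
    return dict(totals)
```

The key ordering is LAST first (per product), SUM second (across products), which is exactly what a single level-based LAST measure cannot express.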

  • "Semi/Non additive measure with" : "LastChild Across All Dimension" OR "LastChild Across Time and MAX across All Other Dimension"

    Hey All
    I am using SQL Server 2008 R2 Standard Edition, so please note whether a suggestion will work on this edition or not.
    Here is the situation: I have a cube with one fact table, "Positions". The measure I am querying is "AUM", and the dimensions users can query across are combinations of {"Portfolio", "Security", "Time"}.
    I receive the "AUM" values at the Portfolio+Security+Time level, but AUM is actually a portfolio-level value, so the sample data I receive looks like this:
    PORTFOLIO SECURITY TIME AUM
    A SEC1 JAN-1 100
    A SEC2 JAN-1 100
    B SEC1 JAN-1 200
    B SEC3 JAN-1 200
    A SEC1 JAN-2 300
    A SEC2 JAN-2 300
    B SEC1 JAN-2 400
    B SEC3 JAN-2 400
    So, as shown in the example above, we receive the same "AUM" value for a given portfolio and time, regardless of the security.
    And since AUM is a snapshot value, we cannot sum it across time either; if the user is looking across time, we need to pick the last values.
    Now when user wants to view this data from cube, two different ways s/he wants to look at it.
    option one: AUM at port+sec+time level. that is same as shown above in example.
    option two: AUM at portfolio level only,  (notice that this is across time so results are picking values for time=JAN-2)
    PORTFOLIO AUM
    A 300
    B 400
    GRAND TOTAL 700
    I have applied the "LastChild" aggregation property to the measure to get the "do not sum across time" behavior, but with this I also get "sum across everything except time", and hence my AUM values get summed across Portfolio and Security as well. I have also enabled Time Intelligence for MTD and YTD calculations.
    I tried to add SCOPE statement for Security before and after the "time calculations" yet I am not able to achieve expected behavior.
    here is the calculated member definition, and scope statement i have tried.
    CREATE MEMBER
    CURRENTCUBE.[MEASURES].[AUMNetBaseCurrency] AS
    [Measures].[AUMNetBaseCurrency_Raw_Max],
    FORMAT_STRING = "#,##0.00 ;( #,##0.00 )",
    VISIBLE = 1 , DISPLAY_FOLDER = 'Position';
    After this I have MTD and YTD time calculation for few measures including AUM.
    I tried placing the following SCOPE for AUM before and after the time calculations, but it makes no difference in the result; in fact it seems the scope has no effect at all, due to the "LastChild" aggregation property applied to the measure.
    SCOPE([MEASURES].[AUMNetBaseCurrency]);
    SCOPE([Security].[Symbol].[All]);
    THIS = MAX([Measures].[AUMNetBaseCurrency_Raw_Max]);
    END SCOPE;
    END SCOPE;
    So the options I am thinking of, though I do not have a clear idea how to implement them, are:
    0) Why are my Security SCOPE statements not showing any effect, and can we do something about it?
    1) Can we change the "SUM" behavior to "MAX"/"MIN" across all dimensions except time?
    2) Is there a way to implement "LastChild" across all dimensions instead of only across Time?
    3) Do I need to make any changes at the fact table level to achieve this? I saw one Q&A in this forum that suggested creating such a measure in a different measure group, but it gave no clarification.
    Any help would be appreciated.
    Thanks
    Anup Shah
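The target numbers from the sample data above (LastChild along Time, collapsing the duplicated per-security AUM via MAX, then SUM across portfolios) can be sketched procedurally. This is an illustration of the desired semantics, not SSAS code:

```python
# Sketch of the desired aggregation: take the last time member
# (LastChild along time), collapse duplicate rows per portfolio with
# MAX instead of SUM (AUM repeats per security), then sum across
# portfolios. Rows match the sample data in the post.
rows = [  # (portfolio, security, time, aum)
    ("A", "SEC1", "JAN-1", 100), ("A", "SEC2", "JAN-1", 100),
    ("B", "SEC1", "JAN-1", 200), ("B", "SEC3", "JAN-1", 200),
    ("A", "SEC1", "JAN-2", 300), ("A", "SEC2", "JAN-2", 300),
    ("B", "SEC1", "JAN-2", 400), ("B", "SEC3", "JAN-2", 400),
]

def portfolio_aum(rows, last_time):
    per_port = {}
    for port, sec, t, aum in rows:
        if t == last_time:  # LastChild along time
            # MAX across security removes the per-security duplication
            per_port[port] = max(per_port.get(port, 0), aum)
    return per_port, sum(per_port.values())  # SUM across portfolios

per_port, grand_total = portfolio_aum(rows, "JAN-2")
```

The order matters: the last time member is chosen first, duplicates are collapsed second, and only then are portfolios summed, which is why a plain LastChild measure (which sums the securities) overstates the total.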

    Hi,
    You will have to enable the cache in your nqsconfig.ini file under the section:
    # Query Result Cache Section
    [ CACHE ]
    ENABLE = YES;
    Cache seeding is also a mechanism through which the data gets refreshed in the cache.

  • Best Way To Handle Non Additive Measures

    I am developing a cube looking at total reservations created against a daily allocation by customer. So let's say I am looking at these measures at the lowest level and show location X for customer Y with an allocation of 100 and total reservations made of 50. Everything here is OK. My issue comes when I roll up to look at just the location: it sums the allocation, but I need this value to remain static at all levels. Is there a way to set an Accounts measure to never aggregate? I have tried a few different settings, such as NEVER SHARE and setting the member aggregation properties to <NONE> in my OLAP model, and it continues to aggregate at all levels. I have also tried adding this as a dimension, but because the value is numeric, and because I have a few additional allocation measures that can have the same values, I have issues with duplicates. As additional info, I am building this using EIS. It's entirely possible that I am approaching this the wrong way, so any feedback would be appreciated. I can provide more detail if needed.
    Thanks,
    Bob

    Why don't you put the Account that stores the allocated amount in its own special hierarchy? This hierarchy might have Gen2 parent called "Allocate" with a label only tag and below that a series of ~ tagged members underneath. Give it a goofy name so that there can be no question of the Account's purpose, e.g., Reservations - To Be Allocated.
    Your post doesn't indicate what tool you're using for input, but have a separate sheet/form/schedule for the input of the amount to be allocated, have the user enter that amount, save it, and have a calc/HBR launch on save that does the allocation.
    Then your second view of the data (form, report, etc.) doesn't include that Account and no-one's the wiser. You haven't lost original input data and since the forecaster looks at the "real" Accounts hierarchy except when inputting data to be allocated, he'll see the spread numbers only.
    The only thing I might add to this approach is a series of dedicated Location members that receive the allocated number but that's really a design preference more than anything.
    Regards,
    Cameron Lackpour

  • Obiee report across non additive fact tables

    I have two fact tables (f1, f2) joined to two conformed dimensions d1, d2.
    d1 and d2 have a simple hierarchy with total and detail levels.
    f1 --->d1
    f1---->d2
    f2---->d1
    f2---->d2
    Fact table layout:
    f1: c1 and c2 are additive measures; c3 and c4 are non-additive measures
    f2: col1 and col2 are additive measures; col3 and col4 are non-additive measures
    I have tried to construct a report with c1, c2, col1, col2 and it works fine.
    When I try c1, c2, c3, c4, col1, col4, the second fact table's columns are null.
    Can someone please guide me on how to resolve this issue?

    There is a rule that when you combine multiple fact tables (stars) in a single report, the measures should be aggregated; this is a best practice.
    So in your scenario, you have to create 2 logical dimensions from the fact tables.
    Create logical dimension F1D with physical source F1 and add only the non-aggregated measures (c3, c4); create a hierarchy with total and detail levels.
    Create logical dimension F2D with physical source F2 and add only the non-aggregated measures (col3, col4); create a hierarchy with total and detail levels.
    Then for fact table f1, set the levels F1D = Detail, F2D = Total.
    Then for fact table f2, set the levels F1D = Total, F2D = Detail.
    Update your presentation layer. Make sure these non-aggregated columns come from F1D and F2D instead of the logical fact tables.
    Check for consistency, then save and reload/restart the server. Check your reports; that's it.
    thanks
    Madan Thota

  • Additive/semi-additive measure HELP

    I have a cube with dimensions TIME, GEOGRAPHY AND SALES_AGENT, built on a fact table that has
    TIME_LEAVES,
    GEOGRAPHY_LEAVES,
    AGENT_LEAVES as FK's
    and measures
    SOLD_QUANTITY quantity of t-shirts sold
    UNIT_PRICE the agent's price for one t-shirt
    While SOLD_QUANTITY is fully additive, being SUM-aggregated in the cube for all three dimensions, UNIT_PRICE is totally non-additive, so I set the aggregation method to none for all dimensions.
    Afterwards I made a calculation in JDev BI Beans multiplying UNIT_PRICE * SOLD_QUANTITY to get an income figure, thinking it would do something like (UNIT_PRICE1 * SOLD_QUANTITY1) + (UNIT_PRICE2 * SOLD_QUANTITY2) + ...
    But the results go sky high, showing that it's not calculating the way I wanted.
    Do you have any idea? Is it possible to put a non-additive measure together with a fully additive one in the same cube? If so, how should I set the aggregation methods?

    It's a relational cube.
    Here is an example of a couple of rows in the fact table:
    TIME GEOGRAPHY AGENT SOLD_QUANTITY UNIT_PRICE
    ========================================================
    JAN04 LONDON AGENT1 240 20
    JAN04 YORK AGENT1 123 20
    JAN04 LONDON AGENT2 56 25
    While sold quantity is fully SUM-aggregable by all dimensions, the unit price is not (the cube shows NONE for all aggregations in OEM). Even so, the UNIT_PRICE adds itself up along all three dimensions.
    So instead of giving me a total income situation per agent of:
    AGENT1 (240*20)+(123*20)
    AGENT2 (56*25)
    it returns something like
    AGENT1 (240+123)*(20+20)
    AGENT2 56*25
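The discrepancy is easy to reproduce: the income must be computed per row before any aggregation; otherwise summed quantities get multiplied by summed prices. A sketch using the sample fact rows above:

```python
# Sketch: multiply at the leaf (per fact row), then aggregate, versus
# multiplying already-aggregated sums. Rows are the sample fact rows.
rows = [  # (time, geography, agent, sold_quantity, unit_price)
    ("JAN04", "LONDON", "AGENT1", 240, 20),
    ("JAN04", "YORK",   "AGENT1", 123, 20),
    ("JAN04", "LONDON", "AGENT2",  56, 25),
]

def income_per_agent(rows):
    """Correct: qty * price per row, summed afterwards."""
    out = {}
    for _, _, agent, qty, price in rows:
        out[agent] = out.get(agent, 0) + qty * price
    return out

def wrong_income(rows, agent):
    """What the cube effectively did: SUM(qty) * SUM(price)."""
    qty = sum(q for _, _, a, q, p in rows if a == agent)
    price = sum(p for _, _, a, q, p in rows if a == agent)
    return qty * price
```

This is why income needs to be a fact-table (or leaf-level) column rather than a calculation on aggregated measures: (240*20)+(123*20) = 7260, while (240+123)*(20+20) = 14520.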

  • Issue with non calculated column in a fact table

    Hi All,
    With 3 facts (Fact1, Fact2, Fact3) and 2 conformed dimensions, my joins work fine in Criteria when I include all calculated columns from the facts. If I try to include a non-calculated column from Fact1 (which is a NUMBER data type), columns from Fact2 and Fact3 show null values. I know it is not recommended to include dimension columns in a fact; does OBIEE not support NUMBER-type non-calculated columns as well? Is there any workaround so that I can bring in my non-calculated column from the fact and still get results for the other fact columns? I am on OBIEE 11.1.1.7.
    Let me know if Iam not clear.
    Your help is much Appreciated.
    Thanks,
    Vineela.

    I would like to add 2 fields to my fact tables: LOAD ID (populated by a sequence during each load) and LOAD DATE.
    As these fields are not related to any reporting dimensions, is it still possible to add them in OWB? The fact wizard always asks for a foreign key to a dimension...
    Duncan,
    If you want to add non-dimensional attributes to a fact using OWB, you can create additional measures for them and use those as attributes.
    Igor

  • Agg tables with non-Sum aggregation type

    Situation: We have a fact called Ending Cash Balance that is non-additive across time. Each month, it shows the cash on hand at the end of the month. So our aggregation method is "Last" for the Time dimension (i.e. the value for Q1 = March, value for year = Dec), and "Sum" for all Other dimensions.
    Using the Aggregation Wizard, we've built an agg table that contains the correct values for Ending Cash Balance (abbrev. ECB), rolled up along all levels properly.
    However, our Answers query will NOT use that agg table when querying ECB. In fact, our logical table contains ECB (non-additive along Time) and Capital Spending (additive along time, so the agg method is Sum). When we query Capital Spending by month, the query hits the agg table. When we query ECB by month, the query refuses to hit the agg table. The only difference between the two fact columns is the aggregation method along the Time dimension.
    The agg table is perfectly good, but the query will not use it for the fact that has a "non-sum" aggregation method. Any ideas?

    Mark, OBIEE repositories from version 10.1.3.x allow a "Data is Dense" flag to be set in the aggregation tab of the measure (fact) in question. Please check whether this lets the last-along-time calculation occur based on results taken from the aggregate table instead of the base-level fact table. Read the help on this option as well.
    With this option turned on, I expect measure to get aggregated as follows:
    Base level Fact: Day level
    Aggregate Table: Month level (say)
    Query at Month level => should fetch results from Agg table (sum along other dimensions as reqd)
    Query at Quarter level => should fetch results from Agg table (sum along all other dimensions except time as reqd) and choose the last month of Quarter for showing value at Quarter level.
    Also experiment with setting the aggregation method in Answers (not the rpd) to "Server (default)" when using a pivot. Sometimes the results are correct from the database to the rpd (mid tier), but the front-end aggregation spoils the report.
    HTH
    Shankar
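The "last along time from a month-level aggregate" behavior described above can be sketched as follows (the months, regions, and amounts are illustrative, and this models the expected query result, not OBIEE internals):

```python
# Sketch: a quarter's Ending Cash Balance is the last month's value,
# summed across the other dimensions, read from a month-level aggregate.
monthly_agg = [  # (month, region, ending_cash_balance)
    ("2015-01", "EAST", 10), ("2015-02", "EAST", 12), ("2015-03", "EAST", 15),
    ("2015-01", "WEST", 20), ("2015-02", "WEST", 18), ("2015-03", "WEST", 25),
]

def quarter_ecb(rows, months_in_quarter):
    # "Last" along time: pick the latest month present in the quarter
    last_month = max(m for m, _, _ in rows if m in months_in_quarter)
    # "Sum" along all other dimensions for that month
    return sum(v for m, _, v in rows if m == last_month)

q1 = quarter_ecb(monthly_agg, {"2015-01", "2015-02", "2015-03"})
```

Note the asymmetry that trips up the aggregate navigator: the time dimension uses LAST while every other dimension uses SUM, so a single SUM-only agg table match is not enough.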

  • Sliced metrics

    Post Author: R32CA
    CA Forum: Analytics
    I have seen this subject mentioned before but don't see any resolutions. Can anybody tell me why I would be getting a "Missing Prompts" error message when I try to create a dimension?
    One reply did say that syntax is missing in the measure's 'where clause' of the universe (created in Designer). Can anybody give me an example of a 'where clause' that has the proper syntax?
    The end result I need is a sliced metric that I can drill down on by quarter to see the total charges per department. But I can't get the dimension to work, or the sliced metric, and therefore can't get the drill-down feature on the pie chart to work. Any information on whether this is possible, or how to do it, would be greatly appreciated.

    Post Author: Ferhat
    CA Forum: Analytics
    You are going about it exactly right, but you must create a new metric with the dimension,
    and then you should create an analytic. Take care...

  • EtreCheck version: 2.1.5 (108) Report generated 4 January 2015 14:29:26 GMT  Click the [Support] links for help with non-Apple products. Click the [Details] links for more information about that line. Click the [Adware] links for help removing adware

    My Mac is very slow and applications take a long time to load, especially Safari and iTunes. Please help. I have run the EtreCheck report and these are the results.
    Thanks Pat
    EtreCheck version: 2.1.5 (108)
    Report generated 4 January 2015 14:29:26 GMT
    Click the [Support] links for help with non-Apple products.
    Click the [Details] links for more information about that line.
    Click the [Adware] links for help removing adware.
    Hardware Information: ℹ️
      iMac (21.5-inch, Mid 2011) (Verified)
      iMac - model: iMac12,1
      1 2.7 GHz Intel Core i5 CPU: 4-core
      4 GB RAM Upgradeable
      BANK 0/DIMM0
      2 GB DDR3 1333 MHz ok
      BANK 1/DIMM0
      2 GB DDR3 1333 MHz ok
      BANK 0/DIMM1
      empty empty empty empty
      BANK 1/DIMM1
      empty empty empty empty
      Bluetooth: Old - Handoff/Airdrop2 not supported
      Wireless:  en1: 802.11 a/b/g/n
    Video Information: ℹ️
      AMD Radeon HD 6770M - VRAM: 512 MB
      iMac 1920 x 1080
    System Software: ℹ️
      OS X 10.10.1 (14B25) - Uptime: 0:32:50
    Disk Information: ℹ️
      ST31000528AS disk0 : (1 TB)
      EFI (disk0s1) <not mounted> : 210 MB
      Macintosh HD (disk0s2) / : 999.35 GB (717.51 GB free)
      Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
      OPTIARC DVD RW AD-5690H 
    USB Information: ℹ️
      Apple Inc. FaceTime HD Camera (Built-in)
      Seagate Expansion Desk 2 TB
      EFI (disk1s1) <not mounted> : 210 MB
      Seagate Expansion Drive (disk1s2) /Volumes/Seagate Expansion Drive : 2.00 TB (1.66 TB free)
      Apple Inc. BRCM2046 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple Inc. iPhone
      Apple Internal Memory Card Reader
      Apple Computer, Inc. IR Receiver
    Thunderbolt Information: ℹ️
      Apple Inc. thunderbolt_bus
    Gatekeeper: ℹ️
      Mac App Store and identified developers
    Kernel Extensions: ℹ️
      /Library/Application Support/Avast/components/fileshield/unsigned
      [loaded] com.avast.AvastFileShield (2.1.0 - SDK 10.9) [Support]
      /Library/Application Support/Avast/components/proxy/unsigned
      [loaded] com.avast.PacketForwarder (2.0 - SDK 10.9) [Support]
    Problem System Launch Agents: ℹ️
      [failed] com.apple.syncservices.SyncServer.plist
    Launch Agents: ℹ️
      [loaded] com.avast.userinit.plist [Support]
      [running] com.epson.Epson_Low_Ink_Reminder.launcher.plist [Support]
      [loaded] com.epson.esua.launcher.plist [Support]
      [running] com.epson.eventmanager.agent.plist [Support]
      [loaded] com.oracle.java.Java-Updater.plist [Support]
      [running] com.trusteer.rapport.rapportd.plist [Support]
    Launch Daemons: ℹ️
      [loaded] com.adobe.fpsaud.plist [Support]
      [loaded] com.avast.init.plist [Support]
      [loaded] com.avast.uninstall.plist [Support]
      [failed] com.avast.update.plist [Support]
      [loaded] com.microsoft.office.licensing.helper.plist [Support]
      [loaded] com.oracle.java.Helper-Tool.plist [Support]
      [running] com.trusteer.rooks.rooksd.plist [Support]
    User Launch Agents: ℹ️
      [loaded] com.adobe.ARM.[...].plist [Support]
      [invalid?] com.avast.home.userinit.plist [Support]
      [running] com.microsoft.LaunchAgent.SyncServicesAgent.plist [Support]
    User Login Items: ℹ️
      iTunesHelper ApplicationHidden (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
    Internet Plug-ins: ℹ️
      FlashPlayer-10.6: Version: 16.0.0.235 - SDK 10.6 [Support]
      Default Browser: Version: 600 - SDK 10.10
      AdobePDFViewerNPAPI: Version: 11.0.07 - SDK 10.6 [Support]
      AdobePDFViewer: Version: 11.0.07 - SDK 10.6 [Support]
      DivXBrowserPlugin: Version: 2.2 [Support]
      Flash Player: Version: 16.0.0.235 - SDK 10.6 [Support]
      OVSHelper: Version: 1.1 [Support]
      QuickTime Plugin: Version: 7.7.3
      JavaAppletPlugin: Version: Java 8 Update 25 Check version
    Safari Extensions: ℹ️
      wrc [Installed]
    3rd Party Preference Panes: ℹ️
      DivX  [Support]
      Flash Player  [Support]
      Flip4Mac WMV  [Support]
      GoToMyPC Preferences  [Support]
      Java  [Support]
      Trusteer Endpoint Protection  [Support]
    Time Machine: ℹ️
      Skip System Files: NO
      Auto backup: YES
      Volumes being backed up:
      Macintosh HD: Disk size: 999.35 GB Disk used: 281.84 GB
      Destinations:
      Seagate Expansion Drive [Local]
      Total size: 2.00 TB
      Total number of backups: 78
      Oldest backup: 2013-07-28 18:09:06 +0000
      Last backup: 2015-01-04 14:29:38 +0000
      Size of backup disk: Adequate
      Backup size 2.00 TB > (Disk used 281.84 GB X 3)
    Top Processes by CPU: ℹ️
          2% WindowServer
          1% mds
          0% fontd
          0% mds_stores
          0% com.avast.daemon
    Top Processes by Memory: ℹ️
      120 MB Safari
      112 MB com.avast.daemon
      94 MB com.apple.WebKit.WebContent
      56 MB spindump
      52 MB mds_stores
    Virtual Memory Information: ℹ️
      479 MB Free RAM
      1.56 GB Active RAM
      1.11 GB Inactive RAM
      904 MB Wired RAM
      5.37 GB Page-ins
      75 MB Page-outs
    Diagnostics Information: ℹ️
      Jan 4, 2015, 01:57:18 PM Self test - passed
      Standard users cannot read /Library/Logs/DiagnosticReports.
      Run as an administrator account to see more information.

    patbythesea wrote:
    Can I assume that with my Mac I do not need any additional virus protection software?  If I do, what should I use?
    See my Mac Malware Guide for help on protecting yourself from malware. You generally don't need anti-virus software.
    (Fair disclosure: I may receive compensation from links to my sites, TheSafeMac.com and AdwareMedic.com, in the form of buttons allowing for donations. Donations are not required to use my site or software.)

  • Archive Stock Cube with non-cumulative nature

    Hi All,
    I have been trying to archive the Inventory (Stock) InfoCube, as we have more than a couple of hundred million records in the cube and queries take a long time to execute.
    I know how to archive regular cubes and DSOs, but I couldn't find any helpful documentation on how to archive an InfoCube with a non-cumulative nature or containing the marker update.
    Can anyone please post a link to a document?
    BTW, we are using BI 7.0 SP15
    Thanks,
    Dhaval

    Hi,
    There are two ways of archiving data from a non-cumulative InfoCube - time slot archiving and archiving with selection characteristics.
    Time slot archiving (recommended)
    For a non-cumulative InfoCube, you must choose time slot archiving on the time reference characteristic as the archiving method. When you create an archiving object for a non-cumulative InfoCube, this characteristic is also suggested by the system.
    Only archive non-cumulative changes up to a specific point in the past. This reduces the volume of data in the InfoCube. The marker (= end non-cumulative) remains unchanged.
    In the variants maintenance screen, in the archive administration, under Non-Cumulatives, choose the Movements Only option.
    You are only able to archive the marker if you also archive all of the non-cumulative changes. Subsequently, the contents of the non-cumulative InfoCube are archived completely. In the variants maintenance screen, in the archive administration, under Non-Cumulatives, choose the Movements and End Non-cumulative option.
    You must archive the non-cumulative changes (for all times) first, and then archive the marker in a separate archiving run, so that the non-cumulative changes and the marker are stored separately in the archive.
    The advantage of time slot archiving is that archiving runs can be carried out periodically using a single variant with a relative time restriction. The alternative archiving method described below does not have this advantage.
    Archiving with selection characteristics
    In some cases, archiving based only on time slots does not have the required effect of reducing the volume of data in the InfoCube. This is the case, for example, if the non-cumulative bearing objects are subject to high fluctuations, meaning that data is constantly being loaded for new objects and old objects are made obsolete. With time slot archiving, the end non-cumulative remains for all objects until all of the data in the InfoCube is finally archived, even if all the non-cumulative changes for an object have been archived already.
    In this case, you want to archive and delete completely any objects that are no longer active in the InfoCube. This means archiving the marker as well. For other objects that are still active, you only need to archive the non-cumulative changes from the past, which means the marker is retained.
    When you define the archiving properties for a non-cumulative InfoCube, do not therefore choose the time slot archiving method. Instead, select individually the characteristics that are relevant to restricting archiving runs. The time reference characteristic should also be included in this selection whenever possible.
    Using this archiving method, for each archiving run, you need to create selection conditions in a variant. In the variant for the archiving run, in addition to the selection conditions, you can also specify for the selected characteristics whether or not non-cumulative changes and marker data is archived. If you choose to archive marker data, be aware that the time selection contains the maximum value for time.
    Please refer to the below link to archive the data for Stock Cube:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/3f/c219fe9a46194aa66d55254fdab182/frameset.htm
    Hope this helps.
    Regards
    Tanuj

  • Non-additive aggregation - custom-defined aggregation possible in BW?

    I have the following problem:
    there is a key figure that is non-additive relative to one characteristic, e.g. we need the third minimum as the aggregation over a time characteristic (there are 250 values for that characteristic).
    Is there a way to create user-defined (exception) aggregations (like variance or mean value) via ABAP coding?
    Message was edited by: Michael Walesch

    Does your database support analytic functions? Last and First are analytic functions. If your database does not support them, BI has to prepare selects with subqueries, and this could slow down the response time.
    Begoña
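    For reference, the "third minimum" exception aggregation the question asks about amounts to taking the third-smallest key figure value per characteristic combination. A minimal sketch in Python (illustrative only, not ABAP or BW code; the group keys and values are made up):

    ```python
    from collections import defaultdict

    def third_minimum(values):
        """Return the third-smallest value, or None if fewer than 3 values."""
        ordered = sorted(values)
        return ordered[2] if len(ordered) >= 3 else None

    # key figure values keyed by a characteristic value (hypothetical data)
    rows = [("A", 7), ("A", 3), ("A", 9), ("A", 5), ("B", 2), ("B", 8)]
    groups = defaultdict(list)
    for key, value in rows:
        groups[key].append(value)

    result = {key: third_minimum(vals) for key, vals in groups.items()}
    print(result)  # {'A': 7, 'B': None}
    ```

    In SQL terms this is what analytic/window functions (e.g. ranking rows per group and picking rank 3) give you directly, which is the point of the reply above: if the database lacks them, the same result needs subqueries and is slower.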

  • ttf-hunky-fonts PKGBUILD: Bitstream Vera with non-English chars

    Saw it, built it, posting it :-)
    I admit it - not all characters are perfect. But this is the first version, and the author promised to fix it soon. Still, if you want Bitstream Vera TTF fonts (which originally lack non-English characters) with additional characters ("Baltic, Central European, South European and other languages, including Maori, Welsh and Esperanto"), you can try it. It can co-exist with other TTF fonts (including the original Bitstream Vera). The font names are Hunky Serif and Hunky Sans.
    ttf-hunky-fonts PKGBUILD:
    pkgname=ttf-hunky-fonts
    pkgver=0.1.0
    pkgrel=1
    pkgdesc="Bitstream Vera based TTF fonts with non-English characters"
    url="http://www.yoper.com/ariszlo/hunky.html"
    depends=('x-server')
    install=ttf-hunky-fonts.install
    source=(http://www.yoper.com/ariszlo/packages/SOURCES/hunkyfonts-$pkgver.tar.gz)
    md5sums=('082b8220b2112830f7e418451a2c2f78')
    build() {
      install -d -m755 $startdir/pkg/usr/X11R6/lib/X11/fonts/TTF
      install -m644 $startdir/src/hunkyfonts-$pkgver/TTF/*.ttf \
        $startdir/pkg/usr/X11R6/lib/X11/fonts/TTF
    }

    ttf-hunky-fonts.install:
    # arg 1: the new package version
    post_install() {
      echo "Rebuilding font cache..."
      /usr/bin/fc-cache
      echo "...done."
    }

    # arg 1: the new package version
    # arg 2: the old package version
    post_upgrade() {
      echo "Rebuilding font cache..."
      /usr/bin/fc-cache
      echo "...done."
    }

    # arg 1: the old package version
    pre_remove() {
      /bin/true
    }

    # arg 1: the old package version
    post_remove() {
      echo "Rebuilding font cache..."
      /usr/bin/fc-cache
      echo "...done."
    }

    op=$1
    shift
    $op $*


  • Specific request on how to model semi-additive measures

    Hi,
    I have a fact table with a "quantity" measure which is related to an "UoM" (unit of measure) dimension.
    I cannot sum quantities that are not the same UoM.
    Is there a way to instruct OBIEE (11g) not to sum quantities with different UoMs?
    Note that all other dimensions in the fact table are additive.
    Thanks in advance.

    Of course I can. But the question here is different: how can I prevent users from aggregating the quantity measure without grouping by UoM? That is, a user can put the quantity measure in a BI analysis and "forget" to include the UoM attribute needed to group the data correctly - the result is that they aggregate pieces, kilos, etc. all together. That is what is happening in a real case with my users; they would like to have quantities summed up correctly.
    Does anyone have a suggestion on how to achieve this?
    Thanks a lot,
    Chris
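    The failure mode described above can be sketched as follows (plain Python, illustrative data only; this is not OBIEE logic, just the rule a semi-additive "quantity per UoM" measure must obey): summing is only meaningful within one UoM, so a guarded aggregation refuses when several UoMs are mixed.

    ```python
    from collections import defaultdict

    # hypothetical fact rows: (product, uom, quantity)
    rows = [("bolts", "PCS", 100), ("bolts", "PCS", 50), ("flour", "KG", 2.5)]

    def guarded_sum(fact_rows):
        """Sum quantities only if all rows share one UoM; else return None."""
        uoms = {uom for _, uom, _ in fact_rows}
        if len(uoms) > 1:
            return None  # mixed units: refuse to aggregate
        return sum(qty for _, _, qty in fact_rows)

    print(guarded_sum(rows))      # None: PCS and KG are mixed
    print(guarded_sum(rows[:2]))  # 150: both rows are PCS

    # grouping by UoM always yields meaningful sums
    by_uom = defaultdict(float)
    for _, uom, qty in rows:
        by_uom[uom] += qty
    print(dict(by_uom))           # {'PCS': 150.0, 'KG': 2.5}
    ```

    In a BI tool the same idea is usually expressed as a measure that returns an error value (or NULL) when more than one UoM falls into the aggregation grain, forcing the user to add the UoM attribute to the analysis.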
