OLAP considerations

So I was asked to look into OLAP and into generating cubes for a reporting tool. I came across documents at both ends of the difficulty spectrum. Could someone tell me if I'm overlooking anything, because my initial attempts at generating cubes support the "simple" end.
The "difficult" instructions led me to believe that in 10g there must be some default setup already in place, because the ROLLUP and CUBE functions already work.
Are there scalability issues that need looking into, or are ROLLUP & CUBE just variations on GROUP BY, and so the same issues apply?
Simple instructions:
http://oraclebizint.wordpress.com/2007/08/30/oracle-10g11g-olap-cube-and-rollup/
Difficult instructions:
http://www.dbazine.com/datawarehouse/dw-articles/rittman7
My test data & queries:
create table olap_test
(product     varchar2(20),
 color       varchar2(10),
 num_ordered number(6));
insert into olap_test values ('gum','blue',25);
insert into olap_test values ('gum','green',30);
insert into olap_test values ('chocolate','dark',1);
insert into olap_test values ('soda','yellow',1);
insert into olap_test values ('soda','yellow',1);
insert into olap_test values ('soda','white',1);
select product, color , sum (num_ordered)
from olap_test
group by rollup (product),
         rollup (product,color);
select product, color, sum (num_ordered)
from olap_test
group by cube (product),
cube (product,color);
Thanks,
--=Chuck

There is an OLAP option to the database. For an introduction to that option you may want to check out this information:
[OLAP and Data Mining|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/bi.htm#sthref1954]
Brief Description:
The OLAP Option
The OLAP option of the Oracle Database includes the following components:
* The OLAP analytic engine, which supports the selection and rapid calculation of multidimensional data within the Oracle Database.
* Analytic workspaces, which store data in a multidimensional format where it can be manipulated by the OLAP engine.
* Analytic Workspace Manager, a graphical user interface for creating and maintaining analytic workspaces.
* OLAP Worksheet, an interactive environment for executing OLAP DML, the data definition and manipulation language for interacting with analytic workspaces.
* Interfaces for developing OLAP applications in SQL and Java.
* OLAP Catalog, the metadata repository which represents a star schema as a logical cube. The OLAP Catalog enables OLAP applications to access relational data.
The ROLLUP and CUBE options are detailed in:
[SQL for Aggregation in Data Warehouses|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/aggreg.htm#i1007462]
My understanding is that the OLAP option would be used in cases where you have significant quantities of data, or where the questions are not known at the time of coding (which rules out materialized views).
However, in my experience so far, the capabilities identified in [SQL for Aggregation in Data Warehouses|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/aggreg.htm#i1007462] have been able to answer all the questions I have needed.
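To see how ROLLUP relates to plain GROUP BY, it helps to label the subtotal rows. A minimal sketch against the olap_test table from the question (GROUPING and GROUPING SETS are standard Oracle aggregation features; the column aliases are my own):
select product, color,
       sum(num_ordered)  as total_ordered,
       grouping(product) as is_product_subtotal,
       grouping(color)   as is_color_subtotal
from olap_test
group by grouping sets ((product, color), (product), ());
ROLLUP(product, color) is shorthand for exactly these three grouping sets, so the same scalability considerations as for GROUP BY apply; the engine simply emits additional subtotal rows.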

Similar Messages

  • What's the Difference Between OLAP and OLTP?

    HI,
    What's the difference between OLAP and OLTP, and which one is best?
    -Arun.M.D

    Hi,
    The big difference when designing for OLAP versus OLTP is rooted in the basics of how the tables are going to be used. I'll discuss OLTP versus OLAP in the context of designing dimensional data warehouses. However, keep in mind there are more architectural components that make up a mature, best-practices data warehouse than just the dimensional data warehouse.
    Corporate Information Factory, 2nd Edition by W. H. Inmon, Claudia Imhoff, Ryan Sousa
    Building the Data Warehouse, 2nd Edition by W. H. Inmon
    With OLTP, the tables are designed to facilitate fast inserting, updating and deleting of rows of information with each logical unit of work. The database design is highly normalized, usually at least to 3NF. Each logical unit of work in an online application will have a relatively small scope with regard to the number of tables that are referenced and/or updated. Also, the online application itself handles the majority of the work of joining data to facilitate the screen functions, which means the user doesn't have to worry about traversing large data relationship paths. Expect a heavy dose of lookup/reference tables and much focus on referential integrity between foreign keys. The physical design of the database needs to take into consideration the need for inserting rows when deciding on physical space settings. A good book for getting a solid base understanding of modeling for OLTP is The Data Modeling Handbook: A Best-Practice Approach to Building Quality Data Models by Michael C. Reingruber and William W. Gregory.
    Example: Let's say we have a purchase order management system. We need to be able to take orders for our customers, and we need to be able to sell many items on each order. We need to capture the store that sold the item, the customer that bought the item (and where we need to ship things and where to bill), and we need to make sure that we pull from the valid store_items to get the correct item number, description and price. Our OLTP data model will contain a CUSTOMER_MASTER, a CUSTOMER_ADDRESS_MASTER, a STORE_MASTER, an ITEM_MASTER, an ITEM_PRICE_MASTER, a PURCHASE_ORDER_MASTER and a PURCHASE_ORDER_LINE_ITEM table. Then we might have a series of M:M relationships; for example, an ITEM might have a different price for specific time periods at specific stores.
    With OLAP, the tables are designed to facilitate easy access to information. Today's OLAP tools make the job of developing a query very easy. However, you still want to minimize the extensiveness of the relational model in an OLAP application. Users don't have the will and means to learn how to work through a complex maze of table relationships. So you'll design your tables with a high degree of denormalization. The most prevalent design scheme for OLAP is the star schema, popularized by Ralph Kimball. The star schema has a FACT table that contains the elements of data that are used arithmetically (counting, summing, averaging, etc.). The FACT table is surrounded by lookup tables called dimensions. Each dimension table provides a reference to those things that you want to analyze by. A good book to understand how to design OLAP solutions is The Data Warehouse Toolkit: Practical Techniques for Building Dimensional Data Warehouses by Ralph Kimball.
    Example: Let's say we want to see some key measures about purchases. We want to know how many items were purchased and the sales amount, by what kind of customer, across which stores. The FACT table will contain a column for Qty_Purchased and Purchase_Amount. The DIMENSION tables will include ITEM_DESC (contains the item_id & description), CUSTOMER_TYPE, STORE (store_id & store name), and TIME (contains calendar information such as the date, the month_end_date, quarter_end_date, day_of_week, etc.). A sketch of this schema follows below.
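    To make the star-schema example concrete, here is a minimal DDL sketch (the table and column names are my own, following the example above, not a prescribed design):
      create table time_dim (
        date_id          date primary key,
        month_end_date   date,
        quarter_end_date date,
        day_of_week      varchar2(10));
      create table store_dim (
        store_id   number primary key,
        store_name varchar2(50));
      create table item_dim (
        item_id   number primary key,
        item_desc varchar2(100));
      create table customer_type_dim (
        customer_type_id number primary key,
        customer_type    varchar2(30));
      -- The fact table holds the additive measures plus one foreign key
      -- per dimension; queries join it to whichever dimensions they
      -- analyze by and aggregate the measures.
      create table purchase_fact (
        date_id          date   references time_dim,
        store_id         number references store_dim,
        item_id          number references item_dim,
        customer_type_id number references customer_type_dim,
        qty_purchased    number,
        purchase_amount  number(12,2));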

  • What is the olap in bw ?

    hi experts,
    today I received an architecture document which describes the architecture of the DW system.
    the architecture shows:
    some data sources -> ETL -> DW (SAP BW) -> ETL -> OLAP -> some report display tools
    what makes me confused is what the OLAP component is. I have now created some cubes; what should I do to implement the OLAP component?
    I think OLAP is just the drill-down and filter functions. Can someone tell me what OLAP is? Your help is appreciated, and I will assign points if it helps!

    Hi,
    SAP Business Information Warehouse uses OLAP technology for the analysis of data held in the data warehouse. Online Analytical Processing (OLAP) characterizes BW as a decision support system. OLAP allows decision-makers to quickly and interactively analyze multi-dimensionally modeled data appropriate to business considerations.
    InfoProviders allow this data to be viewed. As data is stored in InfoCubes optimized for reading, InfoCubes and MultiProviders on InfoCubes should be the preferred InfoProviders.
    Check out this link for "OLAP Functions and Services "
    http://help.sap.com/saphelp_nw04/helpdata/en/7c/c3e60666cd9147bb6242dc6500cd77/frameset.htm
    Assign points if found useful,
    Regards,
    Mani.

  • OLAP runtime very high - urgent

    Hello Experts,
    I am executing a query containing almost 30 RKFs restricted by hierarchy nodes and 30 CKFs on those RKFs.
    Also, I am using Alt UOM in the KF conversions.
    My ST03 analysis indicates that %OLAP runtime is approx. 60% of the total runtime.
    I request you to suggest a few tips which can considerably reduce the OLAP runtime without removing RKFs and CKFs from the query.
    Should I use the OLAP cache or not? What settings should I apply in the Query Monitor (RSRT)?
    How do I test the performance improvement?
    What are the steps to set up precalculation of reports? (However, this is the last option.)
    I read one of the blogs from Prakash; he suggested deactivating the OLAP cache when RKFs and CKFs are present in the query. Is that OK?
    Thanks,
    Swati

    I would go with the OLAP cache, as per your requirement.
    OLAP Cache should be used whenever possible. However, for queries that are executed only once (or very rarely) or for very small queries (e.g., DB time < 0.10 sec), the cache overhead could lead to slightly worse performance.
    Check if the Shared Memory size (rsdb/esm/buffersize_kb) is bigger than the OLAP Cache size. By default, the Shared Memory size is very small. SAP recommends an initial size of 100MB.
    For more information on the OLAP Cache, see SAP Note 456068.
    If all frontend users are authorized to create queries, note that the OLAP Cache for a particular query gets invalidated each time the query is changed and re-activated and it might be useless eventually.
    Queries containing virtual characteristics/key figures do not use the OLAP Cache by default. Because the OLAP Cache cannot track other database tables that might be read within the customer exits, and hence cannot invalidate itself when those tables change, this default setting guarantees data consistency. However, you can explicitly enforce the usage of the OLAP Cache if you are sure that there can't be a difference between the cached data and the original data produced by the virtual characteristics/key figures.
    If you use virtual characteristics/key figures and want to use the OLAP cache, first make sure that the data is ALWAYS consistent, i.e. that the customer exit is not using frequently updated tables. Keep in mind that the Cache is not invalidated if the data in these tables changes! You can then enforce using the Cache as described in SAPnote 623768 (Extended Cache Functionality).
    You can use OLAP trace (RSRTRACE) to warm up the OLAP cache.
    Hope it Helps
    Chetan
    @CP..

  • Warming up the OLAP cache

    Hi,
    I would like to schedule execution of some queries in order to put the results in the OLAP cache for fast use.
    Each user runs the query with a very restricted selection: one period, one node in the CostCenter hierarchy, and one currency type! All the selections are obligatory and for single values.
    I created a "super" query with the same characteristics, all in the rows or columns, with non-mandatory variables, ...
    I scheduled the "super" query and it created an entry in the OLAP cache.
    When I now run the production query (another query on the same cube, for one node, one period, one currency), the system doesn't use the OLAP cache, but creates a new line in the OLAP cache.
    It is impossible to schedule every combination.
    Is there a way to warm up the cache with a different query?
    Should the seeding be done with the Broadcaster or the Reporting Agent (in NW2004s)? I think this should be the same?!
    Any suggestion to fill the cache for my situation?
    Thanks, Tom

    The OLAP cache is at the query level, so to warm up the cache, you must run the actual query that the users will run, not just a similar "super" query.
    SDN has some documentation on effectively using the global cache which would be good to review.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9f4a452b-0301-0010-8ca6-ef25a095834a
    The other key thing to understand is how the variable setting "Can be changed during query navigation" works. If this setting is NOT set on your variables, then in order for a query to access the data in the cache, the query must have been run previously with the exact same variable values, as the variable values get saved as part of the cached info.
    Selecting the "Can be changed during navigation" variable setting changes the way the system considers the variable - it behaves as a filter. This has some implications for the way prompts for variables appear to the user, so there are user impacts. In BEx, a variable is normally presented for input when you first run the query and then whenever you refresh it; but with the setting changed to "Can be changed...", the variable prompt is presented the first time the query is run, but NOT when a refresh is done - it now behaves as if you added a filter to the query.
    So if you change all the variables (or create new ones) used by the query to "Can be changed...", then run your query through the Reporting Agent or Information Broadcasting wide open, or with restrictions that encompass all your users' query executions, the subsequent user executions will use the global OLAP cache.

  • Will Oracle OLAP handle our case(s)?

    We are about to build a cube to handle roughly the following dimensions and facts:
    15 dimensions ranging from a couple of members to 40,000+ members.
    A fact table holding 200,000,000+ rows
    So my question is: Does anybody have a sense of whether OLAP has a chance of handling this data? We are pretty certain that the data will be sparse.
    A second item relates to whether Oracle OLAP cubes give us the ability to compute what we refer to as "Industry" data. We serve a number of companies and we compute metrics that apply to their data. To allow these companies to see how they are doing against the other companies, we provide the metrics for every other company; these metrics are considered Industry. So my question is: Do OLAP cubes have any structure or mechanism that allows these metrics to be computed within the same cube, or do we have to create a separate cube to hold Industry metrics?
    Thanks,
    Thomas

    Thomas,
    I cannot advise you for or against based on the small amount of information I have. I will not deny that at 15 dimensions you are at the upper limit of what you can achieve using the current (11.1) OLAP technology, but I have seen cubes of this size built and queried, so I know it is possible.
    The answer would depend on many things: hardware, query tools, expectations for query and build performance, and whether you have a viable alternative technology (which will determine how hard you will work to get past problems). It even depends on your project time frames, since release 11.2 is currently in beta and will, we hope, handle greater volumes of data than 11.1 or 10g.
    One important factor is how you partition your cube. At what level do you load the data (e.g. DAY or WEEK)? What is your partition level (e.g. MONTH or QUARTER)? A partition that loads, say, 10 million rows, is going to be much easier to build than a partition with 50 million rows. To decide this you need to know where your users will typically issue queries, since queries that cross partitions (e.g. ask for data at the YEAR level but are partitioned by WEEK) are significantly slower than those that do not cross partitions (e.g. ask for data at WEEK when you are partitioned by MONTH).
    Cube-based MVs can offer advantages for cubes of this size even if you define a single, 15-dimensional cube. One nice trick is to aggregate the cube only up to the partition level. Suppose, for example, that you load data at the DAY level and partition by QUARTER. Then you would make QUARTER the top level in your dimension instead of YEAR or ALL_YEARS. The trick is to make YEAR an ordinary attribute of QUARTER so that it appears in the GROUP BY clause of the cube MV. Queries that ask for YEAR will still rewrite against the cube, but the QUARTERs will be summed up to YEAR using standard SQL. The result will generally be faster than asking the cube to do the same calculation. This technique allows you to lower your partition level (so that there are fewer rows per partition) without sacrificing query performance, as sketched below.
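    A hedged sketch of that trick (all object names here are invented): the cube MV's defining query carries YEAR as an attribute of QUARTER, so a year-level query still rewrites and the QUARTER rows are folded up with plain SQL:
      -- Shape of the cube MV's defining query (simplified):
      --   select time_quarter, time_year, ..., sum(sales)
      --   from   ...
      --   group by time_quarter, time_year, ...
      -- A year-level query such as:
      select t.time_year, sum(f.sales)
      from   sales_fact f
      join   time_dim   t on t.day_id = f.day_id
      group  by t.time_year;
      -- can rewrite against the quarter-level MV, with the QUARTER rows
      -- summed up to YEAR by ordinary SQL aggregation.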
    Cube-based MVs also allow you to raise the lowest level of any dimension (to PRODUCT_TYPE instead of PRODUCT_SKU say). Queries that go against the upper levels (PRODUCT_TYPE and above) will rewrite against the cube, and those that go against the detail level (PRODUCT_SKU) will go direct to the base tables. This compromise is worth making if your users only rarely query against the detail level and are willing to accept slower response times when they do. It can reduce the size of the cube considerably and is definitely worth considering.
    David

  • OLAP differences between 10g and 9i

    Hi, is there any documentation on new/changed features in OLAP 10g vs. OLAP 9i? I'm currently installing all of the 10g products (now that the new BI Beans for 10g is out), but couldn't find a "what's new" on the OLAP side.
    Thanks!
    Scott

    There used to be an OLAP 10g data sheet up on OTN that described the new features. I'll ping them and ask them to re-post it and the other missing OLAP 10g documentation and white papers.
    I pasted it below:
    Partitioned Variables
    The multidimensional engine provides direct support for partitioned variables. This support for partitioning presents many opportunities for both enhancing manageability and supporting large multidimensional data sets.
    Three partitioning methods are supported:
    • Range partitioning allows data to be partitioned based on a range of dimension members. For example, one partition might contain time dimension members that are less than '13', another that are less than '25', and so on.
    • List partitioning allows data to be partitioned based on a list of specific dimension members. For example, one partition might contain dimension members <'JAN02','FEB02','MAR02'> and another partition might contain members <'JAN03','FEB03','MAR03'>.
    • CONCAT partitioning partitions data according to the dimension members that belong to a CONCAT dimension.
    With each partitioning method, the multidimensional engine creates separate variables to store data. To the application, it appears that all data is stored in a single variable.
    Scalability is enhanced in a number of different ways:
    • Data can be partitioned across time, thus providing the ability to store more historical data in the analytic workspace without affecting performance or manageability.
    • Calculations can be easily limited to a subset of dimension members, or they can be parallelized. For example, aggregations, allocations and other calculations can be performed on time periods within a particular partition.
    • Data loading can be parallelized.
    • When partitioned by the logical model, for example, by level of summarization, the definition of the variable can be adjusted to account for changes in sparsity between detail data and summary data.
    • Disaster recovery tasks can be performed on subsets of data and can be parallelized.
    • Partitioned variables can be partitioned across different data files and disks to minimize I/O bottlenecks.
    Enhanced Storage Model
    The storage model is enhanced to support the placement of objects in the analytic workspaces into specific rows of the AW$ table. Objects can be further partitioned by segment size to allow for large objects. The AW$ table can then be partitioned across multiple data files.
    The obvious benefit of the enhanced storage model is that database administrators have complete control over how data is distributed across data files and can therefore optimize I/O for data and data access patterns.
    Multi-Writer Mode
    The multidimensional engine supports a multi-writer attachment mode, which allows an analytic workspace to be modified simultaneously by several sessions. In multi-writer mode, users can simultaneously modify the same analytic workspace in a controlled manner by specifying the attachment mode (read-only or read-write) for individual variables, relations, valuesets and dimensions.
    The MULTI attach mode provides the opportunity to parallelize any number of activities in the analytic workspace. Some examples follow:
    • Using separate simultaneous sessions to load data into different variables can parallelize data loading tasks. For example, different sessions could be used to load data into SALES and COST variables. When combined with partitioned variables, different sessions could load into each partition in parallel.
    • Separate sessions can be used to aggregate separate variables or partitions of a variable.
    • Separate sessions can be used to solve models, allocations and virtually any other calculation within the analytic workspace as long as the calculation is directed to different variables or partitions of a variable.
    Parallel Update
    The OLAP DML UPDATE command runs automatically in parallel on partitioned variables, thus optimizing performance of this command on servers with multiple processors. Significant improvements will be seen in cases where large volumes of data are updated (such as a data load or aggregation) and partitioned variables are used.
    Aggregation from Formulas
    Oracle OLAP 10g allows formulas to be used as a source of data to the AGGREGATE command. This eliminates the need to calculate and store data at the detail level, yet still retains the ability to aggregate to summary levels. The benefit is that the multidimensional engine presents large volumes of derived information from relatively little stored data.
    Optimizations to Composite Dimension Indexing
    New 64-bit B-Tree+ indexes and optimizations to the process of synchronizing composite dimensions to base dimensions support excellent query response times with very large composite dimensions (for example, composite dimensions in excess of 1 billion members).
    Certified with Real Application Clusters and Grid Computing
    Real Application Clusters and Oracle Grid Computing provide a database platform of virtually limitless computing capacity and scalability. The multidimensional engine and data types of the OLAP option, being part of the Oracle Database, have been tested with Real Application Clusters and Oracle Grid Computing. This provides Oracle OLAP the capability to support very large user communities and data sets.
    Wider Relational Filters to Multidimensional Data Types
    OLAP 10g optimizes a wider range of SQL predicates when selecting from multidimensional data types. This is accomplished by applying SQL filters before the data is converted to a row set by OLAP_TABLE. As a result, the risk of pushing large volumes of data through OLAP_TABLE is minimized, and applications need not be as concerned with optimizing SQL for selecting from OLAP_TABLE. The net result is that a wider variety of SQL applications can be used with the OLAP option without special considerations.
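    As a rough sketch of what querying an analytic workspace through SQL looks like (the workspace name, objects, and limit map below are invented for illustration; see the OLAP_TABLE documentation for the exact limit-map syntax):
      select product, time, sales
      from   table(olap_table(
               'mysales_aw duration session',  -- attach the analytic workspace
               '',                             -- derive the row type automatically
               '',                             -- no extra OLAP DML command
               'measure sales from aw_sales
                dimension product from aw_product
                dimension time    from aw_time'))
      where  time = '2004';                    -- with 10g, filters like this are
                                               -- applied before row conversion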
    Support of SQL Model Clause
    Oracle Database 10g introduces OLAP-like calculations that are expressed with a SQL MODEL clause, which is similar to what the OLAP community commonly refers to as custom dimension members. A custom dimension member is a virtual member whose value is calculated at runtime.
    The SQL MODEL clause provides an additional method for defining certain types of calculations against multidimensional data types, and the SQL interface to multidimensional data types has been optimized for SQL models. Optimization occurs by having the multidimensional engine completely bypass OLAP_TABLE as data is being returned.
    As a result, the processing of SQL with the MODEL clause is highly efficient against multidimensional data types. In many cases, performance of MODEL with multidimensional data types exceeds that of the same SQL against relational tables. This provides SQL based applications with both new analytic features and performance advantages.
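    A minimal sketch of the kind of "custom member" calculation meant here, using standard SQL MODEL syntax (the table and values are invented):
      select product, year, sales
      from   sales_history
      model
        partition by (product)
        dimension by (year)
        measures (sales)
        rules (
          sales[2006] = sales[2005] * 1.10  -- a virtual member computed at run time
        );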
    Query Rewrite to Views over Multidimensional Data Types
    In Oracle Database 10g a new feature, query equivalence, allows query rewrite to be used with views. With query equivalence, the DBA indicates to the database what SQL could have been used to create the view even if the view was created in some other way. For example, if the application likes to emit SQL with SUM … GROUP BY but the view was created with entirely different SQL, the DBA could indicate that the view is equivalent to SUM … GROUP BY.
    This feature of the database is extremely useful with the OLAP option since SQL access is always through views. This provides the DBA and application with benefits similar to those of materialized views – simplified maintenance and improved query performance.
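    The SQL-level mechanism behind this is the DBMS_ADVANCED_REWRITE package; a hedged sketch (the statement texts and object names are invented):
      begin
        sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
          name             => 'sales_cube_equiv',
          source_stmt      => 'select product, sum(amount) from sales_fact group by product',
          destination_stmt => 'select product, amount_sum from sales_cube_view',
          validate         => false,
          rewrite_mode     => 'general');
      end;
      /
    With validate => false, sessions generally need QUERY_REWRITE_INTEGRITY set to trusted (or stale_tolerated) for the rewrite to fire.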
    Automatic Runtime Generation of Abstract Data Types
    Abstract data types are used by object technology of the Oracle Database to define the relational columns for data that is returned from a non-relational data source. In the case of the OLAP option, abstract data types describe data being selected from analytic workspaces in terms of relational columns.
    Previously it was a requirement that abstract data types be created as part of the administrative process of enabling analytic workspaces for query by SQL. To provide applications and database administrators with additional flexibility in the administration of SQL access to analytic workspaces, Oracle OLAP 10g supports automatic runtime generation of abstract data types as part of the query process.
    With the addition of this new feature, it is now possible to query analytic workspaces without requiring the DBA to predefine either abstract data types or views.

  • Discoverer report from OLAP

    Hi,
    We are trying to run a Discoverer Plus OLAP report through a URL. The help says that we need to use the workbook ID, but in the workbook properties we cannot find an identifier field. The URL we are forming is shown below. Please help.
    http://itlinuxdevblade18.hq.emirates.com:7778/discoverer/plus?%20%20%20&us=S243191%20%20&pw=S243191%20%20&db=itlinuxdevblade18:1521:cvu10gd%20&lm=OLAP&wb=root\Users\S243191\A1

    As regards Forms 6i, see:
    Re: Calling a Disco Report from Forms
    Greetings....
    Sim

  • Creation of Universe based on Hana OLAP connection

    I have to create a universe in IDT 4.1 SP2 based on a SAP HANA OLAP connection.
    How can I access a SAP HANA view that was created using an OLAP connection in BusinessObjects Explorer? It does not allow creating a universe on top of a SAP HANA OLAP connection and gives an error:
    SAP BusinessObjects query and reporting applications can directly connect to OLAP SAP HANA connections. No universe is required, only a published OLAP SAP HANA connection.
    Thanks

    Hi Jyothy,
    Refer to the link below on how to create a universe on an Analytic View/Calculation View using a relational connection in IDT:
    http://www.sapanalyticsguru.com/index.php/sap-bobi/31-universe-creation-on-hana-view-using-information-design-tool

  • URGENT : Help needed for OLAP tools selection

    Hi Gurus,
    Environment :
    Oracle 10g
    A .NET web application with the backend an Oracle 10g server embedded with the OLAP server.
    Application - CRM
    Features required - OLAP, OLAP reporting, drill-down graphical analysis, US map view location-specific queries, ad-hoc analysis.
    I have to make a decision on OLAP tool selection against the above client requirements.
    * Please let me know if anybody has worked on the above combination for the required features and hit any issues (if any)?
    * Please go through the Oracle options below per feature and let me know whether there is any issue with .NET and IIS server, or Oracle 10g OLAP compatibility - especially for the following points from the details below -
    4. OLAP Reporting Tool :
    6. Web deployment of Cubes :
    8. Graphing :
    9. Geographical Map View based query :
    1. Extraction, Transformation and Loading (ETL) Tool : Oracle Business Intelligence Warehouse Builder, tightly integrated with Database Server.
    2. Analysis Tool (multidimensional data analysis using cubes) : Oracle Business Intelligence Discoverer
    3. Ad-hoc query and analysis by end user : Oracle Business Intelligence Spreadsheet Add-in; OLAP DML; regular SQL with the OLAP_TABLE function.
    4. OLAP Reporting Tool : Oracle Reports 10g.
    5. Multidimensional data (cube) query language : OLAP DML; regular SQL with the OLAP_TABLE function.
    6. Web deployment of cubes : Available
    7. Cube data export to xls, html formats : Available
    8. Graphing : Oracle Reports 10g Graph Wizard for graphical analysis.
    9. Geographical map view based query : Oracle Locator: location-enabling every Oracle database.
    Please send me the relevant URLs in support of / issues with the above features.
    TIA,
    Sheilesh


  • Refresh of Webi in BI Launch Pad : "Failed to connect to the olap source"

    Dear all,
    I have a Web Intelligence document based on a BEx Query (BICS connectivity via an OLAP Connection configured with SSO)
    Behaviour :
    in Web Intelligence Rich Client (2 tier and 3 tier) : refresh is OK
    in BI Launch Pad : refresh is not OK :
    What can be the reason? In my opinion SSO to BW is working fine because:
    refresh in Web Intelligence Rich Client is OK
    refresh of an Analysis Application (Design Studio) based on the same OLAP connection is working fine in Design Studio and in BI Launch Pad
    Do I miss something at server level ?
    In the log file of the APS running the DSLBridge Service, I found :
    |BFCA5F821928450BBBE73F2614F437D5134|2013 03 07 16:59:43.174|+0100|Error|Error|>>|E| |aps_SIA_I_DEV.APS_DSL| 3712|796189|Transport:Shared-8074/34| |1542|0|2|0|Webi SDK.CorbaServerImpl.doProcess()|IDIRBLOCKAP010V:2960:207.7025:1|webiserver_SIA_I_DEV.WebIntelligenceProcessingServer.openDocumentMDP|localhost:12164:11488.145650:1|.doIt|IDIRBLOCKAP010V:3712:796189.292728:1|Ci8H06XT2kMZvL7KIxC6cuQ1b23|||||||||||Exception caught in SL Service: Cannot connect to the olap source
    com.businessobjects.mds.services.helpers.OlapUniverseHelperException$ConnectionToOlapSourceFailedException: Cannot connect to the olap source
    at com.sap.sl.edp.dataprovider.olap.AbstractOlapDataProvider.createOlapClient(AbstractOlapDataProvider.java:490)
    at com.sap.sl.edp.dataprovider.olap.DirectOlapAccessDataProvider.generateProtoUniverse(DirectOlapAccessDataProvider.java:216)
    at com.sap.sl.edp.dataprovider.olap.DirectOlapAccessDataProviderBuilder.generateProtoUniverse(DirectOlapAccessDataProviderBuilder.java:91)
    at com.businessobjects.dsl.services.universe.impl.UniverseServiceImpl.provideProtoUniverse(UniverseServiceImpl.java:291)
    at com.businessobjects.dsl.services.universe.impl.UniverseServiceImpl.getProtoUniverse(UniverseServiceImpl.java:181)
    at com.businessobjects.dsl.services.datasource.impl.DataSourceServiceImpl.getDataSourceHeader(DataSourceServiceImpl.java:182)
    at com.businessobjects.dsl.services.datasource.impl.DataSourceServiceImpl.getDataSourceHeader(DataSourceServiceImpl.java:130)
    at com.sap.sl.proxyconsumption.services.datasourceservice.DataSourceServiceImpl.getDataSourceHeader(DataSourceServiceImpl.java:516)
    at com.sap.sl.proxyconsumption.services.datasourceservice.DataSourceServiceImpl.getDataSourceHeader(DataSourceServiceImpl.java:480)
    at com.sap.sl.proxyconsumption.protobuf.rpc.DatasourceRpc$dataSource.callMethod(DatasourceRpc.java:207)
    at com.sap.sl.proxyconsumption.services.server.DSLBridge.callService(DSLBridge.java:236)
    at com.sap.sl.proxyconsumption.services.server.DSLBridge.doIt(DSLBridge.java:161)
    at com.businessobjects.cdz_ext.slproxybridge.corba.ServerServant.doIt(ServerServant.java:119)
    at sun.reflect.GeneratedMethodAccessor168.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.businessobjects.framework.servers.platform.adapters.ebus.orb.CommonTransportInterceptor.invokeHelper(CommonTransportInterceptor.java:118)
    at com.businessobjects.framework.servers.platform.adapters.ebus.orb.CommonTransportInterceptor.invoke(CommonTransportInterceptor.java:87)
    at com.businessobjects.framework.servers.common.proxy.cglib.MethodInterceptorChain.intercept(MethodInterceptorChain.java:136)
    at com.crystaldecisions.enterprise.ocaframework.idl.OCA.OCAcdz.slproxybridge.serverPOA$$EnhancerByCGLIB$$66b9c148.doIt(<generated>)
    at com.crystaldecisions.enterprise.ocaframework.idl.OCA.OCAcdz.slproxybridge.serverPOA._OB_op_doIt(serverPOA.java:107)
    at com.crystaldecisions.enterprise.ocaframework.idl.OCA.OCAcdz.slproxybridge.serverPOA._invoke(serverPOA.java:83)
    at com.crystaldecisions.thirdparty.com.ooc.OBPortableServer.ServantDispatcher.dispatch(ServantDispatcher.java:234)
    at com.crystaldecisions.thirdparty.com.ooc.OBPortableServer.POA_impl._do_OB_dispatch(POA_impl.java:1977)
    at com.crystaldecisions.thirdparty.com.ooc.OBPortableServer.POA_impl._OB_dispatch(POA_impl.java:1913)
    at com.crystaldecisions.thirdparty.com.ooc.OB.DispatchRequest_impl.invoke(DispatchRequest_impl.java:75)
    at com.businessobjects.framework.servers.platform.adapters.ebus.orb.ThreadPoolDispatchStrategy$Dispatcher.run(ThreadPoolDispatchStrategy.java:271)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:743)
    Caused by: com.businessobjects.mds.olap.OlapException: [Internal] SSO token or User password is empty.
    at com.businessobjects.mds.olap.protocol.sap.jco.JCOConnection.connect(JCOConnection.java:259)
    at com.businessobjects.mds.olap.protocol.bics.internal.BICSClientImpl.initialize(BICSClientImpl.java:407)
    at com.businessobjects.mds.olap.OlapClientFactory.Create(OlapClientFactory.java:93)
    at com.businessobjects.mds.services.helpers.OlapClientHelper.createOlapClient(OlapClientHelper.java:409)
    at com.businessobjects.mds.services.helpers.OlapClientHelper.createNamedOlapClient(OlapClientHelper.java:361)
    at com.businessobjects.mds.services.helpers.OlapClientHelper.createNamedOlapClient(OlapClientHelper.java:234)
    at com.sap.sl.edp.dataprovider.olap.AbstractOlapDataProvider.createOlapClient(AbstractOlapDataProvider.java:487)
    ... 32 more
    Many thanks for your help
    Hans

    Many thanks for the update, but it is a little bit troubling.
    On which SP/patch level are your SAP BI BusinessObjects system and SAP BW running?

  • PXI 2527 & PXI 4071 -Questions about EMF considerations for high accuracy measurements and EMF calibration schemes?

    Hi!
    I need to perform an in-depth analysis of the overall system accuracy for a proposed system. I'm well underway using the extensive documentation in the start-menu National Instruments\NI-DMM\ and ..\NI-Switch\ Documentation folders...
    While typing the question, I think I partially answered myself while cross-referencing NI documents... However a couple of questions remain:
    If I connect a DMM to a 2-by-X arranged switch/mux, each DMM probe will see twice the listed internal "Differential thermal EMF" at a typical value of 2.5uV and a max value of less than 12uV (per relay). So would the total effect on the DMM uncertainty caused by the switch EMF be 2*2.5uV = 5uV? Or should these be added as RSS: sqrt(2.5^2+2.5^2), since you cannot know whether the two relays have the same EMF?
    Is there anything that can be done to characterize or account for this EMF (software cal, etc?)?
    For example, assuming the following:
    * Instruments and standards are powered on for several hours to allow thermal stability inside of the rack and enclosures
    * temperature in room outside of rack is constant
    Is there a reliable way of measuring/zeroing out the effect of system EMF? Could this be done by applying a high-quality, low-EMF short at the point where the DUT would normally be located, followed by a series of long-aperture voltage average measurements at the lowest DMM range, where the end result (say (+)8.9....uV) could be taken as a system calibration constant accurate to the specs of the DMM?
    What would the accuracy of the 4071 DMM be? Can I calculate it as follows: 8.9uV +-700.16nV using 90 days, and 8.9uV +- 700.16nV + 150nV due to "Additional noise error", assuming an integration time of 1 (aperture) for ease of reading the chart, and a multiplier of 15 for the 100mV range? (Is this equivalent to averaging a reading of 1 aperture 100 times?)
    So, given the above assumptions, would it be correct to say that I could characterize the system EMF to within  8.5uV+- [700.16nV (DMM cal data) + 0.025ppm*15 (RMS noise, assuming aperture time of 100*100ms = 10s)] = +-[700.16nV+37.5nV] = +- 737.66nV? Or should the ppm accuracy uncertainties be RSS as such: 8.5uV +- sqrt[700.16nV^2 + 37.5nV^2] = 8.5uV +-701.16nV??
    As is evident from my above line of thought, I am not at all sure how to properly sum the uncertainties (I think you always do RSS for uncertainties from different sources?) and, more importantly, how to read and use the graph/table in the NI 4071 Specifications.pdf on page 3. What exactly does it entail to have an integration time larger than 1? Should I adjust the aperture time, or would it be more accurate to just leave the aperture at default (100ms for the current range) and average multiple readings, say average 10 to get a 10x aperture equivalent?
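    On the summation question specifically, a hedged worked example (assuming the two relay EMFs are independent, which is the usual treatment for uncorrelated uncertainty sources):
      u_c = \sqrt{u_1^2 + u_2^2} = \sqrt{(2.5\,\mu\mathrm{V})^2 + (2.5\,\mu\mathrm{V})^2} \approx 3.5\,\mu\mathrm{V}
    That is, the RSS of two uncorrelated 2.5uV contributions is about 3.5uV, whereas the linear sum (5uV) is the worst case and is only appropriate if the two EMFs could be fully correlated; the <12uV max values are typically summed linearly as a guaranteed bound.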
    The below text includes what was going to be the post until I think I answered myself. I left it in as it is relevant to the problem above and includes what I hope to be correct statements. If you are tired of reading now, just stop, if you are bored, feel free to comment on the below section as well.
    The problem I have is one of fully understanding part of this documentation. In particular, since a relay consists of (at least) 2 dissimilar metal junctions (as mentioned in the NI Switch help\Fundamentals\General Switching Considerations\Thermal EMF and Offset Voltage section), and because of the thermocouple effect (Seebeck voltage), it seems that there would be an offset voltage generated inside each of the relays at the point of the junction. It refers to the "Thermocouple Measurements" section (in the same help document) for further details, but this is where my confusion starts to creep in.
    In equation (1) it gives the expression for determining E_EMF which for my application is what I care about, I think (see below for details on my application).
    What confuses me is this: If my goal is to, as accurately as possible, determine the overall uncertainty in a system consisting of a DMM and a Switch module, do I use the "Differential thermal EMF" as found in the switch data-sheet, or do I need to try and estimate temperatures in the switch and use the equation?
    *MY answer to my own question:
    By carefully re-reading the example in the thermocouple section of the switch help, I realized that they calculate 2 EMFs: one for the internal switch, calculated as 2.5uV (given in the spec sheet of the switch as the typical value), and one for the actual thermocouple. I say actual because I think my initial confusion stems from the fact that the documentation talks about the relay/switch junctions as thermocouples in one section, and then talks about an external "probe" thermocouple in the next, and I got them confused.
    As such, if I can ensure low temperatures inside the switch at the location of the junctions (by adequate ventilation and powering down latching relays), I should be able to use 2.5uV as my EMF from the switch module, or to be conservative, <12uV max (from data sheet of 2527 again).
    I guess now I have a hard time believing the 2.5uV typical value listed. They say the junctions in the relays are typically an iron-nickel alloy against a copper alloy. Well, those combinations are not explicitly listed in the documentation table for Seebeck coefficients, but even a very small value, like 0.3uV/C, adds up to 7.5uV at 25degC. I'm thinking maybe the table values in the NI documentation refer to the Seebeck values at 25C?
    Project Engineer
    LabVIEW 2009
    Run LabVIEW on WinXP and Vista system.
    Used LabVIEW since May 2005
    Certifications: CLD and CPI certified
    Currently employed.

    Seebeck EMF needs temperature gradients; in your relays you hopefully have low temperature gradients. However, in a switching contact you can have all kinds of diffusion and 'funny' effects; keeping them at the same temperature is the best you can do.
    Since you work with a multiplexer and with TCs, you need a good cold junction (for serious calibrations, at 0°C), and that is the good place for your short to measure the zero EMF. Another good test is to loop the 'hot junction' back to the cold junction and observe the residual EMF. Touching (or heating/cooling) the TC loop gives another number for the uncertainty calculation: the inhomogeneity of the TC material itself.
    A good source for TC knowledge:
    Manual on the use of thermocouples in temperature measurement,
    ASTM PCN: 28-012093-40,
    ISBN 0-8031-1466-4 
    (Page 1): 'Regardless of how many facts are presented herein and regardless of the percentage retained, all will be for naught unless one simple important fact is kept firmly in mind. The thermocouple reports only what it "feels." This may or may not be the temperature of interest.'
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy

  • SSO to SAP works but no OLAP Connection per SSO Auth

    Hi experts,
    we have set up SSO for the authentication of SAP BW and SAP BO and used the portal integration. We are using SAP BO 4.1 SP4 and SAP BW 7.4.
    We log in via the NetWeaver Portal and then go to the SAP BO where the reports are stored.
    The SSO login works fine, but the OLAP connection to the SAP BW system does not fly. I have tried to create a connection via IDT. This works.
    After that I created a WebI report in the applet, chose a BEx connection and retrieved the error:
    error.openSapBwBrowsingSessionFailed
    Then I tried the WebI Rich Client and received the message "Unknown Error in SL Service", and I do not even receive the list of possible BEx connections.
    We are using SNC for the user authentication in SAP BW.
    And now it is getting very strange:
    When I go to the IDT tool, create the connection again, republish it to the repository and try to connect again via the WebI applet, I do not get the error message again.
    Can you please assist, as our business users cannot publish their OLAP connection.
    Regards,
    Markus

    The new BusinessObjects version (BI 4.0) comes with a new authentication technology to create a trust relationship between a non-SAP user and the SAP data source. How do you determine the correct method to be used?
    When using legacy .unv universes (XI 3.1 technology) = SNC
    When using .unx environments (BI 4.0 new semantic layer) = STS
    When you try to connect via a BICS connection or IDT, it is important to use the STS methodology.
    Check the link below for the configuration.
    It is a wiki link with a "How to setup SSO against SAP BW in SBO BI4.0 for LDAP users"; follow Raunak Kumar's suggestion when you configure SNC and STS.
    http://wiki.sdn.sap.com/wiki/display/BOBJ/How+to+setup+SSO+against+SAP+BW+in+SBO+BI4.0+for+LDAP+users

  • How to create olap cube using Named Query Table in Data source View

    I created an OLAP cube using existing tables and it's working fine, but when I use a named query table with a relationship to another named query table, it's not working. So please give me some deeper clarification on OLAP cubes for better understanding.
    Thanks

    Hi Pawan,
    What do you mean by "it's not working"? As Kamath said, please post the detailed error message so that we can analyze further.
    In the Data Source View of a cube, we can define a named query. In a named query, you can specify an SQL expression to select rows and columns returned from one or more tables in one or more data sources. A named query is like any other table in a data source view (DSV) with rows and relationships, except that the named query is based on an expression.
    Reference: Define Named Queries in a Data Source View (Analysis Services)
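    As a rough illustration (the table and column names are invented): a named query is just a SELECT, and for a DSV relationship to another table or named query to work, the key columns it joins on must be exposed in the SELECT list:
      select o.order_id,
             o.customer_id,   -- expose the key the DSV relationship joins on
             o.order_date,
             c.customer_type
      from   dbo.orders o
      join   dbo.customers c
             on c.customer_id = o.customer_id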
    Regards,
    Charlie Liao
    TechNet Community Support

  • No data fields are available in the OLAP cube

    I was able to access data in Excel yesterday, but now all the cubes that I have created cannot be accessed by Excel any more, and after authenticating and selecting the cell for the pivot table I get this error:
    No data fields are available in the OLAP cube
    I just hope this is some process in HANA that needs restarting and not some endemic MDX problem. If so, where do I look?
    Has anyone seen this?
    BTW, shouldn't there be a single logon for the connection? I have to enter credentials twice: once when creating the connection and again when opening the pivot table.

    Something/someone has locked my user ID while I was recreating the steps in Excel (I don't think I have tried 'wrong' passwords, but I did try a ['wrong' user|http://misiorek.com/h/GMSnap206%202011-11-27.jpg]).
    Here are the steps:
    1. Open [Excel|http://misiorek.com/h/GMSnap202%202011-11-27.jpg].
    2. Enter [connections|http://misiorek.com/h/GMSnap203%202011-11-27.jpg].
    3. Select [HANA cube|http://misiorek.com/h/GMSnap204%202011-11-27.jpg].
    4. Select [Pivot cell|http://misiorek.com/h/GMSnap205%202011-11-27.jpg].
    5. Enter credentials (see above).
    I'm also not sure why I see these security messages:
    [locked|http://misiorek.com/h/GMSnap207%202011-11-27.jpg], Microsoft [warning|http://misiorek.com/h/GMSnap208%202011-11-27.jpg], HANA [warning|http://misiorek.com/h/GMSnap209%202011-11-27.jpg], HANA db [warning|http://misiorek.com/h/GMSnap210%202011-11-27.jpg]
