OLAP vs OBIEE Cubes vs Data Warehouse Cubes

Good afternoon,
Could someone please tell me the difference between the cubes created using Oracle's Analytic Workspace Manager, the cubes created in OBIEE, and the ones created in Oracle Warehouse Builder? Do these all use Oracle's OLAP functionality, or are they different things?
Thank you.

I think by data warehouse cubes you mean the traditional relational ROLAP cubes.
The OLAP engine was merged into the Oracle database a long time ago (starting with version 9.2), so multidimensional MOLAP cubes can be created inside an Oracle-based DW as part of an aggregation strategy (in addition to materialized views, or instead of them) to improve query performance and simplify calculations.
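As a concrete illustration of the materialized-view half of that strategy, here is a minimal sketch of a pre-aggregating MV; the fact and dimension table and column names are hypothetical:

-- Hypothetical star schema; the MV pre-aggregates the fact table so the
-- optimizer can rewrite matching GROUP BY queries against the summary.
CREATE MATERIALIZED VIEW sales_by_month_mv
  BUILD IMMEDIATE
  REFRESH FORCE ON DEMAND    -- refresh after each warehouse load
  ENABLE QUERY REWRITE       -- allow transparent rewrite of detail queries
AS
SELECT t.calendar_month,
       p.product_line,
       SUM(f.sales_amount) AS sales_amount
FROM   sales_fact f
JOIN   time_dim    t ON t.time_key    = f.time_key
JOIN   product_dim p ON p.product_key = f.product_key
GROUP  BY t.calendar_month, p.product_line;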
Some other points.
(1). OWB can create both ROLAP cubes and MOLAP cubes. Even ODI has knowledge modules to create MOLAP (i.e., Oracle-OLAP) cubes.
(2). OBIEE cannot create Oracle-OLAP cubes in the database. I think there is some new functionality to create Essbase cubes through OBIEE, but there is no out-of-the-box functionality to create Oracle-OLAP cubes through OBIEE.
(3). The process to create Oracle-OLAP cubes using AWM and then import them into the RPD is very simple. Starting with OBIEE 11.1.1.5, OBIEE understands Oracle OLAP metadata in the standard Oracle database dictionary. So when Oracle-OLAP cubes and dimensions are queried, OBIEE generates physical queries using the OLAP_TABLE function; that is how data is retrieved from the OLAP engine into the relational engine and then into the BI server.
(4). Oracle OLAP cubes are always created in an analytic workspace, which is stored in a table prefixed with AW$. So one quick check is to look at the tables in your schema and see whether any table has the AW$ prefix.
You can also query the Oracle-OLAP metadata to see what (if any) analytic workspaces, MOLAP cubes and MOLAP dimensions exist in your database. Refer to Oracle OLAP Dictionary Views at http://docs.oracle.com/cd/E11882_01/olap.112/e17123/admin.htm#i1006325
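Both checks can be run from SQL; a minimal sketch using the standard dictionary views documented at that link:

-- Quick check: analytic workspace storage tables in your own schema
SELECT table_name
FROM   user_tables
WHERE  table_name LIKE 'AW$%';

-- List the analytic workspaces, OLAP cubes and OLAP dimensions visible to you
SELECT owner, aw_name        FROM all_aws;
SELECT owner, cube_name      FROM all_cubes;
SELECT owner, dimension_name FROM all_cube_dimensions;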

Similar Messages

  • Data Warehouse Cubes Not Processing

    I am with a customer now who is having data warehouse problems, the main issues being:
    - ETL jobs run fine (except MPSync which finishes with only 179/180 jobs complete)
    - Cube processes are stuck in a RUNNING loop, they never complete or fail out and all show a last run time 1/10 and next run time 1/11
    I have scoured the internet to find a solution to this and have come across various blogs addressing the issue. I have tried manually disabling the jobs and then manually processing the cubes, restarting the DW server, restarting SSAS, etc., to no avail.
    The latest solution we tried was to unregister the DW and re-register it; however, when we went to unregister the DW we received the following error:
    "Failed to unregister from the data warehouse"
    Title: Error
    Text: System.ArgumentException: SM data source not found.
    at
    Microsoft.EnterpriseManagement.ServiceManager.UI.Administration.DWRegistration.Unregistration.DWUnregistrationHelper.AcceptChanges(WizardMode wizardMode)
    So our next step was to unregister the data sources and then re-register them individually. Of the two data sources, we were able to unregister and re-register the DW data source (DW_COMPANY01), but when we tried to unregister the Operational data source
    (COMPANY01) we got the following error:
    Title:  An error was encountered while running the task.
    Text:  Exception has been thrown by the target of an invocation.
    Based on the two errors shown above, I assume we cannot unregister this data source, or the DW as a whole, because it cannot find that operational data source. A couple of things to point out about this data source and the environment to shed some light on the situation:
    Customer upgraded all servers to UR4 around the January time frame
    Customer promoted a new primary MS around January 20th
    Currently, when reviewing the details of the data source, the SDK Server references the computer name of the OLD primary management server
    Looking at the event logs on the DW MS (with SSRS/SSAS), an error shows that the MPSync job failed to finish, and it references the OLD primary management server
    Looking for any guidance on this issue. I have taken many steps to troubleshoot the problem, to no avail. Please let me know if you have any questions or need any more information.

    Hi!
    It was probably a problem with the primary keys, because when the MPSync job does not complete, something will fail in the ETL jobs that follow.
    The solution would have been here: http://technet.microsoft.com/en-us/library/jj614520.aspx
    At this stage, frankly speaking, if you still have all the data in your CMDB and nothing is gone because of grooming, uninstall all DWH components (DW management server, databases) and reinstall the DW completely.
    R.

  • Problem during Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimensions and cube that I have created.
    My scenario:
    I have a temp_table_transaction into which my flat files were loaded. This table was loaded with 168,271,269 records from the flat file.
    I have created a mapping in OWB that loads my temp_table_transaction, joins it with other tables, and applies some expressions and conversion functions so that these numbers fill a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this configuration of my mapping:
    Default operating mode in the running parameters of the mapping config = set based
    My dimensions filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    #1 Problem:
    I have created a cube called transaction_cube with OWB, and it generated and deployed correctly.
    I have created a map to fill my cube with the 168,271,268 records in the staging table called stg_tbl_transaction and deployed it to the server (my cube map operating mode is set based),
    but after running this map it had not completed after 9 hours, and I was forced to cancel it by killing its sessions. I want to know whether this loading time for this volume of data is acceptable, or whether we should expect to spend more time for this volume. Please let me know if anybody has any insight.
    #2 Problem
    To test my map I created a map configured set based in operating mode, selected my stg_tbl_transaction (with 168,271,268 records in it) as the source, and created another table to transfer and load my data into. I wanted to test the time we should spend on this simple map, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the configuration of the map, or something else? Please guide me on these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on a two-socket Xeon 5500 series server with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you:
    It is possible to load from an InfoSet to a cube; we did it, and it was OK.
    Data really is loaded from the InfoSet (cube + master data) into the cube.
    When you create a transformation under a cube, the InfoSet is proposed, and it works fine.
    Now the process is no longer operational and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want.
    Christophe

  • Service Manager Data Warehouse Install - Analysis Server Configuration For OLAP Cubes Fail

    Hello everyone,
    I have an issue with my installation of the Data Warehouse for System Center Service Manager 2012 SP1.
    My install environment is the following:
    Windows Server 2012 – System Center Service Manager (Successfully Installed) - Virtual
    Windows Server 2012 – System Center Data Warehouse (Pending) - Virtual
    Windows Server 2012 – MS SQL Server 2012 – Physical, Clustered 1st of Four Servers
    The SQL Server is a clustered installation with named instances, specifically for SharePoint and Service Manager. Each instance has its own IP address and dynamic ports are turned off. I’m installing using the domain administrator account and I also chose
    to run the installer as administrator. The domain admin has sysadmin rights to the service manager server and instance I’m trying to install on. However, the account does not have sysadmin rights to some of the other instances.
    The install is smooth up until it needs to connect to the Analysis Services database. I have tried connecting to the analysis servers on other SQL servers on site and all were successful. The only difference between the older SQL servers, the SQL 2012 development server, and the SQL 2012 production server I'm trying to install to is that the domain admin account doesn't have sysadmin access on all the databases on the new production server. The SQL server is being installed and configured by a contractor, so if you all have troubleshooting suggestions, I'll need to coordinate with the contractor.
    Starting with the error screen (screenshot not included), I began searching for help online. There seems to be no one else with this issue, or it is not documented properly. I opened a ticket with MS, called the contractor and troubleshot with him, troubleshot as far as I could on my own, and I'm still at a loss as to what is preventing the installer from connecting specifically to the analysis server.
    I first thought the installer was at issue, or that the data warehouse server was. But all signs are pointing at the SQL server. The installer is able to connect to all the other SQL servers – including other 2012 servers (same versions) – so it can't be the installer. I'm pretty sure the SQL server is going to be at issue.
    After looking at this error, I opened the resource monitor and clicked the dropdown to see if it was trying to connect to the correct server and it was. I then connected to the old and new test and development servers successfully. Then connected to the
    SQL 2008 R2 production cluster successfully. I then compared the two servers. The only difference other than the version numbers is that the admin account doesn’t have sysadmin rights on all the SQL 2012 database servers. But the database servers are not the
    problem. The analysis servers are.
    I then checked the event logs and they are empty as far as this issue is concerned. Actually, there are no errors on the SQL 2012 production box and the Data Warehouse box. I then checked the log that the installer creates during every step of the installation
    and this is what is created when the dropdown is clicked for the analysis server configuration screen. The log file location is:
    “C:\Users\admin\AppData\Local\Temp\2\SCSMSetupWizard01.txt”
    In the file is the following text.
    01:03:34:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012
    01:03:34:Using SQL Server 2012 management scope on SCSMSQL2012
    01:03:36:Collecting SQL instances on server SCSMSQL2012
    01:03:36:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:36:Using SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:38:Found SQL Instance: SCSMSQL2012\PWGSQL2012
    01:03:38:Found SQL Instance: SCSMSQL2012\SCSMSQL2012
    01:03:39:Error:GetSqlInstanceList(), Exception Type: Microsoft.AnalysisServices.ConnectionException, Exception Message: A connection cannot be made. Ensure that the server is running.
    01:03:39:StackTrace:   at Microsoft.AnalysisServices.XmlaClient.GetTcpClient(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenTcpConnection(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenConnection(ConnectionInfo connectionInfo, Boolean& isSessionTokenNeeded)
       at Microsoft.AnalysisServices.XmlaClient.Connect(ConnectionInfo connectionInfo, Boolean beginSession)
       at Microsoft.AnalysisServices.Server.Connect(String connectionString, String sessionId, ObjectExpansion expansionType)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetASVersion(StringBuilder sqlInstanceServiceName)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetSqlInstanceList(String sqlServerName, Int32 serviceType)
    I’m now investigating the issue according to this output, and decided to ask you all if you’ve run into this issue and found a resolution.

    I am running into the same issue, but I don't see anything in the instances section related to port/IPv6. I do see it in the listener section; I tried to remove it, but it comes back again. Please help.
    <ConfigurationSettings>
      <Security>
        <RequireClientAuthentication>0</RequireClientAuthentication>
        <SecurityPackageList/>
      </Security>
      <Network>
        <Listener>
          <RequestSizeThreshold>4095</RequestSizeThreshold>
          <MaxAllowedRequestSize>0</MaxAllowedRequestSize>
          <ServerSendTimeout>60000</ServerSendTimeout>
          <ServerReceiveTimeout>60000</ServerReceiveTimeout>
          <IPV4Support>2</IPV4Support>
          <IPV6Support>2</IPV6Support>
        </Listener>
        <TCP>
          <MaxPendingSendCount>12</MaxPendingSendCount>
          <MaxPendingReceiveCount>4</MaxPendingReceiveCount>
          <MinPendingReceiveCount>2</MinPendingReceiveCount>
          <MaxCompletedReceiveCount>9</MaxCompletedReceiveCount>
          <ScatterReceiveMultiplier>5</ScatterReceiveMultiplier>
          <MaxPendingAcceptExCount>10</MaxPendingAcceptExCount>
          <MinPendingAcceptExCount>2</MinPendingAcceptExCount>
          <InitialConnectTimeout>10</InitialConnectTimeout>
          <SocketOptions>
            <SendBufferSize>0</SendBufferSize>
            <ReceiveBufferSize>0</ReceiveBufferSize>
            <DisableNonblockingMode>1</DisableNonblockingMode>
            <EnableNagleAlgorithm>0</EnableNagleAlgorithm>
            <EnableLingerOnClose>0</EnableLingerOnClose>
            <LingerTimeout>0</LingerTimeout>
          </SocketOptions>
        </TCP>
        <Requests>
          <EnableBinaryXML>0</EnableBinaryXML>
          <EnableCompression>0</EnableCompression>
        </Requests>
        <Responses>
          <EnableBinaryXML>1</EnableBinaryXML>
          <EnableCompression>1</EnableCompression>
          <CompressionLevel>9</CompressionLevel>
        </Responses>
        <ListenOnlyOnLocalConnections>0</ListenOnlyOnLocalConnections>
      </Network>
      <Log>
        <File>msmdredir.log</File>
        <FileBufferSize>0</FileBufferSize>
        <MessageLogs>Console;System</MessageLogs>
        <Exception>
          <CreateAndSendCrashReports>0</CreateAndSendCrashReports>
          <CrashReportsFolder/>
          <SQLDumperFlagsOn>0x0</SQLDumperFlagsOn>
          <SQLDumperFlagsOff>0x0</SQLDumperFlagsOff>
          <MiniDumpFlagsOn>0x0</MiniDumpFlagsOn>
          <MiniDumpFlagsOff>0x0</MiniDumpFlagsOff>
          <MinidumpErrorList>0xC1000000, 0xC1000001, 0xC100000C, 0xC1000016, 0xC1360054, 0xC1360055</MinidumpErrorList>
          <ExceptionHandlingMode>0</ExceptionHandlingMode>
          <MaxExceptions>500</MaxExceptions>
          <MaxDuplicateDumps>1</MaxDuplicateDumps>
        </Exception>
      </Log>
      <Memory>
        <HandleIA64AlignmentFaults>0</HandleIA64AlignmentFaults>
        <PreAllocate>0</PreAllocate>
        <VertiPaqPagingPolicy>0</VertiPaqPagingPolicy>
        <PagePoolRestrictNumaNode>0</PagePoolRestrictNumaNode>
      </Memory>
      <Instances/>
      <VertiPaq>
        <DefaultSegmentRowCount>0</DefaultSegmentRowCount>
        <ProcessingTimeboxSecPerMRow>-1</ProcessingTimeboxSecPerMRow>
        <SEQueryRegistry>
          <Size>0</Size>
          <MinKCycles>0</MinKCycles>
          <MinCyclesPerRow>0</MinCyclesPerRow>
          <MaxArbShpSize>0</MaxArbShpSize>
        </SEQueryRegistry>
      </VertiPaq>
    </ConfigurationSettings>

  • SharePoint 2013 Reporting Services & OLAP Cubes for Data Modeling.

    I've been using PowerPivot & Power View in Excel 2013 Pro for some time now, so I am now eager to get set up with SharePoint 2013 Reporting Services.
    Before setting up Reporting Services, I have just one question to resolve.
    What are the benefits/differences of using a normal flat table set up, compared to an OLAP cube?
    Should I base my Data Model on an OLAP Cube or just Connect to tables in my SQL 2012 database?
    I realize that OLAP cubes aggregate data, making it faster to return results, but I am unclear whether this is needed with Data Modeling for SharePoint 2013.
    Many thanks,
    Mike

    So yes, PV is an in-memory cube. When data is loaded from the data source, it's cached in memory, and stored (compressed) in the Excel file. (also, same concept for SSAS Tabular mode... loads from source, cached in mem, but also stored (compressed) in data
    files, in the event that the server reboots, or something similar).
    As far as performance, tabular uses memory, but has a shorter load process (no ETL, no cube processing)... OLAP/MDX uses less memory, by requiring ETL and cube processing... technically tabular uses column compression, so the memory consumption will be based
    on the type of data (numeric data is GREAT, text not as much)... but the decision to use OLAP (MDX)/TAB (DAX) is just dependent on the type of load and your needs... both platforms CAN do realtime queries (ROLAP in multidimensional, or DirectQuery for tabular),
    or can use their processed/in-memory cache (MOLAP in multidimensional, xVelocity for tabular) to process queries.
    if you have a cube, there's no need to reinvent the wheel (especially since there's no way to convert/import the BIDS/SSDT project from MDX to DAX). If you have SSAS 2012 SP1 CU4 or later, you can connect PV (from Excel OR from within SP) directly to the
    MDX cube.
    Generally, the benefit of PP is for the power users who can build models quickly and easily (without needing to talk to the BI dept)... SharePoint lets those people share the reports with a team... if it's worthy of including in an enterprise warehouse,
    it gets handed off to the BI folks who vet the process and calculations... but by that time, the business has received value from the self-service (Excel) and team (SharePoint) analytics... and the BI team has less effort since the PP model includes data sources
    and calculations - aside from verifying the sources and calculations, BI can just port the effort into the existing enterprise ETL / warehouse / cubes / reports... shorter dev cycle.
    I'll be speaking on this very topic (done so several times already) this weekend in Chicago at SharePoint Saturday!
    http://www.spschicagosuburbs.com/Pages/Sessions.aspx
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs

  • Oracle OLAP as OBIEE Data Source

    I've got a couple of questions regarding the use of Oracle OLAP (Analytic Workspace/Cube) as an OBIEE data source.
    First: As a general rule when creating a dimension, we create a total roll-up for the dimension i.e. "Total Product", "Total Geog", "Total Customer" etc... Generally, I don't create a total roll-up for time dimensions. When importing metadata from OLAP to OBIEE, OBIEE creates a "Total" level for all dimensions. Now, I understand why OBIEE does that; to support queries that might exclude one or more dimensions. My question is: what is the best method/procedure to deal with the extra "Total" level?
    Second: I would appreciate it if someone could explain this error for me: [nQSError: 59137] Filter level YEAR is below the projected level Total on dimension CMP_TIME while an externally aggregated measure is present. (HY000). I understand the words, but have no clue what OBIEE is trying to tell me. This error pops up constantly and I see no rhyme or reason that would cause it. The specific case above occurred when I clicked on the sort icon for a measure included in a report.
    Thanks,

    Mark,
    Thanks for the reply. However, I'm not sure I made myself clear. I have created a "Product" dimension in AWM (Analytic Workspace Manager) with the following structure: Product -> Product Line -> Total Product. Within the context of this hierarchy, "Total Product" is the "Grand Total" level. When this data is imported into OBIEE using "Oracle OLAP" as a data source, the Product hierarchy is created in the Physical Layer as an "Oracle OLAP Dimension". In the BMM Layer, the hierarchy is structured as: Product -> Product Line -> Total Product -> Total. There are now two "Total" levels. Naturally only one, the OBIEE-generated Total, is defined as "Grand Total". The only child of the Total level is Total Product. I have two hierarchy levels that are the same. So, do we need both? Should we keep both? Should a dimension defined within AWM for use in OBIEE NOT include a total level? It's not really a problem, it just doesn't seem to make any sense to have TWO total levels within a hierarchy.
    On the second issue, I wish I could provide some detail, but I'm really not sure how I'd do that. That's why I asked for the meaning of the error: what is OBIEE telling me that I'm doing wrong? All I really did was import the metadata, drag it to the BMM Layer, delete some of the hierarchy level keys, rename some columns, and drag the stuff over to the Presentation Layer. So, it's pretty much drag-and-drop.
    Another example of the error: We have a Category Dimension (Sub Category -> Category -> Category Group -> Model -> All Categories -> Total) and I want to see the top 10 values of a measure by Category by Model. In an Analysis, adding the Model column works fine, just not the best visualization. Move the Model column to "Sections" and all works; move the Model column to Pivot Table Prompts and it errors. Obviously, I'm asking OBIEE to do something it doesn't want to do, so I'm looking for the root cause of the error.
    Thanks,

  • What is the difference between OLAP and a data warehouse?

    Hi All,
    Is there any difference between OLAP and a data warehouse? Please share what you know about these. Thank you.
    Please mark it as complete if you get the solution with this reply. TQ.

    A data warehouse is a database containing data that usually represents the business history of an organization. This historical data is used for analysis. Data in a data warehouse is organized to support analysis rather than to process real-time transactions
    as in online transaction processing systems (OLTP).
    OLAP technology enables data warehouses to be used effectively for online analysis, providing rapid responses to iterative complex analytical queries. OLAP's multidimensional data model and data aggregation techniques organize and summarize large amounts
    of data so it can be evaluated quickly using online analysis and graphical tools.
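    To make the aggregation idea concrete, here is a small relational sketch (the table and column names are hypothetical); an OLAP cube effectively precomputes and stores summaries like these so they do not have to be derived from the detail rows at query time:

    -- One pass produces (year, region) totals, per-year subtotals and a grand total;
    -- a cube stores such summaries ahead of time instead of computing them per query.
    SELECT year, region, SUM(sales_amount) AS sales
    FROM   sales_history
    GROUP  BY ROLLUP (year, region);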
    Reference:
    http://technet.microsoft.com/en-us/library/aa197903(v=sql.80).aspx
    http://stackoverflow.com/questions/18916682/data-warehouse-vs-olap-cube
    If this post answers your query, please click "Mark As Answer" or "Vote as Helpful".

  • After processing cube, no data when browsing

    Hi, I copied an Analysis Services (2005) solution project onto my hard drive. I then opened it and changed the data source to point to a data warehouse. When I process the cube, there is no error, but I have no data when I browse.
    If I create a new cube off of the same data source, it works well.
    Any help is appreciated.
    Thanks!

    Hi Eric,
    recheck the other 3 partitions, specifically their Slice and Source properties. See if there's something wrong. Try to execute all the queries. See if the slices are valid (or stated for some partitions but not for the others, like 2009). Then check those properties for your new 4th partition. Also, you said that when you make a new cube, everything's fine. Compare the partition properties in both cases; maybe that gives some clue.
    You could also delete the old partitions, just for testing purposes, so that you only have one.
    Try full processing.
    I'm not sure this will help. I don't know how thoroughly you have checked it. If you're sure you're fine with the partitions, check the measure group bindings and the DSV, as advised by others.
    Regards,
    Tomislav Piasevoli
    Business Intelligence Specialist
    www.softpro.hr

  • Data in ODS, InfoCube and MultiProvider (LISTCUBE) are in sync

    Hi,
    My query is built on a MultiProvider. The data flow is: data source -> ODS, then ODS -> InfoCube, and the MultiProvider contains the InfoCube only.
    Data in the ODS, InfoCube and MultiProvider (LISTCUBE) are in sync.
    The query results do not tie up with the ODS, InfoCube and MultiProvider (LISTCUBE).
    Can anyone let me know why this is happening and how I can resolve it?
    Regards,
    Sharma.

    Hi,
    Thanks for the help.
    I resolved the issue on my own.
    Regards,
    Sharma.

  • Unable to update the Data from Cube to Data Mart

    Hi,
    I have a problem with the data loading to a cube (data mart) in BW. When I checked in RSA3 it is showing 0 records. The data flow is depicted as follows: R/3 -> ODS (BW) -> Cube (BW) -> Cube (BW data mart for APO cube) -> APO system. In BW, the final data target is loaded by deleting the previous request and doing a full load. On checking this final cube (data mart to APO), records are available, whereas checking RSA3 for this final data target (data mart to APO) shows 0 records.
    Why? please help me.
    Regards,
    krishna

    Hi,
    I checked the data mart in RSA3. It is not a matter of full upload or delta upload.
    thanks
    Krishna

  • Data not loading to cube copied from 0SD cubes

    hello everyone,
    Whenever I try to load data from R/3 into a cube created by copying from a 0SD cube, everything runs successfully (InfoPackage, DTP, transformations and all), but when I look into the contents tab, it says this is not a basis cube. Can anyone explain, please?
    thanks in advance

    Hi
    Someone changed the settings of your cube from basic cube to transactional cube (real-time InfoCube in IP).
    Change it from transactional cube back to basic cube; this error will not come up again.
    Regards
    N Ganesh

  • Data in cube is different from PSA in the production system

    Hi friends,
    This is very urgent. The data in the PSA is fine and the same as in R/3. For example, I have sales (billing) for one article, i.e.
    2LIS_13_VDITM, which picked up the data from R/3. When I see the records in the PSA they are good, but when I try to see the same record in the cube, the record is not available in the cube. Only a few records are filtered out between the PSA and the cube, which is leading to a lot of data inconsistency. There are no custom routines that could filter out the data, only standard SAP routines that update the data to the cube. What could be the problem? Any help is appreciated and will be rewarded. Thanks in advance for kind replies.

    Veda,
    In a cube, the data gets added up for similar records.
    Do you have the same number of records in the PSA and the cube?
    If yes, then maybe similar records exist and the key figure is getting summed up in the result.
    Also, how did you search for the same record in the cube? The characteristics go into the dimension tables, and all that is in the fact table is dimension IDs and key figures...
    Do one thing: run a report on the cube and check whether the data is getting summed up.
    Or another workaround: put the PSA data into an Excel (flat) file and then upload it into an ODS with the same records as in production; you will know whether multiple records exist and thereby find out what the problem is due to. (A bad workaround, but you cannot do this in production.)
    Arun

  • Data mart cube to cube copy records are not matching in target cube

    Hi experts,
    I need help with the questions below about a data mart cube-to-cube copy (8M*).
    It's a BW 3.5 system.
    We have two financial cubes:
    Cube A1, sourced from the R/3 system (delta update), and Cube B1, sourced from cube A1 (full update). These two cubes are connected through update rules with one-to-one mapping, without any routines. Basis did a copy of the back-end R/3 system from the production to the quality server approximately 2 months back.
    Cube A1, which extracts the delta load from R/3, is loading fine, but for the second cube (extraction from the previous cube A1) I am not getting the full volume of data; instead I am getting a meagre amount, although the load shows successful status in the monitor.
    We tried giving conditions in the InfoPackage (as was given in the previous year's loading), but it still fetches the same meagre volume of data.
    To check whether it is happening only for that particular cube, we tried another cube sourced through the myself system, and that also gets meagre data rather than the full data.
    For example: for an employee, if the data available is 1,000 records, the system extracts randomly some 200 records.
    Any quick reply will be most helpful. Thanks.

    Hi Venkat,
    Did you do any selective deletions in cube A1?
    First reconcile the data between cube 1 and cube 2:
    match the totals of cube 1 with cube 2.
    Thanks,
    Vijay.

  • Funds management cube getting data from 4 ODS, data not seen in cube

    Hello all,
    I am working on the funds management cubes (0PU_C02 and 0PU_C03). These two cubes get data from 7 or 8 ODS objects. Can someone tell me which ODS objects feed the respective cubes? I have the mappings done, but I am getting very confused.
    In the ODS 0PU_O44 there is RLDNR for Ledger, and I can see that data in the ODS, but when I run the load to the cube I don't see the ledger data (it is a direct mapping; there are no routines or rules).
    Is there any sequence in which we have to load the data?
    Thanks
    Message was edited by: Raj M

    Hi Raj,
    I was going through the forums looking for someone with knowledge of the public sector when I came across your post. Can I get your email ID? I have some questions to ask regarding student administration.
    Thanks.

  • Adding new InfoObject to cube with data

    Hi experts
    Our cube contains data and is fully compressed (E table); there are no requests in the F table.
    We need to add 2 key figures and 2 characteristics.
    The characteristics get a new dimension table.
    We are not using the remodeling technique; we simply add these new InfoObjects.
    There is no need to reload historical data for these new objects.
    We only need data starting from the first load date after the changes occurred.
    Are there any errors in this approach?
    Thanks

    Thomas,
    When you are adding a new dimension to your cube, you are adding a primary key to your E and F fact tables; therefore, when your transport goes in, it will reorganize your cube tables.
    Given that, assuming you are collecting the changes in the normal way and not changing any settings:
    1. Check the UNDO tablespace in the target system. It should have enough free space available, more than the combined size of your E and F fact tables, because the way the transport works is that a copy of the table is taken in the undo tablespace in the database before the tables are adjusted; if your undo tablespace is insufficient, the transport will fail after some time. (A quick way to check is sketched below.)
    2. If this is a big cube, the transport will take time to go in. If Basis imports transport requests in sequence, this transport will hold the queue and might delay others who have smaller transports; make sure your transport goes in last if others are waiting on you to complete.
    3. Also inform Basis that the transport is going to take time; long-running transports can sometimes get the Basis team excited :-)
    These are not technical points, but some bases that can be covered in advance.
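    If the underlying database is Oracle (as it commonly was for BW systems of that era), a rough sketch of the check in point 1 might look like this; the undo tablespace name and the cube's E/F fact table names are hypothetical:

    -- Free space in the undo tablespace of the target system
    SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
    FROM   dba_free_space
    WHERE  tablespace_name = 'PSAPUNDO'   -- hypothetical undo tablespace name
    GROUP  BY tablespace_name;

    -- Combined size of the cube's E and F fact tables, to compare against
    SELECT SUM(bytes)/1024/1024 AS fact_mb
    FROM   dba_segments
    WHERE  segment_name IN ('/BIC/EMYCUBE', '/BIC/FMYCUBE');  -- hypothetical fact table names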
    Hope this helps!
