Load cube 0COOM_C02

Hi experts,
I'm using cube 0COOM_C02, and I'm trying to define the mechanism to load it.
I'm not using any DSO, and the IS I want to load are:
- 0CO_OM_CCA_1
- 0CO_OM_CCA_9
- 0CO_OM_OPA_1
- 0CO_OM_OPA_2
- 0CO_OM_WBS_1
- 0CO_OM_WBS_2
I won't be loading 0CO_OM_NAE_1, 0CO_OM_NAE_2, 0CO_OM_NTW_1, 0CO_OM_NTW_2, 0CO_OM_NWA_1 and 0CO_OM_NWA_2, as Network* information is not used in R/3.
ODS is not being used.
From SAP Help I got the following regarding the loading of these IS:
1) Create an InfoPackage for InfoSources 0CO_OM_CCA_9, 0CO_OM_OPA_6, 0CO_OM_WBS_6, 0CO_OM_NTW_2, 0CO_OM_NWA_2 and 0CO_OM_NAE_2 to load the actual data.
Case A (with delta process): If you use the delta process, you can load your actual data at any time. You do not need to enter the period.
Case B (without delta process): If you do not use the delta process, it is best to load your actual data at the end of the period (after completion of the period-end closing process) by restricting the selection to the expired period.
SAP recommends the delta process.
2) To load the remaining data (such as plan, target or period-end closing) for InfoSources 0CO_OM_CCA_1, 0CO_OM_OPA_1, 0CO_OM_WBS_1, 0CO_OM_NTW_1, 0CO_OM_NWA_1 and 0CO_OM_NAE_1, create a separate InfoPackage for each type of data. This is necessary because of the different data transfer time points. Restrict selection to the relevant value type (such as plan) and the period or periods (such as 1–3 for the first quarter or 1–12 for the full year).
From your experience implementing this cube, I would like to confirm the following:
Q1: I assume that 1) will be used just to load actual data. If I use the delta process (which is what I want), I can load actual data every day, for example. Will the remaining case (2) just be used for target, plan and commitment data? Am I correct?
So with the IP mentioned in 1), only actual data will be loaded, correct? No restriction has to be made on version at IP level?
Q2: Regarding data other than actuals (I only have plan), here is my scenario:
Cost Center – plan defined once a year (at each year end)
WBS – plan defined every time a WBS is created. I will execute reports every month
Order – plan defined every time an order is created. I will execute reports every month
Changing the plan data of a cost center, WBS or order might happen
Taking this scenario into account, I came up with 2 possibilities for creating the plan data loading PC (to be run automatically on day 1 of each month).
IPs created:
Cost Center – IP1 for 0CO_OM_CCA_1, with version and value type restricted (to get plan data)
WBS – IP2 for 0CO_OM_WBS_1, with version and value type restricted (to get plan data)
Orders – IP3 for 0CO_OM_OPA_1, with version and value type restricted (to get plan data)
OptA
Del Index
Execute IP1 (load data to the cube)
Delete Overlapping request
Execute IP2 (load data to the cube)
Delete Overlapping request
Execute IP3 (load data to the cube)
Delete Overlapping request
Generate index
Q2.1: I've seen this option (Delete Overlapping Request) in another thread, but I don't know much about it. Can somebody explain how it works and how it will affect loading performance?
OptB
Del Index
Execute IP1 (load data to the cube), routine to restrict by fiscal period (month year)
Execute IP2 (load data to the cube), restrict by fiscal period (month year)
Execute IP3 (load data to the cube), restrict by fiscal period (month year)
Generate index
Q2.2: I think this might work better in terms of performance, but I don't know if and how it is possible to create a routine that makes it load just the last month's records. Also, if older data changes, this won't be an option. Can you advise on this solution?
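For reference, the data selection of an InfoPackage can be filled by an ABAP routine (routine type 6 in the IP data selection tab). Below is a minimal sketch that restricts the fiscal period to the previous month. It assumes the generated routine template for a 0FISCPER selection (field name FISCPER) and a fiscal year variant whose periods match calendar months; adapt the period logic to your variant.

form compute_fiscper
  tables   l_t_range structure rssdlrange
  changing p_subrc   like sy-subrc.

* Derive the previous calendar month from today's date
* (sketch only: assumes fiscal periods = calendar months)
  data: l_year(4)  type n,
        l_month(2) type n,
        l_fiscper  type rssdlrange-low,
        l_idx      like sy-tabix.

  l_year  = sy-datum(4).
  l_month = sy-datum+4(2).
  if l_month = '01'.
    l_year  = l_year - 1.
    l_month = '12'.
  else.
    l_month = l_month - 1.
  endif.

* Internal fiscal period format is YYYY0PP, e.g. 2010001 = 001.2010
  concatenate l_year '0' l_month into l_fiscper.

* Fill the selection range for the fiscal period
  read table l_t_range with key fieldname = 'FISCPER'.
  if sy-subrc = 0.
    l_idx = sy-tabix.
    l_t_range-sign   = 'I'.
    l_t_range-option = 'EQ'.
    l_t_range-low    = l_fiscper.
    modify l_t_range index l_idx.
  endif.

  p_subrc = 0.
endform.

Note that this only covers the case where older plan data never changes; if it can change, the full load plus Delete Overlapping Request pattern from OptA is the safer choice.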
Q3: I will also be creating a PC to load actual data (delta). Do I have to include both these chains in a metachain, or can I have 2 separate PCs?
Many thanks
Regards
Joana

Hi,
Q2: IS 0CO_OM_WBS_1. I just want to extract plan data from here. Ideally, I would do an init + delta IP for this, but that's not possible, just full IPs. I want to be able to load this every month without duplicating cube data.
It would be possible to load by filtering on fiscal period (previous month) + plan data, but I'm afraid they might change older data, and by loading just the previous month I would lose those records.
So that's why I thought of always loading all plan data and deleting the previous request from the cube.
It depends on the volume of the data. If the data volume is low, you can delete the last request and then do a full load (since it is monthly, this is not a big issue).
If the data volume is huge, then you can go for a selection in the InfoPackage (full load with the previous month). Generally, once plan data is published it does not change.
If the data volume is manageable, then you can go with a full load and delete the previous request.
By doing that in the PC, will it have any impact on actual data in the cube? No.
Q3: Can I have 2 PCs: one for actuals (based on 0CO_OM_WBS_6) and one for plan (based on 0CO_OM_WBS_1)? Or do I need a metachain? I don't think so, because I'm running them at different times (one every day, the other every month).
2 separate chains are required, since one is daily and the other is monthly.
For this cube, 0COOM_C02, are you aware of another solution to improve loading performance?
At this point in time this model will give ideal performance; once the data volume increases, you will need to look at performance measures.
Regards,
Satya

Similar Messages

  • LOADING ACTUAL COSTS FOR RE OBJECTS TO CUBE 0COOM_C02

    Hi,
    I use the BC cube 0COOM_C02. There are DataSources for loading costs of objects like cost centers, WBS elements, internal orders and networks, but there is no DataSource for loading costs of RE objects. At the moment, it is possible to see RE objects like buildings only as part of the sender side (the origin of the cost).
    Of course table COEP in R/3, which is the source of that data, does contain RE objects as part of the receiver side.
    Is there a way to extract these records to this cube?
    regards,
    Hadar

  • How to add fields to an already loaded cube or DSO and how to fill records in it

    How do I add fields to an already loaded cube or DSO, and how do I fill them? Can anyone tell me the critical issues in the data loading process?

    This is a sensitive task with regard to large volumes of data in InfoProviders.
    The issue is reloading the data in case of adjusted InfoProvider structures.
    Indeed there are some tricks. See the following:
    http://weblogs.sdn.sap.com/cs/blank/view/wlg/19300
    https://service.sap.com/sap/support/notes/1287382

  • Issue with data while uploading data from cube 0COOM_C02

    hi,
    I am uploading data from InfoCube 0COOM_C02 (costs cube).
    The InfoCube receives the data from InfoSources 0CO_OM_CCA_9 and 0CO_OM_OPA_6.
    When I transfer the data records from InfoCube 0COOM_C02 to another Z cube of the same structure via data mart,
    I see that some of the data records are missing, and the total costs for a cost center differ between the two InfoCubes.
    Has someone faced a similar issue before? Are there any prerequisites before the data from costs cube 0COOM_C02 can be transferred to another cube via data mart?
    Please clarify.
    Subramanian

    Hello Scooby,
    The number of records is bound to be less, as there is aggregation across request IDs happening in the target Z cube, while in the lower cube the requests are not aggregated.
    The best and only way to check is to use a query and compare result sets by including and excluding the request IDs.
    Please don't forget to grant some points.
    Regards
    Nikhil

  • Loading Cube From Flat File

    Hi All,
    Example: a cube is loaded from a flat file, say with 10 records, and during the next load only 1 record is changed.
    Is it possible to load the cube with all 10 records, without deleting the previous request, in a way that does not result in duplicate records in the cube?

    Hi,
    An InfoCube is always additive, so there will be duplicate records. Only a DSO has the overwrite option, so for that you have to go for a DSO.
    But if you need to load into the InfoCube only, then you can do one thing:
    Start process
    Delete Target Content (Process: Complete Deletion of Data Target Contents)
    Load InfoCube
    First the content of the cube will get deleted, then the cube will be loaded. This will solve your problem.
    Hope this helps.
    Regards,
    Debjani

  • How to add a time characteristic (date) to an already loaded cube?

    Hi All,
    We have a cube that has been loaded (by daily deltas) for more than two years now. How can we add a date to this cube? What are the possibilities?
    1. Do we need to delete all data, add that date, and reload? (Quite tedious and time consuming, out of the question for now.)
    2. Just add that date to a new dimension and load from now on? But we may need past data for that characteristic.
    3. If we copy that cube to another cube, add that characteristic to the old cube, and then reload the data from the copied cube into the old cube, is it possible to load all the data for that new field for the last two years?
    Please let me know which solutions have worked, as I am doing this in the production system.
    Regards,
    Robyn.

    Hi Eric,
    Yes, that might be possible, but that characteristic cannot be derived from the data present in the cube, as it needs to be derived from R/3 by changing the extract program of that cube.
    The plan is to create a new cube with a structure similar to the old one plus this new date field, populate the new cube with, say, the required years of data, and then build the query using these two cubes in a MultiProvider.
    Do you have any alternative solution for this?
    Regards
    Robyn.

  • Error Caller 09 contains error message - Data Marts loading (cube to ODS)

    Dear all,
          Please help me with this problem; it is very urgent.
          I have one process chain that loads data from BW to BW only, through data marts. In that process chain, one process loads data from one cube (created by us) to one ODS (also created by us). Data is loaded through full update, for the selected period specified in the 'Calendar Day' field in the data selection.
          Previously I was able to load data for 2 months, but some days ago the extraction process suddenly got stuck in the background for a long time and showed the following error:
              Error message from the source system
              Diagnosis
             An error occurred in the source system.
              System Response
             Caller 09 contains an error message.
             Further analysis:
             The error occurred in Extractor . 
             Refer to the error message.
             Procedure
             How you remove the error depends on the error message.
             Note
             If the source system is a client workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request.
             We then killed that process on the server, and after another attempt it showed some calmonth...timestamp error. After reducing the data selection period the load succeeded, and I was able to load data for 20 days. After some days the process got stuck again; I followed the same procedure, reduced the period to 15 days and continued. Now I can't even load data for 5 days successfully in one attempt; I have to kill the process in the background and repeat it, and then sometimes it gets loaded.
             Please suggest some solutions as soon as possible. I am waiting for your reply. Points will be assigned.
             Thanks,
              Pankaj N. Kude

    Hi Friends!
    I didn't find any short dump for this in ST22.
    What actually happens is that the request continues to run in the background indefinitely. At that time the Status tab in the process monitor shows these messages:
      Request still running
      Diagnosis
      No errors found. The current process has probably not finished yet.
      System Response
      The ALE inbox of BI is identical to the ALE outbox of the source system
      or
      the maximum wait time for this request has not yet been exceeded
      or
      the background job has not yet finished in the source system.
      Current status: in the source system
    And the Details tab shows the following messages:
      Overall status: Missing messages or warnings
      Requests (Messages): Everything OK
        Data request arranged
        Confirmed with: OK
      Extraction (Messages): Missing messages
        Data request received
        Data selection scheduled
        Missing message: Number of sent records
        Missing message: Selection completed
      Transfer (IDocs and TRFC): Everything OK
        Info IDoc 1: Application document posted
        Info IDoc 2: Application document posted
      Processing (data packet): No data
    This process runs indefinitely, then I have to kill it from the server, and then it shows the Caller 09 error in the Status tab.
    Should I change the value of that memory parameter on the server or not? We are planning to try it today. Is it really related to this problem? Will it be helpful? What are the risks?
    Please give your suggestions as early as possible; I am waiting for your reply.
    Thanks,
    Pankaj N. Kude

  • After upgrade to BI NW4.0S issue in loading cube from ODS

    Hi,
    I have an SAP-delivered ODS (0CRM_OPPT_H) which I had been loading for some time. The cube 0CRM_C04 was loaded from the ODS a couple of times. I have now deleted all the data from the cube and want to load it again from the ODS. In between, our BW was upgraded from 3.5 to BI NW04s.
    I have done all the prerequisites: replicated the 8* export DataSource, deleted and reactivated the update rules and transfer rules, and then tried to load the cube from the ODS. However, every time I get the following error:
    DataSource 80CRM_OPPI does not have the same status as the source system in the Business Information Warehouse.
    The time stamp in the source system is 12/23/2005 10:09:36.
    The time stamp in the BW system is 11/09/2005 13:02:29.
    Is it due to the upgrade? I have done everything so that this doesn't happen, but still could not resolve it.
    Thanks

    Are you sure you've replicated the DataSource 80CRM_OPPI? Try replicating it individually again. If that doesn't help, have you tried reactivating the ODS Object itself?
    Regards, Klaus

  • Error when loading cube

    I get the following error repeatedly when trying to load data into my CO-PA cube:
    InfoObject SALESDEAL does not contain alpha-conforming value 89866
    The value (shown here as 89866) could be any number between 1 and 99999. No matter what the number happens to be, I'm getting this type of error.
    Facts: my InfoObject (0SALESDEAL) length is set to 10 and the conversion routine is ALPHA, which should pad with leading zeros.
    Any ideas why this generates an error upon loading? Should I turn off the conversion routine?
    Thanks!

    Patrick,
    This happened because of the ALPHA conversion, as the others said before. Unfortunately it happened on this InfoObject, and you will find some more as and when they occur.
    By default, BW expects that R/3 will send alpha-converted values (if the InfoObject is 10 digits/chars, R/3 will send exactly that). If data gets loaded from, say, flat files, the system will do the conversion for you.
    By checking the box in the transfer rules, we tell the system to take care of the alpha conversion for that InfoObject even if the data comes from R/3.
    Hope I have not confused you.
    Regards
    Mavi
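    For illustration, a minimal ABAP sketch of what the ALPHA routine does internally; CONVERSION_EXIT_ALPHA_INPUT is the standard function module behind it, and the variable names here are just for the example:

    data: lv_external(10) type c value '89866',
          lv_internal(10) type c.

    " Pad the purely numeric value with leading zeros up to the
    " full length of the InfoObject (internal, alpha-converted format)
    call function 'CONVERSION_EXIT_ALPHA_INPUT'
      exporting
        input  = lv_external
      importing
        output = lv_internal.

    " lv_internal now holds '0000089866'; a value already delivered in
    " this internal format passes the alpha-conformity check on loading.

    So an error like the one above typically means the source delivered '89866' where BW expected the already-converted '0000089866'.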

  • Data load buffer [1] does not exist error when loading cube

    Hello
    I'm trying to load data using Essbase Studio. Everything was working fine; the load used to complete very quickly.
    After updating the SQL query to load a larger period (5 years instead of 2), the load fails after exactly 10 minutes! I'm wondering if there's a timeout configuration or something similar in Essbase, because I have other cubes with large queries that also stop after exactly 10 minutes.
    The environment is:
    Essbase 11.1.2.1 on Linux 64 bits
    Data source: Oracle 10r2 database
    This is Essbase Studio Log
    Failed to deploy Essbase cube
    Caused By: Failed to load data on database: CartTst.
    Caused By: driver.DriverException.GetDatabaseInfoFailed
    Caused By: IO Error: Socket read timed out
    Caused By: Socket read timed out
    This is the Cube Application Log
    [Thu Jul 19 18:34:38 2012]Local/CartTst///1136855360/Info(1013210)
    User [ERS0411@AD_Directory] set active on database [CartTst]
    [Thu Jul 19 18:34:38 2012]Local/CartTst/CartTst/ERS0411@AD_Directory/1114749248/Info(1013091)
    Received Command [LoadBufferTerm] from user [ERS0411@AD_Directory]
    [Thu Jul 19 18:34:38 2012]Local/CartTst/CartTst/ERS0411@AD_Directory/1114749248/Error(1270040)
    Data load buffer [1] does not exist

    I am getting the same error: Data load buffer [2] does not exist.
    When I changed the load rule, it worked fine.
    Thanks Glenn

  • Flat file load - Cube

    Hi Experts,
    I have loaded data to a cube through a flat file in BI 7. The request is green in the Manage screen and reportable, but the transferred and added records are 0. Please help me out.
    Regards,
    Ravi

    Did you run a delta DTP from PSA to the cube? I guess that request was already loaded; check in the cube.
    If the request from the PSA was already loaded to the cube, then the next time you run the delta DTP it will fetch 0/0 records.
    Are you sure you have loaded the PSA with a new request that was not yet loaded to the cube?

  • Suggestions for eliminating downtime while loading cubes

    Hi all,
    we are currently looking at multiple daily refreshes of one of our Essbase cubes. One of the challenges I'm currently facing is that the cube is down while it's being loaded. Any users who try to query the cube during this downtime receive the message "Cannot proceed while the cube is being loaded".
    I'm sure there must be a way to set up an environment where the cube database remains available during loading. For example, is it possible to copy the cube database to another location after a build, then "deploy" the cube to users from there? That way, when the cube is being reloaded, it would only affect the "deployed" cube once the build is finished and it is copied to the deployment area.
    I would welcome advise from anyone on how to avoid this kind of downtime as it will be unfeasible to implement multiple daily refreshes (the build already takes about an hour and a half... that's a lot of down time already) without some kind of strategy whereby the previous build is only discarded completely once the current build is ready.
    I'd greatly appreciate any advice.
    Thanks and kind regards,
    Mickey G

    Hi Mickey,
    The better solution for that would be to have two databases in the application, for example Basic1 and Basic2.
    During business hours, users enter data in the Basic1 database. Before doing the cube build, you archive all data to a text file. Then, in the cube build load step, you swap the cubes (like a=b and b=a): disconnect the connections from Basic1 and shift them to the Basic2 database. By doing this, the Basic2 database is available to the users (both databases should be kept in sync in outline structure) and they can do their transactions on Basic2.
    The Basic1 database is then rebuilt with fresh data.
    The Basic2 database is carrying one-day-old data.
    And once the cube build is done, swap the cubes back.
    Please let me know if this is clear; I will try to explain it in a better manner if not.
    Thanks & Regards,
    Upendra.

  • Need help troubleshooting poor performance loading cubes

    I need ideas on how to troubleshoot performance issues we are having when loading our InfoCube. There are eight InfoPackages running in parallel to update the cube. Each InfoPackage can execute three data packages at a time. The load performance is erratic. For example, if an InfoPackage needs five data packages to load the data, data package 1 is sometimes the last one to complete. Sometimes the slow performance is in the update rules processing, and other times it is on the insert into the fact table.
    Sometimes there are no performance problems and the load completes in 20 mins.  Other times, the loads complete in 1.5+ hours.
    Does anyone know how to tell which server a data package was executed on?  Can someone tell me any transactions to use to monitor the loads while they are running to help pinpoint what the bottleneck is?
    Thanks.
    Regards,
    Ryan

    Some suggestions:
    1. Collect BW statistics for all the cubes. Go to RSA1, select the cube, and on the toolbar choose Tools - BW Statistics. Check the boxes to collect both OLAP and WHM.
    2. Activate all the technical content cubes, reports and relevant objects. You will find them if you search for 0BWTC* in the Business Content.
    3. Start loading data to the technical content cubes.
    4. There are a few reports on these statistics cubes; run them and you will get some ideas.
    5. Try scheduling loads sequentially instead of in parallel.
    Ravi Thothadri

  • Load cube data at multiple levels

    Hi All,
    I have a time dimension with the following hierarchies:
    all_time --> Year --> Quarter --> Month
    all_time --> Week
    My fact table contains data at week and month level. I mapped my fact table time_key to both levels (week and month) in AWM, but when I load data into the cube, AWM loads data only at the week level and ignores all month fact rows in the fact table. How can I load data at both levels?
    Olap Version: 10.2.0.4
    Thanks
    Dileep.

    I am trying to wrap my mind around your design. It appears that you have a fact table with two levels of granularity. Why not just have one hierarchy with a granularity at the week level?
    All Time --> Year --> Quarter --> Month --> Week
    Do you have some facts that relate only to a month level? If so, can you just relate the month total to one of the weeks in the month? Or is it acceptable to just have data at the month level and eliminate the week level?
    If your week and month data are mutually exclusive, you could run into problems. When you look at data rolled up to the year level, for example, it would exclude the data that was inserted at the week level.
    When I think of multiple hierarchies of time, I first think of calendar years versus fiscal years. The two hierarchies are completely unique due to a business requirement, yet they are clearly related as measures of time (years, months, etc.). But a fiscal month never rolls up into a calendar quarter, or a calendar day into a fiscal week or year, because the hierarchies are distinct. But the fact table is at only one level of granularity (day or week or month), no matter how it rolls up into the separate hierarchies.
    At first glance, your two hierarchies look like they should be one.

  • Loading cube data

    Hi Experts,
    We have a cube in production whose data we need for a new cube.
    How do we load the entire cube's data (including compressed data) to the new cube?
    Thanks Much
    Jason

    Hi Jason,
    1) Create update rules for the new cube, with the old cube as source.
    2) In the update rules, give the old cube name and maintain the mapping.
    3) Under the Data Marts source system, create an InfoPackage for the export InfoSource named 8XXXXXX (8 + source cube name).
    4) Perform a full load.
    Regards,
    Mohan
