Data movement between cubes.

hi experts,
I am facing a problem while moving data from two different cubes into another cube. I am able to map all the fields, but while executing it gives a process error. Please let me know how to do this.
with regards,
Latha Shree

In order to load an InfoCube from two other cubes, create a transformation between the final cube and the first cube (from which you are extracting data), with the first cube as the source and the final cube as the target. Map the respective fields and activate the transformation. Then create a DTP and activate it.
Similarly, create one more transformation between the second and final cube, along with its DTP, and activate them.
Then execute the two DTPs, which will load the data from the first and second cubes into the final cube.
You can also refer to the following link for more information:
BI to BPC Mapping
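The two-transformation/two-DTP flow above amounts to mapping each source's fields onto the target structure and loading both result sets into one cube. A hedged Python sketch of that idea (the field names and sample values are invented for illustration; this is not BW code):

```python
# Illustrative only: mimics two transformations feeding one target cube.
# Field names (doc_no, amount) and sample records are invented.

def transform(records, field_map):
    """Apply a source-field -> target-field mapping (one 'transformation')."""
    return [{tgt: rec[src] for src, tgt in field_map.items()} for rec in records]

cube1 = [{"vbeln": "001", "netwr": 100.0}]
cube2 = [{"docnum": "002", "value": 250.0}]

# One mapping (and one "DTP" execution) per source cube; both load the same target.
target = []
target += transform(cube1, {"vbeln": "doc_no", "netwr": "amount"})
target += transform(cube2, {"docnum": "doc_no", "value": "amount"})

print(target)  # two records with unified target field names
```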

Similar Messages

  • Data mismatch between Cube & Report

    Gurus,
    I am running a 'Sales Forecast & Commitment Analysis' report. For some reason, when I drill down to the Profit Center, Major Category and Material level, my individual line items do not sum to the 'Result' row, i.e. there is a mismatch between the line-item numbers and the result.
    However, I checked with the ListCube functionality on the cube and see no fault with the data in it.
    I also sat with the Basis/DBA team; there isn't any fault on the DB side or with indexing either. I have checked the report and the aggregates as well, and I am not able to make anything out.
    Gurus, please suggest a solution.
    Thanks & Regards,
    Shree

    Hi,
    In your query, click on the particular key figure and select the Calculations tab from its properties.
    Check what you have in 'Calculate Result As' there; set it to Total and see if that solves the problem.
    Also check the Aggregation tab for any exception aggregation, and in the Display tab see whether any scaling factor is used; generally no scaling factor should be used.
    Regds,
    Shashank

  • Correct tool for my purpose of data movement between servers

    Hello All,
    I am in the process of copying data from a source SQL Server table to a destination SQL Server table. The requirement is that only new or updated data needs to be migrated to the destination table, once a week. Our source table has 23 million rows and is growing.
    I researched two different solutions and would like to know if anyone has feedback on them.
    1. MERGE - used to sync data between the source and destination tables via an SSIS package. The problem, according to my research, is that with this amount of data the transaction log will grow by leaps and bounds. Not the way to go.
    2. Replication - I have started researching this. Would it be an ideal solution?
    Many thanks.

    Transactional replication is the best fit here. You should be able to get near-real-time synchronization between your source and destination servers, provided you have a primary key on the table you are replicating.
    If your skill set is with SSIS, the merge component will also work.
    If you back up your transaction log every 20 minutes or so, the log will be maintained and you should not see explosive growth.
    Note that both the SSIS merge component and transactional replication will lead to large transaction-log growth unless you maintain the log.
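The MERGE approach discussed above boils down to an upsert: insert rows whose key is new, update rows whose key exists but whose values changed. A hedged, purely illustrative Python sketch of that logic (table rows modeled as dicts keyed by primary key; this is not T-SQL MERGE itself, and the sample values are invented):

```python
# Purely illustrative "only new or updated rows" sync, the behavior that
# T-SQL MERGE (or an SSIS merge) provides against real tables.

def sync(source, destination):
    """Insert new keys, update changed rows; return (inserted, updated)."""
    inserted = updated = 0
    for pk, row in source.items():
        if pk not in destination:
            destination[pk] = dict(row)
            inserted += 1
        elif destination[pk] != row:
            destination[pk] = dict(row)
            updated += 1
    return inserted, updated

src = {1: {"name": "a"}, 2: {"name": "b2"}, 3: {"name": "c"}}
dst = {1: {"name": "a"}, 2: {"name": "b"}}

print(sync(src, dst))  # prints (1, 1): one new row, one changed row
```

Unchanged rows are skipped entirely, which is what keeps the weekly run cheap compared to reloading all 23 million rows.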

  • Copy data between cubes

    I need to copy data between two cubes (through a business rule). Can I do it using partitioning/replication? If so, does anyone have an example of how it is done? I'm currently using @XREF, but that does not transfer data for blocks that don't already exist in the target database.
    I'm very new to this, so a detailed description would help.
    Thanks for your help.

    Yes, partitions are great. I like to use replicated partitions because I can control the data, deal with integrity issues, etc. Your usage may vary.
    Basically, you go to your "Source" database, go to the partitions menu, and choose "Create New Partition". You then walk through each of the tabs in the partition dialog:
    Type tab: choose the type - let's say a "replicated" partition.
    Connection tab: choose which databases will be connected by the partition.
    Areas tab: I like to check "Use Text Editor" to just type out the formulas, and "Show Cell Count" to gain some confidence that the formulas are working as planned. Here you define what data moves from source to target. For example, I might set up the following:
    Source:
    ("Actuals Data"),
    @LEVMBRS("Province",0),
    @GENMBRS("Company",2)
    Target:
    ("Actuals Data"),
    @LEVMBRS("Province",0),
    @GENMBRS("Company",2)
    If the names don't match, you can adjust that in the advanced properties or mapping properties. (If you have multiple slices of data, use the advanced mapping.)
    Now validate and save.

  • Data load taking very long time between cube to cube

    Hi
    In our system, data loading between cubes using a DTP in BI 7.0 is taking a very long time.
    Most of the time is consumed in the 'start of extraction' step alone. Can anybody help in reducing the start-of-extraction time?
    Thanks
    Kiran

    Please elaborate your issue a little. How is the mapping between the two cubes - is it one-to-one, or is there a routine in the transformation? Is there any filter or routine in the DTP? Also, did you delete the indexes before loading data into the cube?
    Regards,
    Sushant

  • ABAP Function Module Example to move data from one Cube into Another

    Hi experts,
    Can anyone please help out with this?
    A simple ABAP function module example to move data from one cube into another cube.
    (How do I send the data from one client to another client using a function module?)
    Thanks
    -Upen.
    Moderator message: too vague, help not possible, please describe problems in all technical detail when posting again, BI related? ("cube"), also search for information before asking.
    Edited by: Thomas Zloch on Oct 29, 2010 1:19 PM

    This is the start routine to duplicate records in two currencies.
    * Start routine: duplicate each EUR record so the revenue is available
    * in both currencies (USD from the group-currency value, EUR from the
    * local-currency value).
    DATA: datew    TYPE /bi0/oidateto,
          datew2   TYPE rsgeneral-chavl,
          fweek    TYPE rsgeneral-chavl,
          prodhier TYPE /bi0/oiprod_hier,
          market   TYPE /bic/oima_seg,
          segment  TYPE /bic/oizsegment.
    DATA: BEGIN OF s_data_pack OCCURS 0.
            INCLUDE STRUCTURE /bic/cs8zsdrev.
    DATA: END OF s_data_pack.
    * Work on a copy of the incoming data package, then rebuild it.
    s_data_pack[] = data_package[].
    REFRESH data_package.
    LOOP AT s_data_pack.
      MOVE-CORRESPONDING s_data_pack TO data_package.
      IF data_package-loc_currcy = 'EUR'.
    *   Append the record once valued in group currency (USD) ...
        data_package-netval_inv = data_package-/bic/zsdvalgrc.
        data_package-currency   = 'USD'.
        APPEND data_package.
    *   ... and once valued in local currency (EUR).
        data_package-netval_inv = data_package-/bic/zsdvalloc.
        data_package-currency   = 'EUR'.
        APPEND data_package.
      ELSE.
    *   Non-EUR records are appended once, valued in group currency (USD).
        data_package-netval_inv = data_package-/bic/zsdvalgrc.
        data_package-currency   = 'USD'.
        APPEND data_package.
      ENDIF.
    ENDLOOP.
    This is to load the quantity field:
    RESULT = COMM_STRUCTURE-BILL_QTY.
    This is to load the value field:
    RESULT = COMM_STRUCTURE-NETVAL_INV.
    UNIT = COMM_STRUCTURE-CURRENCY.
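As a hedged cross-check of the start routine's logic, here is the same record-duplication rule sketched in Python (field names follow the ABAP above; the sample values are invented and this is not executable BW code):

```python
# Hedged Python rendering of the ABAP start routine: EUR records are emitted
# twice (group-currency value as USD, then local-currency value as EUR);
# all other records are emitted once in USD.

def duplicate_currencies(package):
    out = []
    for rec in package:
        # Every record gets a USD row from the group-currency value.
        out.append({**rec, "netval_inv": rec["zsdvalgrc"], "currency": "USD"})
        if rec["loc_currcy"] == "EUR":
            # EUR records additionally get an EUR row from the local value.
            out.append({**rec, "netval_inv": rec["zsdvalloc"], "currency": "EUR"})
    return out

rows = [{"loc_currcy": "EUR", "zsdvalgrc": 110.0, "zsdvalloc": 100.0},
        {"loc_currcy": "GBP", "zsdvalgrc": 130.0, "zsdvalloc": 100.0}]
print(len(duplicate_currencies(rows)))  # prints 3
```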

  • Data extraction from between cubes

    Hi gurus,
    I have InfoCube1 in BW System1 and InfoCube2 in BW System2. I want to extract the data from InfoCube1 to InfoCube2. Can anybody list the steps to do this?
    Thanks
    Pradeep

    Hi Pradeep,
    You need to set up the connection between the two BW systems, the same way we connect R/3 systems to BW.
    Then, to extract data from an InfoCube in BW System1:
    1. Right-click on the InfoCube.
    2. Select "Additional Functions".
    3. Select "Generate Export DataSource".
    This will create an export DataSource for the cube. You can use this DataSource to extract data from the cube into BW System2.
    In BW System2:
    1. Go to the list of SAP source systems.
    2. Search for your source system (BW System1).
    3. Right-click on this source system and select "Replicate DataSources".
    4. After replication is complete, right-click on the source system and select "Go to DataSource tree".
    5. Search here for the export DataSource created in BW System1.
    6. Create transformations between this DataSource and your cube in BW System2.
    Regards,
    Geetanjali

  • Badi: move data from bpc cube to bw cube

    Hi friends,
    I would like to move data from a BPC application (i.e. a BPC cube) to a BW cube. The BPC cube has InfoObjects starting with /CPMB/* and the BW cube has InfoObjects starting with Y*.
    I want to write BAdI code to move data from the BPC cube to the BW cube. Is it possible to move data via a BAdI in our case?
    I know it's possible through an APD. Any suggestions, please? We are on BPC 7.5 NW SP04.
    Thanks,
    naresh

    Hi Naresh!!!
    I am doing the same thing. I got the data read inside the BAdI and converted the comma-separated values into a properly structured table, but I don't know how to move this field-symbol structure into a normal internal table. Can you help me? Could you give me your contact details, please?
    Regards,
    Surya Tamada.

  • Issue with Data Loading between 2 Cubes

    Hi All
    I have a Cube A which holds a huge amount of data - around 7 years' worth. This cube is on BWA. In order to free up space from this cube, we have created a new Cube B.
    We have now started loading data from Cube A to Cube B based on 'created on'. But we are facing a lot of memory issues; in fact we are unable to load even a week of data. As of now we are loading one date at a time, which is not workable, as it will take a long time to load 4 years of data.
    Can you propose some alternate way to make this data transfer between the two cubes faster? I thought of loading Cube B from the DSO under Cube A, but that's not possible, as the DSO does not hold data that old.
    Please share your thoughts.
    Thanks
    Prateek

    Hi Suv / All,
    I have tried running with parallel processes, as there are 4 in my system. There are no routines between the cubes. There is already a MultiProvider on this cube. I just want to move 4 years of data from this cube into another.
    1) Data packet size 10,000 - 8 out of some 488 packets failed.
    2) Data packet size 20,000 - 4 out of some 244 packets failed.
    3) Data packet size 50,000 - waited for some 50 minutes with no extraction, so I killed the job.
    Error : Dump: Internal session terminated with runtime error DBIF_RSQL_SQL_ERROR (see ST22)
    In ST22:
    Database error text........: "ORA-00060: deadlock detected while waiting for
      resource"
    Can you help resolve this issue, or give some pointers?
    Thanks
    Prateek
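One common way around such deadlocks and oversized requests is to split the load into smaller selections, e.g. monthly ranges on 'created on', and run them as separate DTP requests. A hedged Python sketch of generating such monthly filter ranges (the sample dates are invented; how the ranges are fed to the DTP is left out):

```python
# Illustrative sketch: build monthly (from_date, to_date) pairs that could
# serve as DTP filter ranges on 'created on'. Sample dates are invented.
from datetime import date, timedelta

def month_ranges(start, end):
    """Return (first, last) date pairs covering [start, end] month by month."""
    ranges = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        ny, nm = (y + 1, 1) if m == 12 else (y, m + 1)
        # Last day of this month, clipped to the overall end date.
        last = min(end, date(ny, nm, 1) - timedelta(days=1))
        ranges.append((max(start, date(y, m, 1)), last))
        y, m = ny, nm
    return ranges

chunks = month_ranges(date(2010, 1, 15), date(2010, 4, 10))
print(len(chunks))  # prints 4
```

Each chunk is far smaller than a 4-year selection, so packet failures and lock contention are confined to one small request at a time.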

  • Implementation Opt Design [Opt 31-67] problem for AXI bus between DataMover IP and AXI Interconnect

    I'm using Vivado 2014.4 and Windows 7 64-bit for my Zynq design. Previously the design was fine; I made some revisions and then ran into this problem.
    If the synthesis strategy is Flow_RuntimeOptimized (Vivado Synthesis 2014), everything works.
    If the synthesis strategy is the default options (the synthesis settings are shown in the picture), synthesis is still fine, but implementation fails. Some of the error messages are shown below.
    The error messages show there are unconnected pins on axi_interconnect_1. The connections between the AXI master from my own IP (the AXI port from a DataMover IP) and the interconnect are shown in the attached picture.
    I also checked the synthesis schematic. The connections of the axi_interconnect have some pins unconnected, as shown in the picture (interconnect_schematic_synth.PNG). The connections of my IP are good, but it is missing some pins (like _arready, _rvalid, _ruser, _bid...). The AXI master port in my IP comes from the DataMover IP, which by default does not have these pins. My IP's AXI port declares them, but they are not actually connected to anything inside the IP.
    By the way, the project previously worked well in my design, but now it doesn't.
    I also checked the connections of the axi_interconnect when the synthesis strategy is Flow_RuntimeOptimized; that schematic shows all pins connected.
    Please help. Thanks.
    Sam
    ImplementationOpt Design[Opt 31-67] Problem: A LUT1 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/size_mask_q[3]_i_1__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/access_is_incr_q_i_1__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I1, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/access_is_incr_q_i_1__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/command_ongoing_i_2__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/size_mask_q[1]_i_1__0.
    Hi Muzaffer,
    Thanks for your reply.
    I tried implementation with the opt_design option -directive Explore; it didn't work.
    I also disabled opt_design and enabled phys_opt_design; it still gives the same error during implementation.
    I will try deleting the _AR_ (like _arready, etc.) and _R_ (like _rdata, _rid, etc.) pins from the AXI4 port in my IP. The DataMover IP doesn't contain these pins, so I want to see whether it works that way.
    Hopefully the new Vivado version will help.

  • Move data from one cube to another cube

    Hi,
    I am on BW 3.5. I moved the data from one cube to another and found that the number of records in the original cube does not match the newly created cube. For example, the original cube contains 8,549 records while the backup cube contains 7,379 records.
    Please help me with what I need to look at and, in case the records are getting aggregated, how I can check the aggregated records.
    Regards,
    Tyson

    Dear Tyson,
    Check whether there are any update rules in your transfer; if so, look into them.
    Just go through these methods for transferring data from one cube to another completely, without missing records.
    Update rules method:
    If the cube is updated from an ODS, you can create update rules for Cube2 and update from the ODS.
    Or you can try the data mart scenario:
    1. Right-click Cube1 and choose 'Generate Export DataSource'.
    2. Create update rules for Cube2, assigned to Cube1.
    3. RSA1 -> Source Systems -> BW Myself, right-click 'Replicate DataSources'.
    4. RSA1 -> InfoSources -> search for the name 8CUBE1.
    (If you don't find it, right-click the root node 'InfoSources' -> 'Insert Lost Nodes'.)
    5. From that InfoSource you will find the assigned DataSource; right-click it, choose 'Create InfoPackage', then schedule and run.
    Copy From:
    While creating the new cube, give the old cube's name in the "Copy from" section. It copies all the characteristics and key figures, and even the dimensions and navigational attributes.
    Another option:
    The steps for copying the contents of one cube to another:
    1. Go to Manage -> Reconstruct of the new cube.
    2. Select the selection button (the red, yellow, blue diamond button).
    3. In the selection screen, give the technical name of the old cube, the request IDs you want to load, and the from/to dates.
    4. Execute, and the new cube will be loaded.
    It's that easy!
    Refer this link:
    Copying the structure of an Infocube
    Reward if helpful,
    Regards
    Bala

  • How to identify the data mismatch between inventory cube and tables?

    Hi experts,
    I have a scenario: how do I identify data mismatches between 0IC_C03 and the base tables, and what steps should I follow to avoid such mismatches?

    Hi
    You can use the data reconciliation method to check the consistency of the data between R/3 and BW. Please check the links below:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a0931642-1805-2e10-01ad-a4fbec8461da?QuickLink=index&overridelayout=true
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/d08ce5cd-3db7-2c10-ddaf-b13353ad3489
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/7a5ee147-0501-0010-0a9d-f7abcba36b14?QuickLink=index&overridelayout=true
    Thanx & Regards,
    RaviChandra

  • Moving data to archive cube

    Hi Experts,
    We are extracting data from R/3 to BW. We keep 2 years of data in DSO1 and move it into a cube (R/3 -> DSO1 -> Cube1); we have the 2007 and 2008 data in that DSO and cube. We extracted the 1999 to 2006 data from R/3 into a history DSO and sent it into an InfoCube (Cube2). The flow is as follows:
    Current data (2007 and 2008):
    R/3 -> DSO1 -> Cube1 (deltas are running)
    History data (1999 to 2006):
    R/3 -> DSO2 -> Cube2
    Now I want to move the 2007 data into the history flow (history DSO and cube).
    I have two options to get this job done:
    1. Move selective data from DSO1 to DSO2, and from DSO2 to Cube2.
    2. Move selective data from Cube1 to Cube2. In that case I can't see item-wise data; I can only see aggregated data in the cube.
    Is there any better approach than these two options? If not, which of the two would be best?
    Once I move the data into the history cube, I need to selectively delete it from the current DSO and current cube. If I delete the data in the DSO and cube, is there any impact on the current delta load?
    I need to do this every year, because we want to keep only 2 years of data in the current InfoCube. Could anyone throw some light on this?
    Thansks,
    Rani.

    Hi Rani,
    Your first question was whether to move the selective data via DSO1 -> DSO2 -> Cube2, or directly from Cube1 to Cube2 (where you feared only aggregated data would be visible).
    One thing to clarify: data in an InfoCube does not get aggregated until you build aggregates on the cube, so you can see the data item-wise as long as you don't aggregate the cube.
    Anyway, I think the best option is to follow the flow: move the selective data from DSO1 to DSO2, and from DSO2 to Cube2.
    Secondly, you asked whether selectively deleting the moved data from the current DSO and cube has any impact on the current delta load.
    Selective deletion of data is not going to affect delta loads, so you can go ahead with it.
    A delta load only gets affected if someone deletes a delta request without first setting the QM status to red; in that case the init flag is not set back and data is lost.
    Hope this helps.
    Regards,
    Debjani

  • How to define join in physical layer between cube and relational table

    Hi
    I have aggregated data in an Essbase cube. I want to supplement the information in the cube with data from a relational source.
    I read the article http://community.altiusconsulting.com/blogs/altiustechblog/archive/2008/10/24/are-essbase-and-oracle-bi-enterprise-edition-obiee-a-match-made-in-heaven.aspx, which describes how to do it.
    From the article I gather that I have to define a complex join in the physical layer between the cube imported from Essbase and my relational table.
    But when I use the Join Manager, I am only able to define joins between tables from the relational source, not with the imported cube.
    In my case I am trying to join the Risk dimension in the cube, based on risk_type_code (a Gen3 member), with risk_type_code in the relational table dt_risk_type.
    How can I create this join?
    Regards
    Dhwaj

    Hi
    This has worked: the BI server has joined the member from the Oracle database to the cube. So now, for a risk type ID defined in the cube, I can view the risk type code and risk type name from the relational DB.
    But if I now want the aggregated risk amount for a risk type ID, it brings back nothing. If I remove the join in the logical model, I get correct values. Is there a way to combine the physical cube with the relational model and still get the aggregated values from the cube?
    I have changed the aggregation of the Risk Amount column to SUM in place of AGGR_EXTERNAL, in both the logical and physical model.
    Regards,
    Dhwaj

  • Can I store only the aggregate data in OLAP cube

    Hi All,
    I know that OLAP cubes store the leaf data and then build the aggregate data on top of it, storing both within the cube. I have huge amounts of data (billions of rows in my fact table, plus 6-8 dimension tables). Keeping the leaf data along with the aggregate data inside the cube would make it too large to build.
    So I am thinking of storing only the aggregate data within the OLAP cube, while the leaf data is still read from the relational tables - something like hybrid OLAP.
    What I mean is:
    1. Create the dimensions and cube in AWM on 11g.
    2. Specify the levels at which the aggregate data should be calculated and stored in the OLAP cube.
    What I want is:
    1. Store only the aggregate data in the cube, with the leaf data still in the relational tables.
    2. When I read the cube and drill down to the leaf level, it should fetch the leaf-level data.
    Is this possible in Oracle OLAP? If yes, please provide some pointers.
    Thanks
    S

    First you should try storing and aggregating the data, to at least see whether the cube-load time, query time and AW size are within acceptable limits. 11g OLAP, especially 11gR2 OLAP, is very efficient on all three fronts.
    Regarding specifying levels, you can either use cost-based aggregation and pick the percentage of data that should be pre-aggregated, or use level-based aggregation and pick the levels from each dimension that should be pre-aggregated.
    Try these out before venturing into anything else; you may be surprised by the results. There are other ways, such as storing the data in multiple smaller cubes and joining those cubes through formulas. Make sure you don't mistake an attribute for a dimension.
    I'm not sure what you mean by storing only aggregated data in OLAP cubes. You can do a SUM ... GROUP BY ... on the relational data before loading it into the cube. For example, if your source data is at DAY level, you can SUM ... GROUP BY at MONTH level and then load the month-level data into a cube whose leaf level is MONTH.
    The SQL view (used by reporting tools) could then be a join between the month-level "OLAP" data and the day-level relational data. When you drill down, the view will pick up data from the appropriate place.
    One more thing: drill-through to detail data is also a capability of the reporting tools. If you are using OBIEE or Discoverer Plus OLAP, you can design the reports so that once the user reaches the OLAP leaf level, they are taken to a relational detail report.
    These are all just quick suggestions (on a Friday evening). If possible, get help from the Oracle OLAP Consulting Group, who can come up with a good design for all your reporting needs.
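The SUM ... GROUP BY idea above can be sketched in a few lines. This is a hedged Python illustration (column names and sample values are invented; in practice this would be SQL run before the cube load) of rolling day-level facts up to month level so the cube's leaf level becomes MONTH:

```python
# Illustrative pre-aggregation: SUM day-level facts up to month level before
# loading, so the cube only ever stores month-level (and higher) data.
from collections import defaultdict

def rollup_to_month(day_rows):
    """(yyyy-mm-dd, product, amount) triples -> {(yyyy-mm, product): total}."""
    monthly = defaultdict(float)
    for day, product, amount in day_rows:
        monthly[(day[:7], product)] += amount  # truncate the date to yyyy-mm
    return dict(monthly)

facts = [("2011-01-03", "P1", 10.0),
         ("2011-01-20", "P1", 5.0),
         ("2011-02-02", "P1", 7.0)]
print(rollup_to_month(facts))  # {('2011-01', 'P1'): 15.0, ('2011-02', 'P1'): 7.0}
```

The day-level detail then stays only in the relational table, which is exactly what the reporting-layer join or drill-through report reads below month level.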
