Why would we go for a RemoteCube?

Hi experts,
Why would we go for a RemoteCube instead of a BasicCube? What are the advantages of a RemoteCube, and in what circumstances would we go for one?

Hi,
the reasons are as follows.
Use SAP RemoteCubes if:
- You need very up-to-date data from an SAP source system
- You only access a small amount of data from time to time
- Only a few users execute queries simultaneously on the database
Do not use SAP RemoteCubes if:
- You request a large amount of data in the first query navigation step, and no appropriate aggregates are available in the source system
- A lot of users execute queries simultaneously
- You frequently access the same data
link:
http://help.sap.com/saphelp_nw04/helpdata/en/da/5909392a430303e10000000a114084/frameset.htm
Regards,
Senthil

Similar Messages

  • Why would we go for Queued delta instead of Unserialized and Direct delta?

    Hi Experts,
    Why would we go for Queued delta instead of Unserialized and Direct delta? Please specify the reasons for that.
    What happens internally when we use Queued delta or Direct delta?
    I will allocate points to those who help me in detail. My advance thanks to everyone who responds to my query.

    Hi,
    Direct Delta
    With this update mode, extraction data is transferred directly to the BW delta queues every time a document is posted. In this way, each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues. If you are using this method, there is no need to schedule a job at regular intervals to transfer the data to the BW delta queues. On the other hand, the number of LUWs per DataSource increases significantly in the BW delta queues because the deltas of many documents are not summarized into one LUW in the BW delta queues as was previously the case for the V3 update.
    If you are using this update mode, note that you cannot post any documents during delta initialization in an application, from the start of the recompilation run in the OLTP until all delta init requests have been successfully updated in BW. Otherwise, data from documents posted in the meantime is irretrievably lost. The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) A maximum of 10,000 document changes (creating, changing or deleting documents) are accrued between two delta extractions for the application in question. A (considerably) larger number of LUWs in the BW delta queue can result in terminations during extraction.
    b) With a future delta initialization, you can ensure that no documents are posted from the start of the recompilation run in R/3 until all delta-init requests have been successfully posted. This applies particularly if, for example, you want to include more organizational units such as another plant or sales organization in the extraction. Stopping the posting of documents always applies to the entire client.
    Queued Delta
    With this update mode, the extraction data for the affected application is compiled in an extraction queue (instead of in the update data) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update.
    In this way, up to 10,000 document delta extractions are cumulated into one LUW in the BW delta queues per DataSource, depending on the application.
    If you use this method, it is also necessary to schedule a job to regularly transfer the data to the BW delta queues ("update collective run"). However, you should note that the reports delivered with the logistics extract structures Customizing Cockpit are used during this scheduling. The scheduling is carried out with the same report that is used for the V3 update (RMBWV311, RMBWV312 or RMBWV313). There is no point in scheduling the RSM13005 report for this update method, since that report only processes V3 update entries. The simplest way to perform the scheduling is via the "Job control" function in the logistics extract structures Customizing Cockpit. We recommend that you schedule the job hourly during normal operation, that is, after successful delta initialization (see the sketch below).
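    For illustration only, here is a minimal ABAP sketch of creating such a job programmatically; the job name ZBW_COLLECTIVE_RUN_11 is an assumption, and in a real system you would normally just use the "Job control" function in the cockpit (transaction LBWE):
    " Hypothetical job running the collective run report for application 11
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZBW_COLLECTIVE_RUN_11',
          lv_jobcount TYPE tbtcjob-jobcount.

    " Create the background job
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.

    " Attach the update collective run report for application 11 (SD)
    SUBMIT rmbwv311 VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

    " Release the job; started immediately here, hourly in normal operation
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'.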
    In the case of a delta initialization, the document postings of the affected application can be included again after successful execution of the recompilation run in the OLTP (e.g. OLI7BW, OLI8BW or OLI9BW), provided that you make sure that the update collective run is not started before all delta init requests have been successfully updated in BW.
    In the posting-free phase during the recompilation run in the OLTP, you should execute the update collective run once (as before) to make sure that no old delta extraction data remains in the extraction queues when you resume posting of documents.
    Using transaction SMQ1 and the queue names MCEX11, MCEX12 or MCEX13 you can get an overview of the data in the extraction queues.
    If you want to use the functions of the logistics extract structures Customizing cockpit to make changes to the extract structures of an application (for which you selected this update method), you should make absolutely sure that there is no data in the extraction queue before executing these changes in the affected systems. This applies in particular to the transfer of changes to a production system. You can perform a check when the V3 update is already in use in the respective target system using the RMCSBWCC check report.
    In the following cases, the extraction queues should never contain any data:
    - Importing an R/3 Support Package
    - Performing an R/3 upgrade
    For an overview of the data of all extraction queues of the logistics extract structures Customizing Cockpit, use transaction LBWQ. You may also obtain this overview via the "Log queue overview" function in the logistics extract structures Customizing cockpit. Only the extraction queues that currently contain extraction data are displayed in this case.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) More than 10,000 document changes (creating, changing or deleting a document) are performed each day for the application in question.
    b) In future delta initializations, you want to reduce the posting-free phase to the execution of the recompilation run in R/3. The document postings can be included again once the delta init requests have been posted in BW. Of course, the conditions described above for the update collective run must be taken into account.
    Un-serialized V3 Update
    Note: Before PI Release 2002.1 the only update method available was V3 Update. As of PI 2002.1 three new update methods are available because the V3 update could lead to inconsistencies under certain circumstances. As of PI 2003.1 the old V3 update will not be supported anymore.
    With this update mode, the extraction data of the application in question continues to be written to the update tables using a V3 update module and is retained there until the data is read and processed by a collective update run.
    However, unlike the current default (serialized V3 update), the data is read in the update collective run without taking the sequence in the update tables into account, and is then transferred to the BW delta queues.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method since serialized data transfer is never the aim of this update method. However, you should note the following limitation of this update method:
    The extraction data of a document posting, where update terminations occurred in the V2 update, can only be processed by the V3 update when the V2 update has been successfully posted.
    This update method is recommended for the following general criteria:
    a) Due to the design of the data targets in BW and for the particular application in question, it is irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence in which the data was generated in R/3.
    Thanks,
    JituK

  • Is extractor 0FI_GL_4 (G/L line items) available/compatible for a remote cube?

    Hi,
    Is 0FI_GL_4 available/compatible for a remote cube? I changed the VIRTCUBE parameter from space to '1'. But when I regenerate the DataSource in RSA6, the VIRTCUBE parameter value comes back to space. In RSA6 I always see the value 'D' (direct access).
    Thanks

    We don't have EHP3, so I can't use 0FI_GL_14.
    Can we really compare 0FI_GL_14 and 0FI_GL_10?
    0FI_GL_10 does not extract on document level, but I need information (work breakdown structure element (PROJK)) on document level.
    So I need both fields in the extract structure: "Accounting document number (BELNR)" and "Work breakdown structure element (PROJK)".
    0FI_GL_10 contains neither of them.

  • How to make use of 3EC_CS_1R for remote cube.

    hi all,
    I have activated 3EC_CS_1A for consolidation reports but would like to use the 3EC_CS_1R DataSource for a remote cube. We have not installed SEM-BCS on the BI system; when trying to use 3EC_CS_1R on the BI system, it complains that the DataSource can only be used by SEM-BCS. What is the best way to utilise this DataSource?
    Do I need to install SEM-BCS in order to use this DataSource? Can I just install the BCS component only?
    I am slightly confused as to where SEM-BCS should be installed: on the BI or the ECC system?
    cheers.

    no longer relevant

  • Transformations for Remote cube?

    hi experts,
    Can we create transformations for a remote cube?
    Please provide me the steps to create a remote cube.
    regards
    venuscm

    Hi,
    Yes, you can create transformations for a remote cube. A remote cube is similar to a standard cube, but during creation you select the radio button for a remote/virtual cube.
    For the steps to create a remote cube, check the help link:
    http://help.sap.com/saphelp_nw04/helpdata/en/da/5909392a430303e10000000a114084/frameset.htm
    Hope this helps.
    thanks,
    Rahul

  • Getting unknown errors while running ListCube for a remote cube

    Hi, we are trying to run ListCube for a remote cube. We are trying to access Oracle data by creating views and pulling them into BW using a remote cube through UD Connect.
    We are getting unknown errors like the ones below:
    Messages for DataSource 6BTV_HISTORY_FXD_CST1 from source system DB2CX
    System error: Message type   is unknown.                             
    Error reading the data of InfoProvider ZR_CHFCB 
    If anyone has faced a similar issue, please reply back.

    A remote cube doesn't store data in BW; it only accesses data from the remote system and shows it in a report. The ListCube transaction needs to find a data table in order to show the information, but in this case the remote cube doesn't contain anything.
    Regards

  • How to Create Process Chain for Remote Cube.

    Hi,
    I created a remote InfoCube with data from ECC, and it loaded successfully. Now I want to create a process chain for the remote cube. Please help me.
    Thanks,
    Nandish Gowda

    Hi Nandish,
    After initialization, the next delta will come as per your delta nature; if it is 0CALDAY, then your next delta will come tomorrow only.
    If you are able to see 7 records in RSA3, why not in the PSA (why only 5 records)? Analyze this closely.
    Sometimes RSA3 doesn't exactly match the PSA, because RSA3 works in update mode F.
    Regards,
    Suman

  • When we have LSMW for migrating data, why would we go for Session/Call Transaction?

    Hi Guru's,
    Could you please tell me ...
    When we have LSMW for migrating data, why would we go for session/call transaction for migration? When we do it with LSMW we can complete the object in less time, so why would we do it with session/call transaction?
    When should we use LSMW, and when should we use SESSION/CALL TRANSACTION?
    thanks in advance..
    vardhan

    LSMW is not suited to uploading very large amounts of data into the database,
    whereas the BDC SESSION/CALL TRANSACTION methods can upload large amounts of data into the database.
    The error capture method is superior in BDC.
    BDC programs can also be scheduled to run on a periodic basis as per the customer's requirements. (See the sketch below.)
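    To illustrate the call transaction method, here is a minimal, hedged ABAP sketch that fills the material number on the initial screen of MM02 (program SAPLMGMM, screen 0060, field RMMG1-MATNR); the material number itself is purely illustrative:
    DATA: lt_bdcdata TYPE TABLE OF bdcdata,
          ls_bdcdata TYPE bdcdata,
          lt_msgs    TYPE TABLE OF bdcmsgcoll.

    " Mark the start of the transaction's initial screen
    ls_bdcdata-program  = 'SAPLMGMM'.
    ls_bdcdata-dynpro   = '0060'.
    ls_bdcdata-dynbegin = 'X'.
    APPEND ls_bdcdata TO lt_bdcdata.

    " Fill a field on that screen (the material number is illustrative)
    CLEAR ls_bdcdata.
    ls_bdcdata-fnam = 'RMMG1-MATNR'.
    ls_bdcdata-fval = 'TEST-MATERIAL'.
    APPEND ls_bdcdata TO lt_bdcdata.

    " Mode 'N' = background processing; messages are collected in lt_msgs,
    " which is the superior error capture mentioned above
    CALL TRANSACTION 'MM02' USING lt_bdcdata
                            MODE 'N'
                            UPDATE 'S'
                            MESSAGES INTO lt_msgs.
    In the session (SM35) variant, the same BDCDATA table would instead be passed to BDC_OPEN_GROUP / BDC_INSERT / BDC_CLOSE_GROUP and processed later as a batch input session.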

  • Function module for Remote cube

    Hi all
    I just want to know: does the remote cube work for a DataSource that is based on a function module?
    Regards
    Preeti

    Preeti,
    The remote cube works with any type of DataSource; once the DataSource has been replicated into the BI system, it is a BI DataSource like any other.
    At extraction time, however, the DataSource fetches the data from the tables specified in the function module, applying the module's logic to the data as it is read.
    So the logic applies to the data, not to the fields. A hedged sketch of such an extractor follows.
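    As a rough sketch of such an extractor, modeled on the SAP template RSAX_BIW_GET_DATA_SIMPLE (the function name and the use of the demo table SFLIGHT as extract structure are assumptions for illustration):
    FUNCTION z_biw_get_data_sflight.
    " Interface as generated when copying RSAX_BIW_GET_DATA_SIMPLE:
    "   IMPORTING  VALUE(I_REQUNR)   TYPE SRSC_S_IF_SIMPLE-REQUNR
    "              VALUE(I_DSOURCE)  TYPE SRSC_S_IF_SIMPLE-DSOURCE OPTIONAL
    "              VALUE(I_MAXSIZE)  TYPE SRSC_S_IF_SIMPLE-MAXSIZE OPTIONAL
    "              VALUE(I_INITFLAG) TYPE SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
    "   TABLES     I_T_SELECT TYPE SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
    "              I_T_FIELDS TYPE SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
    "              E_T_DATA   STRUCTURE SFLIGHT OPTIONAL
    "   EXCEPTIONS NO_MORE_DATA ERROR_PASSED_TO_MESS_HANDLER

      " Statics survive across the repeated calls of one extraction request
      STATICS: s_cursor        TYPE cursor,
               s_first_call(1) TYPE c VALUE 'X'.

      IF i_initflag = 'X'.
        " Initialization call: selections arrive in I_T_SELECT and could be
        " stored in further statics here; no data is returned yet
        RETURN.
      ENDIF.

      IF s_first_call = 'X'.
        " First data call: open a cursor on the source table; for a remote
        " cube this runs at query navigation time, so the logic here is
        " applied to the data on every request
        OPEN CURSOR WITH HOLD s_cursor FOR
          SELECT * FROM sflight.
        CLEAR s_first_call.
      ENDIF.

      " Return at most I_MAXSIZE rows per call
      FETCH NEXT CURSOR s_cursor
        APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
        PACKAGE SIZE i_maxsize.
      IF sy-subrc <> 0.
        CLOSE CURSOR s_cursor.
        RAISE no_more_data.
      ENDIF.
    ENDFUNCTION.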

  • Error in ListCube for Remote cube - Error in substep

    Hi All,
    To start with the details: I have a 7.0 version DataSource in APO, and I've created a remote cube / VirtualProvider based on a data transfer process for direct access, with a transformation and some routines in the rules.
    I also have an "End Routine", as shown below:
    DATA rslt_pkg_row TYPE tys_tg_1.
    LOOP AT result_package INTO rslt_pkg_row.
      rslt_pkg_row-/bic/ztest_qty = 2.
      MODIFY result_package FROM rslt_pkg_row.
    ENDLOOP.
    My ListCube runs fine and gives the required data without the above end routine for ZTEST_QTY.
    When I add the above code to my end routine to populate data for ZTEST_QTY, it throws the error message:
    Error reading the data of InfoProvider
    Error in substep.
    Could someone please throw some light on this?
    Thanks
    KI

    Modify your code to this:
    CONSTANTS c_2(1) TYPE n VALUE '2'.
    " Assign each row to the field symbol and change it in place
    LOOP AT result_package ASSIGNING <result_fields>.
      <result_fields>-/bic/ztest_qty = c_2.
    ENDLOOP.
    Since the field symbol <result_fields> is already declared automatically in the end routine, you can loop with ASSIGNING and change each row in place; there is no need to define a work area and MODIFY the internal table with the data from the work area.

  • BAPIs for remote cube - very urgent

    Hi,
    I want to activate the cubes 0FIGL_R10 and 0TRCM_RC1. Can anyone guide me on what needs to be done? I have activated the cubes from Business Content, but when I try to execute the query it does not run.
    Please help, as it is very urgent for me.
    Regards
    Baljit Singh

    Have a look at this:
    Need step by step process for creating remote cube
    Regards

  • Error reading the data from the remote cube

    Hi all,
    When we try to get the data for the remote cube from LISTCUBE, we get the following messages:
    1) Messages for DataSource 9AUPA_DP_HK_01 from source system AD1CLNT100
    2) 224(/SAPAPO/TSM): No planning version selected
    3) Errors have occurred while extracting data from DataSource 9AUPA_DP_HK_01
    4) Error reading the data of InfoProvider UICHKRMTC.
    Any Inputs?

    Hi,
    Check whether the source system is responding.
    Also, look at the error message "224(/SAPAPO/TSM): No planning version selected":
    it seems you have not selected any planning version. Give a planning version in the ListCube selection screen and execute.
    Regards,
    Ravi Kanth

  • Source system Assignment for Virtual Cube

    HI,
    I transported an InfoSource and transfer rules to production, but the source system was not transported. If I am right, I think I need to assign the source system (PRD) manually. But in PRD all objects are not changeable, so how can I assign the source system to the InfoSource?
    Regards
    Rajini

    HI,
    I transported the InfoSource successfully to production, but when I try to transport the transfer rules, I get a warning that the source system is not transportable. This happens with the remote cube only; all other normal transports are fine.
    I think there is a special way to transport transfer rules for a remote cube. Please help me.
    Rajini

  • Authorization for running report based on Remote Cube

    Hi ,
    I have created a report based on a remote cube in BI 7.0.
    I have created a DTP with direct access to load the remote cube.
    While running the report based on the remote cube, I am getting the error
    'You are not authorized for the DTP'.
    The report runs only when I assign the '*' value to the authorization objects for the role in which the report is stored:
    S_RS_DTP = * (For DTP)
    S_RS_TR = *   (For Transformation)
    S_RS_DS = * (For datasource)
    I want to restrict the values to the objects related to my report only, rather than '*', which means ALL.
    Please help what values to assign for these authorization objects for the Role.
    Thanks.

    Hi,
    You need to make the necessary settings in RSECADMIN for the authorization objects. The default values will be '*'; you have to customize them to suit your requirement.
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/101fb4f5-eb7c-2c10-5daa-b479c47f0a14
    Regards,
    Suman

  • Steps for creating a SAP Remote cube.

    Hello Experts,
    Can you anyone list the steps for creating a SAP Remote cube please?
    Any step by step guide will be of really great help.
    Thanks in advance.
    Regards,
    Kumar.

    Creating VirtualProviders Based on Data Transfer Processes
    Prerequisites
    If you are using a DataSource as the source for a VirtualProvider, you have to allow direct access to this DataSource.
    Procedure:
    1. In the Data Warehousing Workbench under Modeling, choose the InfoProvider tree.
    2. In the context menu, choose Create VirtualProvider.
    3. As the type, select VirtualProvider based on data transfer process for direct access.
    In terms of compatibility, a VirtualProvider that is based on a data transfer process with direct access can also be connected to an SAP source system using a 3.x InfoSource.
    The Unique Source System Assignment indicator controls whether this source system assignment needs to be unique. If the indicator is set, you can select a maximum of one source system in the assignment dialog. If the indicator is not set, you can select multiple source systems. In this case, the VirtualProvider acts like a MultiProvider.
    If the indicator is not set, characteristic 0LOGSYS is automatically added to the VirtualProvider when it is created. In the query, this characteristic allows you to select the source system dynamically: in each navigation step, the system only requests data from the assigned source systems whose logical system name fulfills the selection condition for characteristic 0LOGSYS.
    4. Define the VirtualProvider by transferring the required InfoObjects. Activate the VirtualProvider.
    5. In the context menu of the VirtualProvider, select Create Transformation. Define the transformation rules and activate them.
    6. In the context menu of the VirtualProvider, select Create Data Transfer Process. DTP for Direct Access is the default value for the DTP type. Select the source for the VirtualProvider. Activate the data transfer process. See Creating Data Transfer Processes for Direct Access.
    7. Activate direct access: in the context menu of the VirtualProvider, select Activate Direct Access. In the dialog box that appears, choose one or more data transfer processes and select Save Assignments.
    Check this for little extra info:
    /thread/142088 [original link is broken]
