Remote cube: bad performance of query in source system

I have created a remote cube that points to a simple R/3 view over VBAK/VBAP.
In a query on this cube I select by document date; VBAK has a dedicated index on the document date.
It appears that my date variable is not included in the SQL statement executed in the source system, which means every query triggers a full table scan on VBAK/VBAP.
The statement generated in the source system by executing my BW query looks like this (the document date is AUDAT):
SELECT
  "MANDT" , "VBELN" , "POSNR" , "VBTYP" ,
  "AUART" , "AUDAT" , "VKORG" , "VTWEG" ,
  "SPART" , "VKGRP" , "VKBUR" , "NETWR" ,
  "WAERK" , "NTGEW" , "GEWEI" , "KWMENG" ,
  "VRKME" , "SHKZG"
FROM
  "ZBW_2LIS11ITM"
WHERE
  "MANDT" = :A0#
Note that the WHERE clause contains only MANDT and no restriction on AUDAT. My question is: is it normal that a remote cube always triggers full table scans? That strikes me as a waste of system resources. Or is it a bug?

Hi,
if you are using ABAP routines to populate fields of your RemoteCube, you should provide 'inverse routines' as well. According to the SAP documentation, these allow filters to be applied in the source system instead of forcing full table scans (a sketch follows below).
An example can be found here: [http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/f5af4a4db53482e10000000a1553f6/content.htm]
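For illustration, here is roughly what such an inverse routine can look like. This is only a minimal sketch: BW generates the actual FORM skeleton when you flag the transfer-rule field for inversion, so the parameter names and types below are simplified stand-ins; the field names are taken from this thread.
FORM invert_audat
  USING    i_t_query_sel TYPE sbiwa_t_select   " selections from the BW query
  CHANGING c_t_ds_sel    TYPE sbiwa_t_select   " selections passed to the DataSource
           e_exact       TYPE rs_bool.

  DATA l_s_sel TYPE sbiwa_s_select.

  " Map every restriction on the document-date characteristic onto the
  " DataSource field AUDAT, so it ends up in the WHERE clause executed
  " in the source system and the index on AUDAT can be used.
  LOOP AT i_t_query_sel INTO l_s_sel WHERE fieldnm = '0DOC_DATE'.
    l_s_sel-fieldnm = 'AUDAT'.
    APPEND l_s_sel TO c_t_ds_sel.
  ENDLOOP.

  " Declare the inversion as exact; otherwise BW treats the selection as
  " not invertible and requests the data unrestricted (full table scan).
  e_exact = rs_c_true.
ENDFORM.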
Best regards
Christian

Similar Messages

  • BAdI implementation FIAA_BW_DELTA_UPDATE inactive in source system DEV

    Hi all.
    I'm trying to execute an InfoPackage to load 0ASSET (texts, delta initialization), but it fails with the error message: "BAdI implementation FIAA_BW_DELTA_UPDATE inactive in source system DEV".
    When I check this BAdI implementation in the source system (transaction SE19), however, it shows no errors:
    "BAdI implementation FIAA_BW_DELTA_UPDATE does not contain any errors".
    Does somebody know what's happening?

    Hi,
    If you are on version 3.1, this BAdI has an issue.
    Kindly refer to the following note:
    [SAP Note 590034 Deactivating implementation FIAA_BW_DELTA_UPDATE|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_bct/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d353930303334%7d]
    Cheers...

  • API to access query structure / bad performance Bex query processor

    Hi, we are using a big P&L query structure. Each node of the query structure selects an account hierarchy node.
    This setup makes performance incredibly bad. The BEx query processor caches and selects per structure node, which creates an awful mass of unnecessary SQL statements. (It would be more useful to merge the SQL statements as far as possible, with a GROUP BY on account, to generate fewer, bigger statements; see the sketch after this question.)
    The structure is necessary to cover percentage calculations in the query; the hierarchy is used to “calculate” subtotals by selecting different nodes on different levels.
    I am now searching for a different approach to cover the reporting requirement, or for an API that generates smaller query structures per P&L area out of the master structure. Is there any class to access the query structure?
    We already tried generating data entries per node level (duplicating one data record per node where it appears, with a characteristic for the node name), but this approach generates too many data records.
    Not using hierarchy nodes would make maintenance terrible. An API to change the structure would also be useful for generating "hard" selections in the structure out of the hierarchy.
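    To make the merging idea concrete, here is an ABAP sketch of the pattern described above; the table ZFI_TOTALS and its fields are invented for illustration.
    TYPES: BEGIN OF ty_sum,
             racct  TYPE racct,                  " account number
             amount TYPE p LENGTH 15 DECIMALS 2, " aggregated amount
           END OF ty_sum.
    DATA: lt_sum          TYPE STANDARD TABLE OF ty_sum,
          lr_all_accounts TYPE RANGE OF racct.   " union of all node intervals
    " One grouped SELECT over the union of all node intervals, instead of
    " one SELECT per structure node:
    SELECT racct SUM( amount )
           INTO TABLE lt_sum
           FROM zfi_totals
           WHERE racct IN lr_all_accounts
           GROUP BY racct.
    " Node subtotals are then derived in memory by assigning each account
    " to the hierarchy node(s) whose interval contains it.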

    The problem came from a faulty development of an exit variable used in analysis authorization.
    Edited by: SSE-BW-Team SSE-BW-Team on Feb 28, 2011 1:46 PM

  • Bad performing update query - converting from sqlserver to oracle

    SQLSERVER UPDATE
    =================
    UPDATE
    netVIEWplus.dbo.DIM_OUC_Latest
    SET
    OUC = LatestDIM.OUC,
    OUC_Desc = LatestDIM.OUC_Desc,
    OUC_Level = LatestDIM.OUC_Level,
    Parent_OUC = LatestDIM.Parent_OUC,
    CC_Type = LatestDIM.CC_Type,
    GFR = LatestDIM.GFR,
    CORP = LatestDIM.CORP,
    SOB = LatestDIM.SOB,
    Div_Unit = LatestDIM.Div_Unit,
    Div_Desc = LatestDIM.Div_Desc,
    L1_OUC = LatestDIM.L1_OUC, L1_DEPT_DESC = LatestDIM.L1_DEPT_DESC,
    L2_OUC = LatestDIM.L2_OUC, L2_DEPT_DESC = LatestDIM.L2_DEPT_DESC,
    L3_OUC = LatestDIM.L3_OUC, L3_DEPT_DESC = LatestDIM.L3_DEPT_DESC,
    L4_OUC = LatestDIM.L4_OUC, L4_DEPT_DESC = LatestDIM.L4_DEPT_DESC,
    L5_OUC = LatestDIM.L5_OUC, L5_DEPT_DESC = LatestDIM.L5_DEPT_DESC,
    L6_OUC = LatestDIM.L6_OUC, L6_DEPT_DESC = LatestDIM.L6_DEPT_DESC,
    L7_OUC = LatestDIM.L7_OUC, L7_DEPT_DESC = LatestDIM.L7_DEPT_DESC,
    L8_OUC = LatestDIM.L8_OUC, L8_DEPT_DESC = LatestDIM.L8_DEPT_DESC,
    Current_Flag = LatestDIM.Current_Flag,
    INF_Div_Unit_Only = LatestDIM.INF_Div_Unit_Only,
    INF_DIM_OUC_Id_Used = LatestDIM.DIM_OUC_Id
    FROM
    netVIEWplus.dbo.DIM_OUC_Latest
    INNER JOIN
    netVIEWplus.dbo.DIM_OUC HistoryDIM
    ON
    HistoryDIM.DIM_OUC_Id = netVIEWplus.dbo.DIM_OUC_Latest.DIM_OUC_Id
    INNER JOIN
    netVIEWplus.dbo.DIM_OUC LatestDIM
    ON
    LatestDIM.DIM_OUC_ID = HistoryDIM.Latest_Id
    Oracle Version
    ===========
    UPDATE
    netVIEWplus.DIM_OUC_Latest T1
    SET (OUC,OUC_Desc,OUC_Level,Parent_OUC,CC_Type,GFR,CORP,SOB,Div_Unit,Div_Desc,L1_OUC,L1_DEPT_DESC,L2_OUC,
    L2_DEPT_DESC,L3_OUC,L3_DEPT_DESC,L4_OUC,L4_DEPT_DESC,L5_OUC,L5_DEPT_DESC,L6_OUC,L6_DEPT_DESC,L7_OUC,L7_DEPT_DESC,
    L8_OUC,L8_DEPT_DESC,Current_Flag,INF_Div_Unit_Only,INF_DIM_OUC_Id_Used) =
    ( SELECT LatestDIM.OUC OUC,LatestDIM.OUC_Desc OUC_Desc,
    LatestDIM.OUC_Level OUC_Level,LatestDIM.Parent_OUC Parent_OUC,LatestDIM.CC_Type CC_Type,LatestDIM.GFR GFR,
    LatestDIM.CORP CORP,LatestDIM.SOB SOB,LatestDIM.Div_Unit Div_Unit,LatestDIM.Div_Desc Div_Desc,
    LatestDIM.L1_OUC L1_OUC,LatestDIM.L1_DEPT_DESC L1_DEPT_DESC,LatestDIM.L2_OUC L2_OUC,
    LatestDIM.L2_DEPT_DESC L2_DEPT_DESC,LatestDIM.L3_OUC L3_OUC,LatestDIM.L3_DEPT_DESC L3_DEPT_DESC,
    LatestDIM.L4_OUC L4_OUC,LatestDIM.L4_DEPT_DESC L4_DEPT_DESC,LatestDIM.L5_OUC L5_OUC,
    LatestDIM.L5_DEPT_DESC L5_DEPT_DESC,LatestDIM.L6_OUC L6_OUC,LatestDIM.L6_DEPT_DESC L6_DEPT_DESC,
    LatestDIM.L7_OUC L7_OUC,LatestDIM.L7_DEPT_DESC L7_DEPT_DESC,LatestDIM.L8_OUC L8_OUC,
    LatestDIM.L8_DEPT_DESC L8_DEPT_DESC,LatestDIM.Current_Flag Current_Flag,
    LatestDIM.INF_Div_Unit_Only INF_Div_Unit_Only,LatestDIM.DIM_OUC_Id INF_DIM_OUC_Id_Used
    FROM
    netVIEWplus.DIM_OUC HistoryDIM,netVIEWplus.DIM_OUC LatestDIM
    where
    HistoryDIM.DIM_OUC_Id = T1.DIM_OUC_Id
    and
    LatestDIM.DIM_OUC_ID = HistoryDIM.Latest_Id and rownum=1)
    where exists ( SELECT 1
    FROM
    netVIEWplus.DIM_OUC HistoryDIM,netVIEWplus.DIM_OUC LatestDIM
    where
    HistoryDIM.DIM_OUC_Id = T1.DIM_OUC_Id
    and
    LatestDIM.DIM_OUC_ID = HistoryDIM.Latest_Id )
    The problem is that the Oracle update takes much longer than the SQL Server update. Can it be written in some other way?
    Regards,
    Koushik

    Hi,
    Have you gathered statistics? Does the query use the index(es)?
    Without more information (explain plan, indexes, etc.), no further help is possible.
    Nicolas.

  • Strategy for ensuring replication is complete before initating query from source system

    Hi,
    I am using HANA in a side-car scenario: reports running in SAP ECC are accelerated by querying replicated tables in SAP HANA instead. This works well; however, I don't have a good mechanism to validate, before running a report, whether the underlying data has already been replicated to HANA or is still queued up.
    Often users would want to run the reports soon after large data changes have been made in the source tables. It is unknown, based on the overall workload, how long it might take for the most recently written records to get replicated to the corresponding HANA table.
    What is a good-practice approach to handle this? I have seen separate threads on doing record counts between HANA and ECC; I think that is not a good idea at all. First, for large tables, the time overhead of doing the record count is very large: in the time it takes me to query the record count in ECC, the HANA report could have run ten times over. More importantly, for very large source tables I may have opted to replicate only the more recent data, leaving old historical data in ECC un-replicated.
    I know this is not a new problem. SAP must have already addressed it a number of ways for their own delivered application accelerators. The COPA accelerator for instance must be doing something along these lines. Possibly querying most recent records in ECC and comparing them to the most recent records in HANA for the same tables might be a way to go.
    Does anyone else have insights into how to best approach this? Does SLT expose a mechanism to check whether replication is completed for any given table?
    thanks,
    Nitin Goel

    Hi Nitin,
    SLT can never confirm that replication is complete for a given table. Replication is a continuous process: whatever is in the logging table in ECC will be replicated via SLT.
    So the best way to check that replication has caught up is:
    1. Go to SE16 (ECC) and enter the logging table whose replication you want to check (copy the logging-table name from LTRC). If the number of entries is 0, nothing is waiting in the logging table; a small ABAP sketch of this check follows below.
    You can also verify in LTRC (SLT), under Expert Functions, whether replication is working correctly.
    2. Then go to SE16 (ECC) and check the number of entries in the original table.
    3. Check in HANA: the number of entries should be the same as in the source table.
    It takes no more than a minute to verify replication for each table.
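    If you want to script the step-1 check instead of using SE16, here is a minimal ABAP sketch. The default table name is only a placeholder; copy the real logging-table name from LTRC.
    REPORT zslt_check_logtab.
    " Logging-table name as shown in LTRC (the default is a placeholder).
    PARAMETERS p_tab TYPE tabname DEFAULT '/1CADMC/00000001'.
    DATA l_count TYPE i.
    " An empty logging table means replication has caught up for this table.
    SELECT COUNT(*) FROM (p_tab) INTO l_count.
    IF l_count = 0.
      WRITE / 'Logging table is empty - replication has caught up.'.
    ELSE.
      WRITE: / l_count, 'record(s) still waiting to be replicated.'.
    ENDIF.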
    Regards,
    Joydeep.

  • Data flow not visible while creating remote cube.

    Hi SDN,
    I am working on a remote cube. I need to link the InfoSource to the remote cube, and I have selected and assigned the source system.
    I then chose Remote cube --> Context Menu --> Show Data Flow, expecting to see the source system, InfoSource, and remote cube, but could not find them.
    Please guide me on what I've missed or where I went wrong.
    Thanks in Advance
    Ankit

    Hi,
    A remote cube is a technology where you report on data that remains stored in the source system. While creating the InfoCube you have to select the Remote InfoCube radio button, and the next screen will ask you for the InfoSource.
    You can then assign the DataSource to that InfoSource via the InfoSource menu.
    After that you will see the data flow.
    Hope it works,
    Regards,
    Sasi

  • BI 7.0: Source System upgrade from R/3 Enterprise to ECC 6.0

    Background:
    I am relatively new to BW team and will be going through my 1st source system upgrade.
    We currently have BI 7.0 SPS 17 connected to source system R/3 Enterprise EP1.
    We are upgrading the source system to ECC 6.0.
    In Development and QA Environments:
    we will have access to both the old (R/3 Enterprise) and the new (ECC 6.0) source systems.
    We have an opportunity to compare the BW objects, data flow, and data loading from
    both source systems.
    In Production, however, we will just upgrade over a three-day downtime.
    One question that comes to mind in this regard:
    What sort of things should we be checking for before and after the upgrade in the Dev/QA and Prod environments, and what tools are available to help with the analysis and validation process?
    Some of the suggestions that were given to me include following points.
    Before Upgrade:
    1. Check data, taking pre-upgrade images for comparison with the post-upgrade state.
    2. Perform a proper analysis of the source-system-related BW objects.
      - A complete listing of actively used DataSources, etc.
    3. Empty the delta queues:
    make sure that all existing deltas are loaded into BW. The last delta request must not deliver any data and must be green in the monitor.
    4. Stop process chains in BI (remove them from the schedule) and collection runs on the R/3 side.
    After Upgrade:
    1. After the OLTP upgrade, refer to Note 458305 and the other notes relevant to the actual upgrade
    (depending on the R/3 system release / R/3 plug-in to BI plug-in compatibility).
    2. Check the logical system connections: transactions SMQS, RSMO and SM59. If there is no access, program RSRFCPIN_NEW can be used for the RFC test.
    3. Check and/or activate the control parameters for data transfer: SBIW ---> General Settings --->
    Maintain Control Parameters for Data Transfer
    http://help.sap.com/saphelp_nw70/helpdata/EN/51/85d6cf842825469a51b9a666442339/frameset.htm
    4. Check for changes to the extract structures in the LBWE Customizing Cockpit
        (OSS Notes 328181, 396647, 380078 and 762951).
    5. Check that all required transfer structures are active; see OSS Note 324520 for mass activation.
    6. Check that all source-system-related BW objects are active: transfer rules, communication structures, update rules, DTPs, etc.
    Below is a link to some useful programs in this regard.
    https://www.sdn.sap.com/irj/scn/wiki?path=/pages/viewpage.action&pageid=35458
    7. Test all important DataSources using RSA3 and check for OLTP DataSource changes.
    As soon as BW notices that the time stamp of the DataSource in the OLTP is newer than the one in the transfer structure, it requests replication of the DataSource and activation of the transfer structure. Replicate the relevant DataSources only if required, and transfer only the ones that have changed (RSA5 -> Delta).
    8. Create data flow objects (transfer rules, InfoPackages, transformations, DTPs) for the replicated
    new/changed DataSources, if needed.
    9. Check all CMOD enhancements.
    If a customer exit is used with an extractor in the OLTP, see Note 393492.
    10. Check for Unicode issues (all custom programs and function modules used by DataSources).
    11. Check all the queues in RSA7, start the delta runs and test data consistency.
    For delta problems: in the BW system, run program RSSM_OLTP_INIT_DELTA_UPDATE for the affected DataSource and (OLTP) source system; the init selections are then transferred from BW into the ROOSPRMSC and ROOSPRMSF tables in the source system so that the delta can continue.
    12. Take a backup of data posted during the upgrade for contingency planning.
    13. Run the entire set of process chains once, if possible, and let them pick up no data or just the usual master data.
    Since this forum has a lot of experts who have probably gone through such a scenario many times, please advise if I have stated anything incorrectly or am missing additional steps, OSS Notes, or other important details.

    Thanks, Rav, for your detailed post and for the very helpful contribution of all the information you had regarding the upgrade.
    We have a similar scenario:
    We are upgrading our source system from 4.7 to ECC 6.0. Our BI system is on BI 7.0 at support pack 19 (SAPKW70019).
    Our ECC deployment strategy:
    In development we copied our old DEV 4.7 system DXX to a new ECC system DXY (a new system ID).
    In production we are going to upgrade the same system PRXX in place from 4.7 to ECC.
    We are now in the testing phase of ECC with all interfaces, including BI, in Dev.
    My questions are below:
    How can we switch the mapping of all DataSources in our Dev BW system BID from the old Dev system DXX (e.g. logical system LOGDXX0040) to the new ECC Dev system DXY (e.g. logical system LOGDXY0040)? We do not want to create new DataSources or change all transfer rules/InfoSources for the old BW 3.x solutions in our BI system.
    Also, in the new ECC source-system copy we see all DataSources in red in transaction RSA7. Do we need to re-initialize all the DataSources from our BW system BID?
    Is there an easy way to handle this scenario?
    Please let me know if you have any further helpful information from your ECC 6.0 upgrade connecting to a BI 7.0 system.
    I have found some other links with pieces of information on the topic:
    Upgrade R/3 4.6C to ECC 6.0 already in BI 7.0
    http://sap.ittoolbox.com/groups/technical-functional/sap-bw/sap-r3-migration-and-sap-bw-version-1744205
    BI 7.0: Source System upgrade  from R/3 Enterprise to ECC 6.0
    Re: ECC 6.0 Upgrade
    Re: Impact of ECC 6.0 upgrade on BI/BW Environment.
    ECC 5.0 to ECC 6.0 upgrade
    Thanks
    Prasanth

  • Error Infosource is not defined in the source system

    I am extracting data into an ODS. It was running perfectly until now, but today I am getting this error for every package I run, whether transaction data or master data.
    I replicated the DataSource and activated all the objects and the transfer structure, but I still get the same error.
    I even created a new DataSource and InfoSource, but the error persists. Please help me with this matter.
    Thanks

    The data is extracted from the BW system itself (a 'myself' source system).
    There are three tables linked to each other; I created an InfoSet query and a DataSource on that InfoSet query.
    The source system is the same BW system.
    The detailed error report is:
    InfoSource  is not defined in the source system.
    Message no. R3005 
    Diagnosis
    The InfoSource  specified in the data request is not defined in the source system.
    System response
    The data transfer is terminated.
    Procedure
    In the Administrator Workbench of the Business Information Warehouse, update the metadata for this source system, and delete the InfoPackages belonging to InfoSources that no longer exist.

  • Query performance on Multiprovider(Remote Cube)

    Hi All,
    I have to improve the performance of a query built on a MultiProvider.
    This MultiProvider is designed from several remote cubes, but for this report the data comes through one remote cube from R/3.
    In the filter I have that one remote cube, which brings the data from R/3.
    In ST03 the statistics are:
    %Init time - 0, %DB time - 0, %OLAP time - 16.67, %Frontend - 83.33.
    Now I have to improve the frontend elapsed time.
    Could you please guide me?
    Thanks
    Srinivas

    Hi Srinivas,
    Please see this document
    https://websmp105.sap-ag.de/~sapidb/011000358700001394912002
    And this Discussion Thread
    Re: Deactivate Hierarchy symbols in excel
    See whether this is helpful in case of Remote Cubes.
    Thanks
    CK

  • Help required for a Remote cube query

    Hi,
    We are working on a remote cube whose source is a view built on R/3 base tables. Because of the huge data volume in the table, I need to restrict the extraction to BW to the current date (for performance reasons). I used an exit on R/3 to restrict to the current date, and the extractor checker showed only the valid data, i.e. only the current date. But when I built a query on the remote cube, the report showed the complete data (the restriction did not work). We even tried an inversion routine in the transfer rules to pass the selections to the source system, but that does not work either. Have you come across this kind of situation, or can you suggest an alternative solution? We have to use a remote cube.
    Any suggestions asap would be highly appreciated and rewarded with full points.
    Regards,
    Raj

    I can think of two ways to do it.
    The simple way, with no ABAP coding, is a view:
    Create a view between a timestamp table and your big table.
    Put an entry with the current date into your timestamp table, then use it as a selection condition inside the view (a sketch of the daily helper report follows below).
    Unfortunately you cannot use SY fields inside database views (otherwise you could have used those directly as a selection condition in the view).
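    A sketch of the small helper report for this option; ZBW_DATEFILTER is a hypothetical single-row table (client plus a date field DATUM) that the database view joins to the big table.
    REPORT zbw_set_datefilter.
    DATA l_s_filter TYPE zbw_datefilter.   " hypothetical: MANDT + DATUM
    " Keep exactly one row holding today's date; schedule this report daily,
    " shortly after midnight, so the view always restricts to the current date.
    DELETE FROM zbw_datefilter.
    l_s_filter-datum = sy-datum.
    INSERT zbw_datefilter FROM l_s_filter.
    COMMIT WORK.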
    The better way is to use a function module and pass the selections from the query into the SQL statement.
    I prefer the latter, and I also pass the data through a generic structure: you manipulate the data inside the initial loop in the function module and don't need further loops downstream in the transfer rules
    (to try and keep the response time down). A sketch follows below.
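    For reference, here is a trimmed sketch of the function-module approach, modeled on SAP's template RSAX_BIW_GET_DATA_SIMPLE. The view name is borrowed from the first thread on this page; the interface is abbreviated, and paging via I_MAXSIZE / the NO_MORE_DATA exception is omitted for brevity.
    FUNCTION z_bw_get_items.
    " Abbreviated interface, as generated from the template:
    "   TABLES i_t_select TYPE srsc_s_if_simple-t_select
    "          e_t_data   STRUCTURE zbw_2lis11itm
      DATA: l_s_select TYPE srsc_s_select,
            l_r_audat  TYPE RANGE OF vbak-audat,
            l_s_audat  LIKE LINE OF l_r_audat.
      " Turn the selections handed over by the query into a range table ...
      LOOP AT i_t_select INTO l_s_select WHERE fieldnm = 'AUDAT'.
        l_s_audat-sign   = l_s_select-sign.
        l_s_audat-option = l_s_select-option.
        l_s_audat-low    = l_s_select-low.
        l_s_audat-high   = l_s_select-high.
        APPEND l_s_audat TO l_r_audat.
      ENDLOOP.
      " ... and push them into the database SELECT, so the restriction is
      " applied in the source system instead of after a full table scan.
      SELECT * FROM zbw_2lis11itm
               INTO TABLE e_t_data
               WHERE audat IN l_r_audat.
    ENDFUNCTION.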

  • Remote Cube Performance

    Hello BW Experts,
    1) Is it advisable to use remote cubes (not for end users, but for internal data-reconciliation purposes)?
    2) The tables we are looking at are VBAK, VBAP, VBRK, and VBRP in SAP, compared against the base sales and billing ODSes to produce a reconciliation report. Since the data volume in the source tables VBAP and VBRP is huge, we get memory dumps. Is there a better way of doing this? Any suggestion to solve the memory dumps?
    3) Are there any measures to improve remote cube performance?
    4) Other than the remote cube, is there any other way of reconciling the above data?
    Any suggestions appreciated.
    Thanks,
    BWer

    Hi BWer,
    Remote cube performance is really an issue.
    Instead, you can load the data into the PSA and compare it there.
    There is a "How to... Validate InfoCube Data By Comparing it with PSA Data" paper:
    http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/g-i/how to validate infocube data by comparing it with psa data
    You can also quickly build a query on those tables in R/3 using transaction SQ01 and the QuickViewer button for comparison.
    Best regards,
    Eugene

  • Error while assigning source system to SAP Remote cube

    I need to build a query on a remote cube whose source system is a flat file,
    but it won't let me assign the source system to the remote cube.
    The R/3 system icon is shown next to the remote cube's technical name,
    and I can't see my flat-file source system.
    Can somebody help me?

    Hello Kiran,
    I don't think it's possible to use remote cubes with PC_FILE or a flat file as the source system.
    You can use VirtualProviders in these different scenarios:
    VirtualProviders based on DTP => used for SAP source systems
    VirtualProviders based on BAPI => used for non-SAP or external systems
    VirtualProviders based on function modules => you use this VirtualProvider if you want to display data from non-BI data sources in BI without having to copy the dataset into the BI structures. The data can be local or remote. You can also use your own calculations to change the data before it is passed to the OLAP processor. This function is used primarily in the SAP Strategic Enterprise Management (SEM) application.
    InfoObjects used as VirtualProviders
    Your case is none of these.
    More details can be found under this link:
    http://help.sap.com/saphelp_nw70/helpdata/EN/84/497e4ec079584ca36b8edba0ea9495/frameset.htm

  • Source system - DSO-1 - DSO-2 - Cube (Performance improvement)

    Hello ppl,
    If I have a data flow like written below:
    Source system --> DSO-1 --> DSO-2 --> Cube
    Situation is:
    1. The data load from the source system to DSO-1 is full and cannot be changed to delta; that's the restriction.
    2. The amount of data flowing into DSO-1 is in the millions daily (because it is a full load and growing).
    3. Currently all DTPs between the source system and the cube are maintained as full.
    The requirement is to improve performance. Is it possible to make the DTPs delta after DSO-1? Since I am deleting all records from DSO-1 and its PSA daily, I am not sure whether it will work. If yes, how?
    Any suggestions…
    Regards,
    Akshay

    DTP delta from a write-optimized DSO is request-based; the DTP should have the delta option selected.
    DTP delta is request-based for a standard DSO as well.
    You will find the initialization option in the DTP on the Execution tab (select type Delta on the Extraction tab).
    Since you will be doing the init from a DSO, the system also gives you the option to select which DSO table to extract from:
    i.e. the active table or the change log.
    Select the active table if DSO-1 is a write-optimized DSO, and the change log if it is a standard DSO.
    Coming back to your case: using a write-optimized DSO will definitely enhance performance, as you don't have to activate the requests, but I recently had a very bad experience with write-optimized DSOs regarding delta,
    so I don't trust a write-optimized DSO for pulling deltas from it.
    If you want to use one, test it as much as possible first. I would still go with a standard DSO (the beauty of the change-log table).
    Regarding the DTP: once you load the data from the DSO you can set the init information even after the load, or load the first request from the DTP without data transfer for the init and then run the deltas; both will work fine. It is similar to the InfoPackage option "init without data transfer".
    Refer to this thread for a really nice discussion of this:
    BI7: "Delta Init. Extraction From..." under DTP Extraction tab?

  • Remote cube on Data Source 0FI_GL_4

    HI,
    I have a requirement to reconcile line items between R/3 and BW.
    I tried to create a remote cube with direct access, but couldn't, because DataSource 0FI_GL_4 does not support direct access.
    I can see that DataSource 0FI_GL_4 is served by function module BWFID_GET_FIGL_ITEM and that its base table is DTFIGL_4.
    My question: can I create a generic DataSource based on a copy of the existing function module BWFID_GET_FIGL_ITEM and change the direct-access option to 2 (supports preaggregation)?
    Do I need to change any table-level settings or parameters to get the required solution?
    Please suggest how to proceed with the development to reconcile the line items from R/3 to BW.
    Regards,
    Siva..
    Edited by: Siv Kishore on Apr 7, 2011 4:59 PM

    Hi Banerjee,
    Thanks for the update.
    As you said, I can't use DataSource 0FI_GL_1, because we need to reconcile many fields from R/3 to BW that are not available in this DataSource, and it would be complicated to enhance it with those fields.
    Can I create a view-based generic DataSource on the BSEK and BSEG tables? Will it work? If yes, are there any criteria I need to follow to get the correct line items?
    Please let me know whether there is an SAP-suggested solution, or whether you have worked on this kind of development.
    Regards,
    Siva Thottempudi..
    Edited by: Siv Kishore on Apr 12, 2011 4:06 PM
