7.0 data flow to 3.x dataflow :(

I'm curious to see if there are others who are in my situation, and I want to know how they are coping with it.
I have been on 7.0 dataflow since 2005 as part of a ramp-up customer. Now I have moved to a new project, and this customer still uses 3.x dataflow. I feel so lost, as if I'm relearning the old stuff again - and as if I've downgraded myself. 3.x dataflow looks so prehistoric with export DataSources and InfoSources. I can't believe what I got myself into - it took me more than 15 minutes to figure out the dataflow for a cube.
I'm looking forward to hearing others' experiences.
Back to the 3.x days, folks.

I recently got trained on BI, and my trainer insists on first getting it right with the 3.x flow before progressing to the 7.x flow. He also tells me that the 7.x flow is still not completely "stabilized" and that SAP releases frequent corrections in that area.
So now I would like to know if there's any iota of truth in that.
pk
PS: My trainer, I gather, is well versed with the BI system and has recently been a Top Contributor here in that area.

Similar Messages

  • Data Flow Migration without Infosource

    Hello All...
I have installed a BI Content cube with its data flow. The data flow is in 3.5, i.e., with transfer rules and update rules. I want to migrate the data flow to transformations and DTPs. Is there any way I can convert the entire data flow to the 7.0 dataflow without the InfoSource, i.e., with only one transformation directly from the DataSource to my cube? Is that possible using any tool, or should I do it manually?
    Regards,
    BIJESH

    Hi,
Have you migrated the update rule?
Once you have migrated the update rule, you will find the option of copying the transformation
when you right-click on the transformation.
At that time you provide the source and target for the transformation.
You can find the steps for migration in the link mentioned below:
Business Content Activation Errors - Transformations
    I hope this helps.
    Thanks,
    Kartik

  • Error on Data Flow Task MSSQL 2012 Clustered "Description: The version of Lookup is not compatible with this version of the DataFlow. "

We have an SSIS package that runs on clustered MSSQL 2012 Enterprise nodes and is failing. We use a job to execute the package.
Environmental information:
Product - Microsoft SQL Server Enterprise: Core-based Licensing (64-bit)
Operating System - Microsoft Windows NT 6.1 (7601)
Platform - NT x64
Version - MSSQL Version 11.0.3349.0
The package is set to 32-bit. All permissions are verified. It runs in lower environments with the same MSSQL version. All environments are clustered. In the failing environment, all nodes are at the same service pack. I have not verified whether all
nodes in the failing environment have SSIS installed. Data access is installed. We have other, simpler packages that run in this environment, just not this one. Time to ask the community for help!
    Error:
    Source: Data Flow Task - Data Flow Task (SSIS.Pipeline)     Description: The version of Lookup is not compatible with this version of the DataFlow.  End Error  Error:  Code: 0xC0048020    
    Description: Component "Conditional Split, clsid {7F88F654-4E20-4D14-84F4-AF9C925D3087}" could not be created and returned error code 0x80070005 "Access is denied.". Make sure that the component is registered correctly.  End Error 
    Description: The component is missing, not registered, not upgradeable, or missing required interfaces. The contact information for this component is "Conditional Split;Microsoft Corporation; Microsoft SQL Server; (C) Microsoft Corporation; All Rights
    Reserved; http://www.microsoft.com/sql/support;0".  End Error 
(Shop-specific information left out. This is the first error in the errors returned by the job history for this package.)
    Thanks in advance.

    Hi DeveloperMax,
According to your description, the error occurs when you execute the package with an Agent job on clustered MSSQL 2012 Enterprise nodes.
As per my understanding, this issue can be caused by using SQL Server Agent to schedule a SQL Server Integration Services package in a 64-bit environment while the SSIS package references some 32-bit DLLs or drivers that are available
only in 32-bit versions, so the job fails.
To fix this issue, we should use the 32-bit version of the DTExec.exe utility, or run the package in 32-bit mode from the 64-bit SQL Server Agent: go to the Job Step dialog box, then
select "32 bit runtime" in the Advanced tab.
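If you want to verify outside of Agent that the package succeeds under the 32-bit runtime, a minimal Python sketch along these lines can help. The DTExec path below assumes a default SQL Server 2012 installation, and the package path is just a placeholder; adjust both to your environment.
```python
import subprocess

# 32-bit DTExec lives under "Program Files (x86)" on a default SQL Server 2012 install;
# adjust the path if your instance was installed elsewhere.
DTEXEC_32 = r"C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTExec.exe"
PACKAGE = r"D:\SSIS\Packages\MyLookupPackage.dtsx"  # placeholder package path

# /FILE runs a package stored on the file system; exit code 0 means success.
result = subprocess.run([DTEXEC_32, "/FILE", PACKAGE], capture_output=True, text=True)
print(result.stdout)
print("DTExec exit code:", result.returncode)
```
If this succeeds while the Agent job fails, the 32-bit/64-bit runtime setting of the job step is the likely culprit.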
    Besides, we should make sure that SQL Server Integration Services is installed on the failing environment.
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • R/3 data flow is timing out in Data Services

    I have created an R/3 data flow to pull some AP data in from SAP into Data Services.  This data flow outputs to a query object to select columns and then outputs to a table in the repository.  However the connection to SAP is not working correctly.  When I try to process the data flow it just idles for an hour until the SAP timeout throws an error.  Here is the error:
    R/3 CallReceive error <Function Z_AW_RFC_ABAP_INSTALL_AND_RUN: connection closed without message (CM_NO_DATA_RECEIVED)
    I have tested authorizations by adding SAP_ALL to the service account I'm using and the problem persists.
    Also, the transports have all been loaded correctly.
    My thought is that it is related to the setting that controls the method of generating and executing the ABAP code for the data flow, but I can't find any good documentation that describes this, and my trial and error method so far has not produced results.
    Any help is greatly appreciated.
    Thanks,
    Matt

    You can't find any good documentation??? I am working my butt off just.......just kiddin'
    I'd suggest we divide the question into two parts:
My dataflow takes a very long time; how can I prevent the timeout after an hour? Answer:
Edit the datastore; there is a flag called "execute in background" to be enabled. With that, the ABAP is submitted as a background spool job and hence does not have the dialog-mode timeout. Another advantage is that you can watch it running by browsing the spool jobs from the SAP GUI.
The other question seems to be: why does it take that long at all? Answer:
Either the ABAP takes that long because of the data volume,
or the ABAP is not performing well, e.g. a join via ABAP loops with the wrong table as the inner one.
Another typical reason is using direct_download as the transfer method. This is fine for testing, but it takes a very long time to download data via the GUI_DOWNLOAD ABAP function, and the download time is part of the ABAP execution.
    So my first set of questions would be
    a) How complex is the dataflow, is it just source - query - data_transfer or are there joins, lookups etc?
    b) What is the volume of the table(s)?
    c) What is your transfer method?
    d) Have you had a look at the generated abap? (in the R/3 dataflow open the menu Validation -> Generate ABAP)
    btw, some docs: https://wiki.sdn.sap.com:443/wiki/display/BOBJ/ConnectingtoSAP

Data flows are getting started but not completing successfully while extracting/loading data

    Hello People,
We are facing an abnormal behavior with the dataflows in the Data Services job.
Scenario:
We are extracting the data from the CRM side in parallel. Please refer to the build:
a. We have 5 main workflows, i.e.:
   => Main WF1 has 6 sub-WFs in it, and each sub-WF has 1-2 DFs running in parallel.
   => Main WF2 has 21 DFs and 1 WFa -> with a DF and a WFb. WFb has 1 DF in parallel.
   => Main WF3 has 1 DF in parallel.
   => Main WF4 has 3 DFs in parallel.
   => Main WF5 has 1 WF and a DF in sequence.
b. Usually the job works perfectly fine, but sometimes it gets stuck at the DFs without any error logs.
c. The job doesn't get stuck at a specific dataflow or on a specific day; many times it gets stuck at different DFs.
    d. Observations in the Monitor Log:
Dataflow          State      RowCnt    LT        AT
+DF1/ZABAPDF      PROCEED    234000    8.113     394.164
/DF1/Query        PROCEED    234000    8.159     394.242
-DF1/Query_2      PROCEED    234000    8.159     394.242
(LT = Lapse Time, AT = Absolute Time)
If you check the monitor log, the state of dataflow DF1 remains PROCEED till the end; ideally it should complete.
In successful jobs, the status for DF1 is STOP. This DF takes approx. 2 minutes to execute.
The row count for the DF1 extraction is 234204, but it got stuck at 234000.
We then terminate the job after some time, but to our surprise it executes successfully the next day.
e. Analysis of all the failed jobs shows the same behavior across the different data flows that got stuck during execution. The logic in the data flows is perfectly fine.
    Observations in the Trace log:
    DATAFLOW: Process to execute data flow <DF1> is started.
    DATAFLOW: Data flow <DF1> is started.
    ABAP: ABAP flow <ZABAPDF> is started.
    ABAP: ABAP flow <ZABAPDF> is completed.
    Cache statistics determined that data flow <DF1>
    uses <0>caches with a total size of <0> bytes. This is less than(or equal to) the virtual memory <1609564160> bytes available for caches.
    Statistics is switching the cache type to IN MEMORY.
    DATAFLOW: Data flow <DF1> using IN MEMORY Cache.
    DATAFLOW: <DF1> is completed successfully.
The highlighted text in the trace log does not appear for the unsuccessful job, but it does appear for the successful one.
Note: the cache type is pageable cache; the DS version is 3.2.
    Please suggest.
    Regards,
    Santosh

    Hi Santosh,
Just a wild guess:
Would you be able to replicate all the DFs/WFs, delete the original DFs/WFs, rename the replicated objects to the original DF/WF names (for your convenience), and execute it?
Sometimes the reference does not work.
Hope this works.
    Regards,
    Shiva Sahu

  • SAP R/3 data flow

    Hi all,
I am working on the Accounts Payable Rapid Mart. Can I have one job that first creates all the .dat files in the SAP working directory, and another job that reads the .dat files from the application shared directory without having to run the R/3 data flow again?
In other words:
The 1st job gets the data from the SAP R/3 tables and puts it in the data transport (i.e., it writes the .dat files to the working directory of the SAP server).
The 2nd job gets the .dat files from the application shared directory without having to run the first job again.
Is the above approach possible?
    I would really appreciate any comments or explanations on it.
    Thanks
    OJ

    Imagine the following case:
    You execute your regular job.
It starts a first dataflow.
A first ABAP program is started... runs for a while... then it is finished.
Now the system knows there is a data file on the SAP server and wants to get it.
Because we configured the datastore to use a custom transfer program as the download method, the tool expects our .bat file to download the file from the SAP server to the DI server.
Our custom transfer program does nothing other than wait for 15 minutes, because we know the file will be copied automatically without our intervention. So we wait, and after 15 minutes we return with "success".
DI then assumes the file is copied and starts reading it from the local directory...
The entire trick is to use the custom transfer batch script as a way to wait for the file to be transported automatically. In a real implementation the batch script would not just wait, but would check whether the file is actually available... something along those lines (see the sketch after this reply).
    So one job execution only, no manual intervention.
    Got it? Will it work?
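For illustration only, here is a minimal Python sketch of such a custom transfer program: it polls the expected location until the .dat file shows up and stops growing (or a timeout expires) and then exits with success. The argument handling is an assumption - check how your Data Services version actually invokes the custom transfer program and which arguments it passes.
```python
import os
import sys
import time

def wait_for_file(path, timeout_seconds=900, poll_seconds=30):
    """Poll until `path` exists and its size has stopped growing, or the timeout expires."""
    deadline = time.time() + timeout_seconds
    last_size = -1
    while time.time() < deadline:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size and size > 0:
                return True  # file present and stable -> assume the transfer finished
            last_size = size
        time.sleep(poll_seconds)
    return False

if __name__ == "__main__":
    # Assumption: the target .dat file name is passed as the first argument.
    target = sys.argv[1]
    sys.exit(0 if wait_for_file(target) else 1)  # exit code 0 = "success" back to DI
```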

  • Using ABAP DATA FLOW to pull data from APO tables

I am trying to use an ABAP data flow to pull data from APO and receive error 150301. I can do a direct table pull and receive no error, but when I try to put it in an ABAP data flow I get the issue. Any help would be great.

    Hi
    I know you "closed" this, however someone else might read it so I'll add that when you use an ABAP dataflow, logic can be pushed to ECC - table joins, filters, etc.  (Which can be seen in the generated ABAP).
    Michael

  • Are unique GUIDs required in the data flow task in SSIS?

    Hi,
I have previously used SQL Server 2008 R2 to develop packages. I was told not to copy and paste packages, or create templates with tasks on the data flow, because this leads to multiple packages with the same GUIDs; the consequence might be calamitous
problems at runtime when the engine can't distinguish between tasks in different packages.
My process was to create templates with control flow tasks only, manually create the dataflow tasks, and then use BIDS Helper from CodePlex to regenerate GUIDs (this regenerated the GUIDs at the control flow level only).
I am now using SQL 2012. I have seen that if I copy and paste a package, the GUIDs at the data flow level are also copied. So my questions are:
1) Does having the same ID in different dataflows cause problems, as I was led to believe?
2) If it does, has this been addressed in some way in SSIS 2012, and what is the best way of working around this, given that I have many similar packages to develop?
    Thanks,
    Dan

SSIS doesn't really lend itself well to creating templates.
SQL 2012 is much improved when it comes to copying and pasting tasks, dataflows, and transformations between package designs, other than the usual issues with missing connection managers.
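If you want to check an existing project for duplicated IDs, a rough sketch like the one below scans .dtsx files (they are plain XML) for repeated DTS:DTSID values. The folder path is a placeholder and the regex is an assumption about how the GUIDs appear in your package files, so adapt both before relying on it.
```python
import re
from collections import defaultdict
from pathlib import Path

# Assumption: packages live under this folder and the DTS:DTSID attribute carries each object's GUID.
PACKAGE_DIR = Path(r"C:\SSISProjects\MySolution")
GUID_RE = re.compile(r'DTS:DTSID="\{([0-9A-Fa-f-]+)\}"')

seen = defaultdict(set)  # GUID -> set of package files containing it
for dtsx in PACKAGE_DIR.rglob("*.dtsx"):
    for guid in GUID_RE.findall(dtsx.read_text(encoding="utf-8", errors="ignore")):
        seen[guid.lower()].add(dtsx.name)

# Report any GUID that shows up in more than one package file.
for guid, files in seen.items():
    if len(files) > 1:
        print(f"GUID {guid} appears in: {sorted(files)}")
```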

  • BI 7.0 Data Flow in 7.0

    Hi SDN Guru's,
I am working on BI 7.0. I have activated BI Content for the SD & MM modules, taking only the necessary objects in the Grouping (tab).
Actually I am activating BI Content with the BI 3.5 data flow; now I want to convert the data flow to BI 7.0 for the SD & MM modules.
How do I convert the data flow to BI 7.0? Please provide solutions ASAP.
    Thanks & REgards,
    Kumar.

    Hi,
Kindly follow the steps below to convert to the BI 7 dataflow.
If the dataflow is from DataSource to DSO and from DSO to cube:
--> First step – Select the DSO-to-cube update rule and right-click on it -> Additional Functions -> Create Transformation.
This will copy and map all the objects, including routines if they exist; do a syntax check once before activation.
--> Second step – Then select the transfer rule from DataSource to DSO and do the same as above.
--> Third step – Then migrate the DataSource; here, always select the 'With Export' option, which will enable you to revert back to the 3.x version if needed.
If many targets are updated from one DataSource, then use an InfoSource.
--> Create DTPs for all.
In addition, I suggest you use LSA architecture if it is a fresh implementation.
    Regards,
    Ashish

  • How to create process chain for this data flow

    Hi experts,
    My data flow is:
    ======================================
    2lis_11_vahdr->Transfer rules->2lis_11_vahdr->Update rules->DSO1->Transformation->DTP->Cube1.
    2lis_11_vaitm->Transfer rules->2lis_11_vaitm->Update rules->DSO1->Transformation->DTP->Cube1.
    ======================================
    2lis_12_vchdr->Transfer rules->2lis_12_vchdr->Update rules->DSO2->Transformation->DTP->Cube1.
    2lis_12_vcitm->Transfer rules->2lis_12_vcitm->Update rules->DSO2->Transformation->DTP->Cube1.
    ======================================
    2lis_13_vdhdr->Transfer rules->2lis_13_vdhdr->Update rules->DSO3->Transformation->DTP->Cube1.
    2lis_13_vditm->Transfer rules->2lis_13_vditm->Update rules->DSO3->Transformation->DTP->Cube1.
    ======================================
Here, for each DataSource, an InfoPackage brings data up to the DSO, and then a DTP fetches the data from the DSO.
For deltas I want to put this data flow into a process chain.
Can anyone please guide me on how to set this up in a process chain?
    Full points will be assigned.
    Regards,
    Bhadri m.

    Hello,
    Sure it is possible to maintain that dataflow.
    Start here:
    This is a very good document about process chains:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
    Let me know if you have any doubts.
    Regards,
    Jorge Diogo

  • Data Flow Question in newly Migrated Business Content Install

Hi, I just converted my Purchasing Business Content from 3.5 to 7.x. This content loads the InfoCube directly from the DataSource. After my dataflow migration I am still left with an InfoSource.
    Here is a picture of my Data Flow
    http://i55.tinypic.com/258b7zs.png
    I thought I would not have an InfoSource after all of this.  Is this correct?
    Thanks for any help with this...

    Hi, Kenneth,
I believe it's an absolutely correct result after migration.
I had the same thing with InfoSources when migrating 0SD_C03 and never had issues with that.
InfoSources can be used in the 7.x data flow, as it's sometimes good to have an additional transformation between source and target (it allows for more flexibility in data transformation).
By the way, your InfoSource was also migrated, as it's a 7.x InfoSource now. I believe it has a different object type compared to the old 3.x InfoSource. You can check the object type in SE11, table TADIR, selecting by your InfoSource name.

  • Only one Extractor source is allowed in a data flow ?

    Hi,
    We have a common scenario (I think this is common for SAP ECC in general) where we have 2 tables, *_ATTR (Attributes) and *_TEXT (descriptions).  We extract these using extractors.
Initially, when we tried to place the two in one dataflow, we got an error saying 'Only one Extractor source is allowed in a data flow'.
To work around this, we created 2 embedded dataflows and everything seems to work fine. But when I do a Validate All, I get the error:
[Data Flow:DF_ODS_MD_WBS_ELEMENT]  Only one Extractor source is allowed in a data flow. (BODI-1116145)
Should I be worried? My gut feeling is that this is a bug in the Validate All function, but I could be wrong. Everything seems to run fine, so is it safe to ignore this?
This currently affects about 5 dataflows, but we have about 40 more to implement like this, so I don't want to get it wrong.
    Thanks,
    Leigh.

Like it says, that is invalid XML. It looks like it is a bug
in the example doc.
Add a root node to the XML and you should be OK.
    Tracy

  • How to run just the data flow

    We are in the process of converting from one ETL tool to Data Services 3.0.  I have a workflow with several data flows underneath it and have to make a change to the data flow.  Is there a way to run just the dataflow without executing the whole job?  For example, I just want to run DF4 below.
    WF --> DF1 --> DF2 --> DF3 --> DF4 --> DF5
    Thanks,
    Dan

    Although this topic is solved, just a few clarifications:
When you create a new job and drag and drop the workflow/dataflow into it, you are not creating a second object (class); you are calling the object a second time (instantiating it). It is like a BASIC program: I created a sub-procedure and called it DF1. In my file system I can see that sub-procedure as a separate file in the folder object_library -> dataflow_procedures.
Then I have two main programs, both just calling that sub-procedure with "gosub DF1".
In the job you see the calls; when you drill into DF1 you see its definition and what objects it calls.
Using a conditional is okay but not advised in my opinion. You lose too much, the handling is complex, and development takes a few seconds longer. My two cents only.
    https://wiki.sdn.sap.com/wiki/display/BOBJ/Testing
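To make the "call, don't copy" idea concrete, here is a tiny Python analogy (all names made up): the dataflow is defined once, the regular job calls the whole sequence, and a separate test job calls just DF4 without duplicating its definition.
```python
# Each "dataflow" is defined exactly once, like the single object in the repository.
def df1(): print("running DF1")
def df2(): print("running DF2")
def df3(): print("running DF3")
def df4(): print("running DF4")
def df5(): print("running DF5")

def regular_job():
    # The production job calls every dataflow in sequence.
    for df in (df1, df2, df3, df4, df5):
        df()

def test_df4_job():
    # A second job simply calls the same DF4 definition -- no copy is created.
    df4()

if __name__ == "__main__":
    test_df4_job()
```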

  • Order preservation in data flow

    Hi Experts,
                    I would like to know whether the order of data flowing through a dataflow is preserved or not.
    consider the scenario as follows.
    I have a source table
I fetch it into a Query transform to set an error flag based on some column values (say, if column1 or column2 or column3 is null, then the error flag is 'E', else null).
Then I have a case statement to split the flow into two based on the error flag value (if the error flag is 'E' then Error, else Success).
On both flows I have one Table Comparison each before loading to the target tables.
Since I am using the sorted input option in the Table Comparison, I need the data to be sorted. Should I do it separately using one Query transform each before the Table Comparisons, or can I use an ORDER BY in the first Query transform itself? Also, if I use the ORDER BY in the first Query transform, will this operation be pushed down to the DB?
    Thanks & Regards
    Alex Oommen

    Hi Ramesh,
This is just a sample scenario I mentioned. The main intention is that, if the order is preserved, we can do the ORDER BY at retrieval time itself, so that this operation can be pushed down to the DB.
Here, in both legs (Error and Success), we are doing an ORDER BY. The question is whether it's possible to put the ORDER BY in the Q_Fetch query so that the retrieved data is already sorted.

  • Creating abap data flow, open file error

    hello experts,
I am trying to pull all the fields of the MARA table into BODS,
so I am using an ABAP data flow. But after executing the job I got the error "can't open the .dat file".
I am new to ABAP data flows, so I think maybe I made a mistake in the configuration of the datastore.
Can anyone guide me on how to create a datastore for an ABAP data flow?

In your SAP Applications datastore, are you using "Shared Directory" or "FTP" as the "Data transfer method"? Given the error, probably the former. In that case, the account used by the Data Services job server must have access to wherever SAP is putting the .DAT files. When you run an ABAP dataflow, SAP runs the ABAP extraction code (of course) and then exports or saves the results to a .DAT file, which I believe is just a tab-delimited flat text file, in the folder "Working directory on SAP server." This is specified from the perspective of the SAP server, e.g., "E:\BODS\TX," where the E:\BODS\TX folder is local to the SAP application server. I believe this folder is specified as a directive to the ABAP code, telling SAP where to stick it (the .DAT files). The DS job server then picks it up from there, and you tell it how to get there via "Application path to the shared directory," which, in the above case, might be
"\\SAPDEV1\BODS\TX" if you shared out the E:\BODS folder as "BODS" and the SAP server was SAPDEV1. Anyway: the DS job server needs to be able to read files at
\\SAPDEV1\BODS\TX, and it may not have any rights to do so, especially if it's just logging in as Local System. That's likely your problem. In a Windows networking environment, I always have the DS job server log in using an AD account, which then needs to be granted privileges to, in our example's case, the
\\SAPDEV1\BODS\TX folder. That also comes in handy for getting to data sources, sometimes.
    Best wishes,
    Jeff Prenevost
    Data Services Practice Manager
    itelligence
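As a quick way to confirm that the account running the Data Services job server can actually reach the share, a small Python sketch like the one below (run under that account) lists the folder and previews a .DAT file as tab-delimited text. The UNC path and file name are placeholders; the tab-delimited assumption follows the description above.
```python
import csv
from pathlib import Path

# Placeholders: replace with your actual shared path and generated .DAT file name.
SHARE = Path(r"\\SAPDEV1\BODS\TX")
DAT_FILE = SHARE / "MARA_extract.dat"

# 1) Can we even list the share? A PermissionError here points at the job server account.
print([p.name for p in SHARE.iterdir()][:10])

# 2) Preview the first few rows of the tab-delimited .DAT file.
with open(DAT_FILE, newline="", encoding="utf-8", errors="replace") as f:
    for i, row in enumerate(csv.reader(f, delimiter="\t")):
        print(row)
        if i >= 4:
            break
```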
