ABAP Data Flows - Parallel Execution?

Hi Guys,
If I have a data flow that includes multiple ABAP data flows within it, let's say 3, I see that when the job is started only one ABAP flow runs at a time, even though there is no precedence enforced and they could all kick off together.
Is there a way to get these ABAP flows to all run in parallel in SAP?
Thanks,
Flip.

Hi Flip,
I've never actually tried this, but I see that the Performance Optimization guide specifies only that data flows and workflows can be processed in parallel. So I would suppose that if you placed each ABAP data flow into its own separate data flow and then encapsulated the 3 unlinked data flows into a workflow, the ABAP data flows should execute in parallel, since they are initiated by the data flows. Something like the sketch below.
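A rough sketch of that layout (object names are just placeholders):

Job
└─ WF_Parallel (workflow)
   ├─ DF_1 (contains ABAP data flow R3_DF_1)
   ├─ DF_2 (contains ABAP data flow R3_DF_2)
   └─ DF_3 (contains ABAP data flow R3_DF_3)

The three data flows sit unconnected inside WF_Parallel, so nothing enforces an order and the engine is free to start them, and with them their ABAP data flows, together.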
Clint.

Similar Messages

  • Abap Data flow error in BODS

    Hi All,
    I have started to run a simple ABAP data flow in BODS: the data flow has been designed, and when it is executed the below error is thrown.
    It seems to be some ABAP driver issue, not the design of the flow.
    So can anyone please suggest what the actual issue is here, and let me know if you need any further information.
    Thanks.
    Best Regards,
    Edu

    Hi Konakanchi,
    Please share the job execution log for more details, such as whether it is an RFC configuration issue or a BODS issue. Meanwhile, can you please check the RFC connection test through SAPGUI?
    Thanks,
    Daya

  • DS 4.2 get ECC CDHDR deltas in ABAP data flow using last run log table

    I have a DS 4.2 batch job where I'm trying to get ECC CDHDR deltas inside an ABAP data flow.  My SQL Server log table has an ECC CDHDR last_run_date_time (e.g. '6/6/2014 10:10:00') which I select at the start of the DS 4.2 batch job run and then update to the last run date/time at the end of the run.
    The problem is that CDHDR has the date (UDATE) and time (UTIME) in separate fields, and inside an ABAP data flow only a limited set of DS functions is available.  For example, outside of the ABAP data flow I could use the DS function concat_date_time on UDATE and UTIME, so that I could have a where clause of 'concat_date_time(UDATE, UTIME) > last_run_date_time and concat_date_time(UDATE, UTIME) <= current_run_date_time'.  However, inside the ABAP data flow the DS function concat_date_time is not available.  Is there some way to concatenate UDATE + UTIME inside an ABAP data flow?
    Any help is appreciated.
    Thanks,
    Brad

    Michael,
    I'm trying to concatenate date and time and here's my ABAP data flow where clause:
    CDHDR.OBJECTCLAS in ('DEBI', 'KRED', 'MATERIAL')
    and ((CDHDR.UDATE || ' ' || CDHDR.UTIME) > $CDHDR_Last_Run_Date_Time)
    and ((CDHDR.UDATE || ' ' || CDHDR.UTIME) <= $Run_Date_Time)
    Here are DS print statements showing my global variable values:
    $Run_Date_Time is 2014.06.09 14:14:35
    $CDHDR_Last_Run_Date_Time is 1900.01.01 00:00:01
    The issue is I just created a CDHDR record with a UDATE of '06/09/2014' and a UTIME of '10:48:27', and it's not being pulled in the ABAP data flow.  Here is selected content of the generated ABAP file (*.aba):
    PARAMETER $PARAM1 TYPE D.
    PARAMETER $PARAM2 TYPE D.
    concatenate CDHDR-UDATE ' ' into ALTMP1.
    concatenate ALTMP1 CDHDR-UTIME into ALTMP2.
    concatenate CDHDR-UDATE ' ' into ALTMP3.
    concatenate ALTMP3 CDHDR-UTIME into ALTMP4.
    IF ( ( ALTMP4 <= $PARAM2 )
    AND ( ALTMP2 > $PARAM1 ) ).
    So $PARAM1 corresponds to $CDHDR_Last_Run_Date_Time ('1900.01.01 00:00:01') and $PARAM2 corresponds to $Run_Date_Time ('2014.06.09 14:14:35').  But from my understanding ABAP data type D is for date only (YYYYMMDD) and doesn't include time, so is my time somehow being defaulted to '00:00:00' when it gets to DS?  I ask this as a CDHDR record I created on 6/6 wasn't pulled during my 6/6 testing but this 6/6 CDHDR record was pulled today.
    I can get last_run_date_time and current_run_date_time into separate date and time fields, but I'm not sure how to build the where clause using separate date and time fields.  Do you have any recommendations, or is there a better way for me to pull CDHDR deltas in an ABAP data flow than a last run log table?
    Thanks,
    Brad
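    One way to build that where clause from the separate date and time fields, sketched here with hypothetical global variables holding the split values, is to compare the dates first and break ties on the time, so every comparison stays on a single column:

    CDHDR.OBJECTCLAS in ('DEBI', 'KRED', 'MATERIAL')
    and ( (CDHDR.UDATE > $Last_Run_Date)
       or (CDHDR.UDATE = $Last_Run_Date and CDHDR.UTIME > $Last_Run_Time) )
    and ( (CDHDR.UDATE < $Run_Date)
       or (CDHDR.UDATE = $Run_Date and CDHDR.UTIME <= $Run_Time) )

    Here $Last_Run_Date/$Last_Run_Time and $Run_Date/$Run_Time are assumed to be the log-table timestamps split into date and time parts, so no concat_date_time is needed inside the ABAP data flow.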

  • Using ABAP DATA FLOW to pull data from APO tables

    I am trying to use an ABAP data flow to pull data from APO and receive error 150301. I can do a direct table pull with no error, but when I try to put the table in an ABAP data flow I get the issue. Any help would be great.

    Hi
    I know you "closed" this, however someone else might read it, so I'll add that when you use an ABAP dataflow, logic can be pushed down to ECC: table joins, filters, etc. (which can be seen in the generated ABAP).
    Michael

  • Creating abap data flow, open file error

    hello experts,
    I am trying to pull all the fields of the MARA table in BODS, so I am using an ABAP data flow. But after executing the job I got the error "can't open the .dat file".
    I am new to ABAP data flows, so I think I may have made a mistake in the configuration of the datastore.
    Can anyone guide me on how to create a datastore for an ABAP data flow?

    In your SAP Applications datastore, are you using "Shared Directory" or "FTP" as the "Data transfer method"?  Given the error, probably the former.  In that case, the account used by the Data Services job server must have access to wherever SAP is putting the .DAT files.
    When you run an ABAP dataflow, SAP runs the ABAP extraction code (of course) and then exports or saves the results to a .DAT file, which I believe is just a tab-delimited flat text file, in the folder "Working directory on SAP server." This is specified from the perspective of the SAP server, e.g., "E:\BODS\TX," where the E:\BODS\TX folder is local to the SAP application server. I believe this folder is specified as a directive to the ABAP code, telling SAP where to stick the .DAT files. The DS job server then picks them up from there, and you tell it how to get there via "Application path to the shared directory," which, in the above case, might be "\\SAPDEV1\BODS\TX" if you shared out the E:\BODS folder as "BODS" and the SAP server was SAPDEV1.
    Anyway: the DS job server needs to be able to read files at \\SAPDEV1\BODS\TX, and may not have any rights to do so, especially if it's just logging in as Local System.  That's likely your problem. In a Windows networking environment, I always have the DS job server log in using an AD account, which then needs to be granted privileges to, in our example's case, the \\SAPDEV1\BODS\TX folder.  It also comes in handy for getting to data sources, sometimes.
    Best wishes,
    Jeff Prenevost
    Data Services Practice Manager
    itelligence
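    To make the above concrete, the relevant datastore settings might look like this (the values are illustrative only):

    Data transfer method:                      Shared Directory
    Working directory on SAP server:           E:\BODS\TX         (as seen by the SAP application server)
    Application path to the shared directory:  \\SAPDEV1\BODS\TX  (UNC path the DS job server account must be able to read)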

  • ABAP Data flow

    Hi
    can we replicate an ABAP data flow and modify the copy for a history data upload?

    It is a copy, and whatever changes you make to it will not impact the other ABAP dataflows.

  • Data Service 4.2 upgrade issue - R/3 abap data flow error

    This error makes sense if you get it in the PROD environment, but any idea whether it can occur if we run against the ECC DEV environment?
    I don't think it makes sense to use the execute preloaded option against DEV.
    Steps performed for connecting to ECC through DS 4.2:
    1. Basis Imported the new functions into ECC which we got after raising an OSS with them.
    2. Gave the authorizations as per the manual.
    S_BTCH_JOB, S_DEVELOP, S_RFC, S_TABU_DIS, S_TCODE
    3. Ran a simple R/3 data flow (Shared Directory transfer method), which resulted in error RFC_ABAP_INSTALL_AND_RUN:RFC_ABAP_MESSAGE, changes to repository object are not permitted in the client.
    Do we need more permissions than those listed above to avoid this error?

    Hello,
    I ran the 'R3trans -x' command, but there was no problem; the connection to the database was working.
    The problem was the following:
    before starting the sdt service on the host, I had set the environment variables JAVA_HOME and LD_LIBRARY_PATH for sidadm. That's not necessary, and that was the problem. Without setting these variables it is working now.
    Thanks,
    Julia

  • How to monitor data flow 's execution in web

    Hi all,
    Please help us with a step-by-step setup to monitor a data flow's execution through the web.

    Hi,
    Do you have the Metadata Navigator installed in your environment?
    It works like the Operator, plus several more functions...

  • How to use one dynamic connection managers for multiple parallel data flow tasks

    hi there:
    I have 6 databases residing on the same server. What I want to do is call a stored procedure with an identical name on each database's dbo schema and transport the results to a centralized place. The key is to have those SPs run in parallel instead of in sequence, as each SP may take around 10 minutes to finish.
    The simplest way is to create 6 OLE DB connection managers and 6 DFT tasks. However, I do not want to maintain 6 OLE DB connection managers, as there is a chance there will be more connection managers over time.
    What I did so far is to create one OLE DB connection manager and use an expression to set its ConnectionString property so that it gets populated from variables at run time. It is fine when running all SPs in a Foreach Loop Container; however, it takes around 60 minutes to finish.
    When I try to run it in parallel (basically 6 DFTs but only one dynamic connection manager), the connection string gets confused and therefore all the DFT tasks fail.
    Does anyone here have some experience on this topic?
       Does anyone here have some experience on this topic?
    Thanks
     hui
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --

    Yes, basically, on the ConnectionString property of ONE OLE DB connection manager, you are using an expression to supply the value, and this expression points to a variable.
    In this case, you can update this variable from a table which contains many connection strings. That's fine if you want to execute stored procedures in sequential order; in parallel mode this causes issues, as the connection string gets overwritten.
    I am thinking about using a script task to exec the SPs.
    The whole idea is that I do not want to maintain a large number of connection managers.
    Hope it helps
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --
    So you are not able to run parallel executions using the same connection manager, even with a dynamic connection string; is that correct? Yes, a script task will be the way to go if you wish to execute them in parallel. You may connect to SQL Server and query the proper connection string with a SELECT/WHERE clause in each script, pass it to a script variable, then use that script variable to execute the proc. This requires only two things to change in each script: the WHERE condition to get the connection string, and the proc name (you may even get the proc names the same way you get the connection string); everything else stays the same. Let us know how that goes.
    Hopefully no two or more procs are doing insert/update/delete on the same tables.
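    A sketch of the per-script lookup described above (table and column names are assumptions):

    SELECT conn_string
    FROM   dbo.ConnectionStrings
    WHERE  database_name = 'DB3';  -- each script task filters for its own target DB

    Each of the 6 script tasks runs this with a different WHERE value, opens its own connection from the returned string, and executes the stored procedure, so no shared connection manager gets overwritten.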

  • Sub data flow (Optimized SQL) execution order ?

    I am looking for a solution in the Designer of Data Services XI 3.2.
    Is there a way to specify in a data flow
    (without using the 'Transaction control' options or an embedded data flow)
    the order in which sub data flows (Optimized SQL) are executed?
    Thank you in advance.
    Georg

    First, if you are using MDX to calculate the value of C - don’t.  MDX script logic can be extremely inefficient from a processing and memory utilization standpoint vs. SQL logic even if the syntax is shorter.
    Logic executes in the order you place the code in the script.  You have three commit blocks and they would execute in that order.  I notice you don't have a commit after the calculation for C.  You should always put a commit statement after each calculation section or you can get uncommitted data even though there is an implied commit after the last line of code executes.  Don't get in the habit of relying on this.
    You can see the logic logs from the temp folders on the file server as suggested above but they will mainly give you the SQL queries generated which can be helpful in debugging scoping issues but they can be hard to sift through.
    I recommend putting a commit statement after your calculation of C; that will probably resolve the issue.  I also strongly suggest you switch the calculation to SQL logic to avoid performance issues when these calculations run under high concurrency or on larger volumes of data than what you're probably testing with.

  • Data flows are getting started but not completing successfully while extracting/loading of the data

    Hello People,
    We are facing abnormal behavior with the dataflows in a Data Services job.
    Scenario:
    We are extracting the data from the CRM end in parallel. Please refer to the build:
    a. We have 5 main workflows, i.e.:
       => Main WF1 has 6 sub-WFs, each of which has 1 or 2 DFs in parallel.
       => Main WF2 has 21 DFs and 1 WFa (with a DF and a WFb); WFb has 1 DF in parallel.
       => Main WF3 has 1 DF in parallel.
       => Main WF4 has 3 DFs in parallel.
       => Main WF5 has 1 WF and a DF in sequence.
    b. Normally the job works perfectly fine but sometimes it gets stuck at a DF without any error logs.
    c. The job doesn't get stuck at a specific dataflow or on a specific day; many times it gets stuck at different DFs.
    d. Observations in the Monitor Log:
    Dataflow       State    RowCnt   LT      AT
    +DF1/ZABAPDF   PROCEED  234000   8.113   394.164
    /DF1/Query     PROCEED  234000   8.159   394.242
    -DF1/Query_2   PROCEED  234000   8.159   394.242
    (LT = Lapse Time, AT = Absolute Time)
    If you check the monitor log, the state of the dataflow DF1 remains PROCEED till the end; ideally it should complete.
    In successful jobs, the status for DF1 is STOP. This DF takes approx. 2 minutes to execute.
    The row count for the DF1 extraction is 234204, but it got stuck at 234000.
    We then terminate the job after some time, but to our surprise it executes successfully the next day.
    e. As per analysis of all the failed jobs, the same thing was observed across the different data flows that got stuck during execution. The logic in the data flows is perfectly fine.
    Observations in the Trace log:
    DATAFLOW: Process to execute data flow <DF1> is started.
    DATAFLOW: Data flow <DF1> is started.
    ABAP: ABAP flow <ZABAPDF> is started.
    ABAP: ABAP flow <ZABAPDF> is completed.
    Cache statistics determined that data flow <DF1> uses <0> caches with a total size of <0> bytes. This is less than (or equal to) the virtual memory <1609564160> bytes available for caches.
    Statistics is switching the cache type to IN MEMORY.
    DATAFLOW: Data flow <DF1> using IN MEMORY Cache.
    DATAFLOW: <DF1> is completed successfully.
    The highlighted text in the trace log does not appear for the unsuccessful job, but it does appear for the successful one.
    Note: The cache type is pageable cache, DS ver is 3.2.
    Please suggest.
    Regards,
    Santosh

    Hi Santosh,
    Just a wild guess:
    would you be able to replicate all the DFs/WFs, delete the original DFs/WFs, rename the replicated objects to the original DF/WF names (for convenience), and execute it?
    Sometimes the reference does not work.
    Hope this works.
    Regards,
    Shiva Sahu

  • Using asynchronous timer for data flow control

    Hi all,
    I am using a system sleep to control the data flow (some digital lines and analog output). The pseudo code is something like this:
    Sleep(150);
    // the following sections are executed in parallel
    #pragma omp parallel sections
    {
        #pragma omp section
        DAQmxWriteDigitalLines(...); // output TTL to one digital line
        #pragma omp section
        DAQmxWriteDigitalLines(...); // output TTL to another digital line
        #pragma omp section
        Sleep(2); // sleep 2 ms
    }
    // the following sections are executed in parallel
    #pragma omp parallel sections
    {
        #pragma omp section
        DAQmxWriteDigitalLines(...); // output TTL to one digital line
        #pragma omp section
        DAQmxWriteAnalogScalarF64(...); // analog output to one channel
        #pragma omp section
        Sleep(1); // delay 1 ms
    }
    // the following sections are executed in parallel
    #pragma omp parallel sections
    {
        #pragma omp section
        DAQmxWriteDigitalLines(...); // output TTL to one digital line
        #pragma omp section
        DAQmxWriteAnalogScalarF64(...); // analog output to one channel
        #pragma omp section
        DAQmxWriteAnalogScalarF64(...); // analog output to another channel
        #pragma omp section
        Sleep(11); // delay 11 ms
    }
    // ... other stuff
    I am running Windows XP and I know it is not possible to get real-time control, but I want timing that is as precise as possible. The above code is not perfect, but it works 95% of the time. I just read an article about using the asynchronous timer to control the time delay, and I tried that idea with the following code frame:
    int CVICALLBACK ATCallback(int reserved, int timerId, int event,
                               void *callbackData, int eventData1, int eventData2)
    {
        if (event == EVENT_TIMER_TICK)
        {
            int *nextdelay = (int *)callbackData;
            SuspendAsyncTimerCallbacks();
            if (timerId >= 0)
            {
                double time;
                if (*nextdelay == 0)      time = 2.0;
                else if (*nextdelay == 1) time = 1.0;
                else                      time = 12.0;
                SetAsyncTimerAttribute(timerId, ASYNC_ATTR_INTERVAL, time);
            }
            if (*nextdelay == 0)
            {
                #pragma omp parallel sections
                {
                    #pragma omp section
                    DAQmxWriteDigitalLines(...); // output TTL to one digital line
                    #pragma omp section
                    DAQmxWriteDigitalLines(...); // output TTL to another digital line
                }
                (*nextdelay)++; // note: *nextdelay++ would advance the pointer, not the value
            }
            else if (*nextdelay == 2)
            {
                #pragma omp parallel sections
                {
                    #pragma omp section
                    DAQmxWriteDigitalLines(...); // output TTL to one digital line
                    #pragma omp section
                    DAQmxWriteAnalogScalarF64(...); // analog output to one channel
                }
                (*nextdelay)++;
            }
            else if (*nextdelay == 3)
            {
                #pragma omp parallel sections
                {
                    #pragma omp section
                    DAQmxWriteDigitalLines(...); // output TTL to one digital line
                    #pragma omp section
                    DAQmxWriteAnalogScalarF64(...); // analog output to one channel
                    #pragma omp section
                    DAQmxWriteAnalogScalarF64(...); // analog output to another channel
                }
                (*nextdelay)++;
            }
            ResumeAsyncTimerCallbacks();
        }
        return 0;
    }

    void main(void)
    {
        int n = 0;
        int timeid;
        timeid = NewAsyncTimer(120.0 / 1000.0, 3, 1, ATCallback, &n);
    }
    But it doesn't work. There is no compilation or runtime error, but the timing is just not right. I wonder, do I have to suspend the timer in the callback function when I reset the delay for the next call? If I do so, I worry it will add too much delay (since I suspend and resume the timer inside the delay) and cause even worse timing. But if I don't suspend the timer before I reset the time, what happens if the code running in the callback function has not finished before the next callback arrives? It is quite confusing how to use an asynchronous timer in this case. Thanks.

    Yeah, unfortunately the 6711 doesn't have clocked digital I/O.  There are only two counters anyway so even if you could use them to generate your signals you wouldn't have enough (*maybe* something with the 4 AO channels and a counter depending on what your output signals need to look like?  The AO channels can output "digital" as well if you write 0V or 5V only).
    A PCI DAQ card which does support clocked digital I/O and has 2 analog outputs is the 6221 (or if you could use PCIe the 6321 is a more updated version with two extra counters and some additional functionality).
    If there isn't a way to implement clocked outputs after all, one thing you could do to make your code a little more efficient is to consolidate the writes.  You can put your digital lines into a single task and write them at once, and you can put your analog channels into a single task and write them at once as well (see the sketch after this reply).
    I'm not sure about the callback issue, you might find some more help in the CVI forum.  I don't think it's going to solve your underlying problem though as ultimately the execution timing of your software calls is at the mercy of your OS.
    Best Regards,
    John Passiak
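    A minimal sketch of that consolidation, assuming a hypothetical device name "Dev1" and two digital lines (error handling omitted; the names are assumptions, not from the original thread):

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle doTask = 0;
        uInt8 lines[2] = {1, 0};  // one sample for each of the two lines

        DAQmxCreateTask("", &doTask);
        // Both digital lines live in one task/channel, so a single
        // write call updates them together instead of two parallel calls.
        DAQmxCreateDOChan(doTask, "Dev1/port0/line0:1", "",
                          DAQmx_Val_ChanForAllLines);
        DAQmxWriteDigitalLines(doTask, 1, 1, 10.0,
                               DAQmx_Val_GroupByChannel, lines, NULL, NULL);
        DAQmxClearTask(doTask);
        return 0;
    }

    The same pattern applies on the analog side: put both AO channels in one task with DAQmxCreateAOVoltageChan and issue a single multi-channel write.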

  • R/3 data flow is timing out in Data Services

    I have created an R/3 data flow to pull some AP data in from SAP into Data Services.  This data flow outputs to a query object to select columns and then outputs to a table in the repository.  However the connection to SAP is not working correctly.  When I try to process the data flow it just idles for an hour until the SAP timeout throws an error.  Here is the error:
    R/3 CallReceive error <Function Z_AW_RFC_ABAP_INSTALL_AND_RUN: connection closed without message (CM_NO_DATA_RECEIVED)>
    I have tested authorizations by adding SAP_ALL to the service account I'm using and the problem persists.
    Also, the transports have all been loaded correctly.
    My thought is that it is related to the setting that controls the method of generating and executing the ABAP code for the data flow, but I can't find any good documentation that describes this, and my trial and error method so far has not produced results.
    Any help is greatly appreciated.
    Thanks,
    Matt

    You can't find any good documentation??? I am working my butt off just.......just kiddin'
    I'd suggest we divide the question into two parts:
    My dataflow takes a very long time; how can I prevent the timeout after an hour? Answer:
    Edit the datastore; there is a flag called "execute in background" to be enabled. With that, the ABAP is submitted as a background spool job and hence does not have the dialog-mode timeout. Another advantage is that you can watch it running by browsing the spool jobs from the SAP GUI.
    The other question seems to be: why does it take that long at all? Answer:
    Either the ABAP takes that long because of the data volume,
    or the ABAP is not performing well, e.g. a join via ABAP loops with the wrong table as the inner one.
    Another typical reason is using direct_download as the transfer method. This is fine for testing, but it takes a very long time to download data via the GUI_DOWNLOAD ABAP function, and the download time is part of the ABAP execution.
    So my first set of questions would be
    a) How complex is the dataflow, is it just source - query - data_transfer or are there joins, lookups etc?
    b) What is the volume of the table(s)?
    c) What is your transfer method?
    d) Have you had a look at the generated abap? (in the R/3 dataflow open the menu Validation -> Generate ABAP)
    btw, some docs: https://wiki.sdn.sap.com:443/wiki/display/BOBJ/ConnectingtoSAP

  • SAP R/3 data flow

    Hi all,
    I am working on the Accounts Payable Rapid Mart. Can I have one job that first creates all the .dat files in the SAP working directory, and another job that loads the .dat files from the application shared directory without having to run the R/3 data flow again?
    In other words:
    The 1st job gets the data from the SAP R/3 tables and puts it in the data transport (i.e., it writes the .dat file to the working directory of the SAP server).
    The 2nd job gets the .dat file from the application shared directory without having to do the first job again.
    Is the above method possible? Is there a way?
    I would really appreciate any comments or explanations on it.
    Thanks
    OJ

    Imagine the following case:
    You execute your regular job.
    It starts a first dataflow
    A first ABAP is started...runs for a while...then is finished.
    Now the system knows there is a datafile on the SAP server and wants to get it
    Because we configured the datastore to use a custom transfer program as download, the tool expects our bat file to download the file from the SAP server to the DI server
    Our custom transfer program shall do nothing else than wait for 15 minutes, because we know the file will be copied automatically without our intervention. So we wait, and after 15 minutes we return with "success".
    DI then assumes the file is copied and starts reading it from the local directory...
    The entire trick is to use the custom transfer batch script as a way to wait for the file to be transported automatically. In the real implementation the batch script will not just wait, but check whether the file is finally available... something along those lines.
    So one job execution only, no manual intervention.
    Got it? Will it work?
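    A skeleton of such a custom transfer script as a Windows batch file (a sketch only; the assumption that DS passes the target file name as the first argument should be checked against the custom transfer program documentation):

    @echo off
    rem Hypothetical custom transfer program: copy nothing ourselves,
    rem just poll until the externally transported file shows up.
    :wait
    if not exist "%~1" (
        rem wait roughly 10 seconds before checking again
        ping -n 11 127.0.0.1 > nul
        goto wait
    )
    exit /b 0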

  • Ssis 2012 parallel execution of ssis packages using catalog database hangs

    I have a simple SSIS package in 2012 where I am executing several data flow tasks in parallel using sequence containers. However, very often the SQL job just runs forever, although all the tables have been loaded, and I can't see what's going on; the job neither fails nor succeeds, it just goes on executing. Is there an issue with executing several data flow tasks in parallel? Why does the job continue to run forever, and how do I troubleshoot and fix the error? Please guide. Are there any issues using catalog DB execution; is that the reason for this behavior?

    There are multiple Execute SQL Tasks within sequence containers running to execute the SSIS packages deployed in SSISDB in synchronous mode, using the catalog database execution model.
    All of the SSIS packages that have the data flow tasks retrieve data from the source into 3 different tables.
    Some of them push data into the same table.
    Please let me know if you need more information.
    The job just runs forever. It looks like all the data flow tasks ran and it is still running, because I don't see an increase in the row count of the tables; I may be wrong. If I check the execution status, 1 task does not show as succeeded, and I don't know why.
    Thanks a lot for your help, it is much appreciated. Nik
