Planning area DataSource parallel processing

Hello Experts,
Has anyone worked on using a parallel processing profile with a DataSource?
We are extracting data from a planning area for backup and also extracting it to BW for reporting purposes. To speed up the process, we have the option of implementing parallel processing for the DataSource related to the planning area.
Does this really work without any discrepancies in the extracted data?
Please provide some pointers on the concept.
Thanks,
Rag

Hi Raj,
In our project we are using 8 parallel processes with a block size of 1000.
Regarding question 2: ideally, yes, both iterations should result in the same number of records, but the data packets will differ based on the block size. Please check whether you are referring to data packets or records.
Also, if you have the checkbox marked in only one of the iterations, the number of records may vary.
Regards,
JB

Similar Messages

  • Parallel Processing in creation of idocs

    Hi Gurus,
    I am working on the EDI inbound process with IDocs. I receive several IDocs from legacy systems, and as part of my process I am supposed to merge a couple of IDocs based on certain conditions and then create new IDocs, which in turn create the sales orders.
    The response time of this utility is very bad, so for performance optimization we are planning to apply the parallel processing concept to the creation of the IDocs.
    We have a function module which creates sales orders. I want to call this function module in parallel in different LUWs so that multiple sales orders can be created in parallel.
    Can anyone please help me with the logic to call a function module in parallel?
    Thanks in advance.
    Regards,
    Khushi

    Yes, I have already tuned the merging logic as far as possible; now I have to create the IDocs in parallel. Can you please help me with writing the code for creating IDocs in parallel?
    Thanks in advance.
    Regards,
    Khushi
    Edited by: Khushboo Tyagi on Jan 19, 2009 4:24 PM
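    A minimal sketch of the usual pattern, assuming the merged IDoc data is already split into packets and a custom RFC-enabled function module creates one sales order IDoc per packet. The names Z_CREATE_SO_IDOC, zso_packet, and the RZ12 server group parallel_generators are all assumptions, not standard objects; each STARTING NEW TASK call runs in its own LUW:

    REPORT z_parallel_idoc_sketch.

    DATA: gt_packets TYPE STANDARD TABLE OF zso_packet, " merged IDoc data
          gs_packet  TYPE zso_packet,
          gv_taskid  TYPE c LENGTH 8,
          gv_sent    TYPE i,
          gv_done    TYPE i.

    LOOP AT gt_packets INTO gs_packet.
      gv_sent   = gv_sent + 1.
      gv_taskid = gv_sent.
      CONDENSE gv_taskid.
      DO.
        " Each call gets its own dialog work process and its own LUW
        CALL FUNCTION 'Z_CREATE_SO_IDOC'
          STARTING NEW TASK gv_taskid
          DESTINATION IN GROUP 'parallel_generators'
          PERFORMING on_task_done ON END OF TASK
          EXPORTING
            is_packet             = gs_packet
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3.
        IF sy-subrc = 3.
          " No free work process in the group: pause, then retry this packet
          WAIT UP TO 1 SECONDS.
        ELSE.
          EXIT.
        ENDIF.
      ENDDO.
    ENDLOOP.

    " Keep the report alive until every task has called back
    WAIT UNTIL gv_done >= gv_sent.

    FORM on_task_done USING p_taskid TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_CREATE_SO_IDOC'
        EXCEPTIONS communication_failure = 1
                   system_failure        = 2.
      gv_done = gv_done + 1.
    ENDFORM.

    The WAIT statements matter: besides throttling, they give the kernel a chance to execute the ON END OF TASK callbacks.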

  • Parallel processing--share your suggestions

    Hi Experts,
    I have a program which is taking a very long time (at least 10 days) to run because of 4 million records. I am planning to use parallel processing, and I have gone through the help documents. They use the syntax
    "CALL FUNCTION func ... STARTING NEW TASK taskname"
    and some additions.
    What is this function module name? Do I need to create an RFC function module and call my report from this function module? My problem is how to combine my program with this syntax.
    Can anyone explain and give some suggestions on it?
    Note: I have already fine-tuned this program to the maximum extent.
    Thanks in advance.
    Murali

    Hello Murali,
    There are several considerations for starting the job in parallel:
    1. You should be able to logically split the job into units that the parallel processes can handle.
    2. When the jobs start in parallel, there is no guaranteed sequence in which they will complete, so they should be independent of each other.
    3. The FM called in CALL FUNCTION func ... STARTING NEW TASK must be RFC-enabled.
    4. To process the jobs in parallel, your system should have at least 3 dialog work processes, and at least one dialog process must be free when you start parallel processing.
    I am assuming you don't have the FM yet, which is why you are asking what the function module name is. I would suggest modifying your code so that you can extract a function module which is then called via CALL FUNCTION func ... STARTING NEW TASK.
    Abhijit
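    A minimal sketch of what Abhijit suggests, assuming the heavy per-chunk work of the report is extracted into an RFC-enabled function module; Z_PROCESS_CHUNK, ty_chunk, and ty_result_tab are assumed names. The results come back into the report through a callback:

    DATA: gt_chunks  TYPE STANDARD TABLE OF ty_chunk, " 4 million records, pre-split
          gs_chunk   TYPE ty_chunk,
          gt_results TYPE ty_result_tab,
          gv_task    TYPE c LENGTH 8,
          gv_sent    TYPE i,
          gv_recv    TYPE i.

    LOOP AT gt_chunks INTO gs_chunk.
      gv_sent = gv_sent + 1.
      gv_task = gv_sent.
      CONDENSE gv_task.
      CALL FUNCTION 'Z_PROCESS_CHUNK'      " must be flagged 'Remote-Enabled'
        STARTING NEW TASK gv_task
        PERFORMING chunk_done ON END OF TASK
        TABLES
          it_data = gs_chunk-data.
    ENDLOOP.

    WAIT UNTIL gv_recv >= gv_sent.         " block until all tasks return

    FORM chunk_done USING p_task TYPE clike.
      DATA lt_part TYPE ty_result_tab.
      " Pull this task's output back into the report's session
      RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_CHUNK'
        TABLES et_result = lt_part.
      APPEND LINES OF lt_part TO gt_results.
      gv_recv = gv_recv + 1.
    ENDFORM.

    When dispatching via DESTINATION IN GROUP, also handle the RESOURCE_FAILURE exception as in the IDoc sketch above, so the report waits instead of failing when all processes of the group are busy.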

  • Parallel processing for increasing the performance

    I am looking for the various ways of doing parallel processing in Oracle, especially using hints.
    Please let me know if there is any online documentation for understanding the concept.

    First of all: as a rule of thumb, don't use hints. Hints make programs too inflexible. A hint may be good today but might make things worse in the future.
    There are lots of documents available concerning parallel processing:
    Just go to http://www.oracle.com/pls/db102/homepage?remark=tahiti and search for parallel (processing)
    In my experience with 10g, enabling parallel processing might slow down processing dramatically for regular tables. The reason is the large number of waits in the coordination of the parallel processes.
    If, however, you are using parallel processing on partitioned tables, it works very well. In this case, take care to choose the partitioning criterion properly so that the processing can actually be distributed.
    If, for example, your queries / DMLs work on data corresponding to a certain time range, don't use the date field as the partitioning criterion, since in that case parallel processing might work on just a single partition, which again would result in massive waits for process coordination.
    Choose another criterion that distributes the data to be accessed across at least <number of CPUs - 1> partitions (one CPU is needed for the coordination process). Additionally, consider using parallel processing only where large tables are involved. Compare this situation with writing a book: if several people are to write a (technical) book of just 10 pages, it wouldn't make any sense at all in terms of time reduction. If, however, the book is planned to have 10 chapters, each chapter could be written by a different author, reducing the overall time to about 1/10 of what a single author writing all chapters would need.
    To enable parallel processing for a table, use the following statement:
    alter table <table name> parallel [<integer>];
    If you don't use the <integer> argument, the DB will choose the degree of parallelism; otherwise it is controlled by your <integer> value. Remember that you always need a coordinator process, so don't choose <integer> larger than <number of CPUs minus 1>.
    You can check the degree of parallelism in the DEGREE column of user_/all_/dba_tables.
    For timing tests, you can also force parallel DML/DDL/query for your current session:
    ALTER SESSION FORCE PARALLEL DML/DDL/QUERY [PARALLEL <degree>];
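    For example, a worked sequence under the assumptions above (the table name sales_fact is hypothetical; the host is assumed to have 8 CPUs, leaving 7 workers plus the coordinator):

    alter table sales_fact parallel 7;

    -- verify the setting
    select table_name, degree from user_tables where table_name = 'SALES_FACT';

    -- force parallel query in the current session for a timing test
    alter session force parallel query parallel 7;

    -- revert when done
    alter session disable parallel query;
    alter table sales_fact noparallel;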

  • Datasource on APO Planning Area - Transportation Error

    Hi All,
    I have created a DataSource on an APO planning area. The DataSource works fine (checked in RSA3 and also on the BW side). When transporting the DataSource from APO Dev to APO QA, I get the following error and the transport fails. Please suggest.
    Thanks
    Christopher
       Execution of programs after import (XPRA)
       Transport request   : AD1K909333
       System              : AQ3
       tp path             : tp
       Version and release: 372.04.10 700
       Post-import methods for change/transport request: AD1K909333
          on the application server: hllsap112
       Post-import method RSA2_DSOURCE_AFTER_IMPORT started for OSOA L, date and time: 20080725125524
       Execution of "applications after import" method for DataSource '9ADS_PP_APO'
       Import paramter of AI method: 'I_DSOURCE:' = '9ADS_PP_APO'
       Import paramter of AI method: 'I_OBJVERS:' = 'A'
       Import paramter of AI method: 'I_CLNT:' = ' '
       Import paramter of AI method: 'LV_CLNT:' = '100'
       DataSource '9ADS_PP_APO': No valid entry in table /SAPAPO/TSAREAEX
       Planning area for DataSource '9ADS_PP_APO' does not exist in target system
       Extract structure /1APO/EXT_STRU100002737 is not active
       The extract structure /1APO/EXT_STRU100002737 of the DataSource 9ADS_PP_APO is invalid
       Errors occurred during post-handling RSA2_DSOURCE_AFTER_IMPORT for OSOA L
       RSA2_DSOURCE_AFTER_IMPORT belongs to package RSUM
       The errors affect the following components:
          BC-BW (BW Service API)
       Post-import method RSA2_DSOURCE_AFTER_IMPORT completed for OSOA L, date and time: 20080725125532
       Post-import methods of change/transport request AD1K909333 completed
            Start of subsequent processing ... 20080725125524
            End of subsequent processing... 20080725125532
       Execute reports for change/transport request: AD1K909333
       Reports for change/transport request AD1K909333 have been executed
            Start of................ 20080725125532
            End of.................. 20080725125532
       Execution of programs after import (XPRA)
       End date and time : 20080725125532   Ended with return code:  ===> 8 <===

    Christopher,
    There seems to be no extract structure available for this DataSource in the quality system, and this is what is causing the problem there. The extract structure created in your scenario is in a temporary package and is therefore not available for transport. You need to have the DataSource generated in quality first, and then transport the active version so that it carries the same changes as in development.
    Regards
    Vijay

  • Use of parallel processing profiles with SNP background planning

    I am using APO V5.1.
    In SNP background planning jobs I am noticing different planning results depending on whether I use a parallel processing profile or not.
    For example, if I use a profile with 4 parallel processes and run a network heuristic to process 5 location products, I get an incomplete planning result.
    Is this expected behaviour? What are the 'good practices' for using these profiles?
    Any advice appreciated...

    Hello,
    I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
    For example, in the case of external procurement, it must first plan the distribution center and then the supplying plant; in the case of in-house production, it must first plan the final product and then the components.
    If you use parallel processing, the data set, which is sorted by low-level code, is divided into several blocks that are processed at the same time. This can mess up the planning sequence: for example, before the final product is planned in one block, the component is already planned in another block. When the final product is then planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
    If there are many location products, dividing the data set manually may be a good practice: put related location products in one job, and set up several background jobs to plan the different data sets.
    Best Regards,
    Ada

  • APO- BI Datasource from Planning Area

    Hi All,
    I need help with an APO-BI DataSource generated from a planning area.
    In the Dev environment we had two clients:
    DCLNT020 (holds the APO part) and DCLNT010 (holds the BI workbench).
    So a DataSource was generated from the planning area in DCLNT020 --> it was replicated in DCLNT010 --> data from the planning area was extracted to a BI cube using it.
    Now we have transported this DataSource to the Test environment, which has only one client (TCLNT010). I maintained the source-to-target mapping there such that DCLNT020 -- TCLNT010 and DCLNT010 -- TCLNT010.
    However the Transport fails and the error message is:
    Cannot replicate DataSource
    Errors occurred during post-handling RS_AFTER_IMPORT for ISFS L
    If I go to the Test system and try to generate the transported DataSource directly from the planning area again, it says this DataSource already exists. However, I cannot see this DataSource in the system even after replicating and refreshing multiple times.
    Please provide your inputs as to what might be wrong and what I need to do to solve this.
    TIA
    Amrita

    Hi Amrita,
    Based on the above post, it seems you maintain two clients in Dev (one for creation and another for testing), whereas the test environment has only one client; maintaining the DataSource in a single client should not cause any impact by itself.
    Based on the error:
    > Cannot replicate DataSource
    > Errors occurred during post-handling RS_AFTER_IMPORT for ISFS L
    there could be two reasons:
    1) You need to replicate the DataSource once you have imported it into the test environment, and then run the program RSDS_DATASOURCE_ACTIVATE_ALL, giving the source system and DataSource name, if it is BI 7.0. If it is 3.x, you have to execute the program RS_TRANSTRU_ACTIVATE_ALL, specifying the transfer structure name.
    2) RS_AFTER_IMPORT errors are in some cases caused by an improper transport of the update rules. The solution would be to recollect the transports: release the DataSource transport first, execute the activities from (1), and then transport the rest.
    Hope this is a little clearer!
    Thanks
    K M R
    Even if you have nothing, you can get anything. But your attitude & approach should be positive!
    Edited by: K M R on Feb 6, 2009 12:03 PM
    Edited by: K M R on Feb 6, 2009 12:18 PM

  • SAP job not using all dialog processes that are available for parallel processing

    Hi Experts,
    The customer is running a job which is not using all the dialog processes that are available for parallel processing. It appears to use all of the parallel processes (60) for the first 4-5 minutes of the job and then drops to about 3-5 processes for the remainder of the job.
    How do I analyze the job to find out the issue from a Basis perspective?
    Thanks,
    Zahra

    Hi Daniel,
    Thanks for replying!
    I don't believe it's a standard job.
    I was thinking of starting a trace using ST05 before the job runs. What do you think?
    Thanks,
    Zahra

  • Explain Plan - Parallel Processing Degree of 2 and CPU_Cost

    When I use a hint to invoke parallel processing with a degree of 2, the I/O cost seems to be consistently divided by 1.8, but the CPU cost adjustment is inconsistent (between 2.17 and 2.62).
    Any ideas on why the CPU cost ratio varies from table to table?
    Is there a formula to adjust the CPU_COST?
    Thanks,
    Summary:
    The I/O cost reduction is consistent (divide by 1.8):
    Table 1: 763/424 = 1.8
    Table 2: 18774/10430 = 1.8
    Table 3 (not shown): 5/3 = 1.7
    But the CPU cost reduction varies (between 2.17 and 2.62):
    Table 1: 275812018/122353500 = 2.25
    Table 2: 7924072407/3640755000 = 2.17
    Table 3 (not shown): 791890/301467 = 2.62
    Example:
    Oracle Database 10.2.0.4.0
    Table 1:
    1.) Full table scan on Table 1 without parallel processing:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_1,1) */ * FROM table_1;
    SQL> select cost, io_cost, cpu_cost from plan_table;
    IO_COST CPU_COST
    763 275812018
    2.) Process Table 1 in parallel with a degree of 2:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_1,2) */ * FROM table_1;
    IO_COST CPU_COST
    424 122353500
    Table 2:
    3.) Full table scan on Table 2 without parallel processing:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_2,1) */ * FROM table_2;
    IO_COST CPU_COST
    18774 7924072407
    4.) Process Table 2 in parallel with a degree of 2:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_2,2) */ * FROM table_2;
    IO_COST CPU_COST
    10430 3640755000

    The COST value is there for the benefit of the CBO, not for you.
    What should be more important to you is the elapsed run time of the SQL.
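    For example, a quick elapsed-time comparison in SQL*Plus, measuring actual run time instead of comparing plan costs (table_1 as in the plans above):

    SET TIMING ON

    -- serial baseline
    SELECT /*+ NO_PARALLEL(t) */ COUNT(*) FROM table_1 t;

    -- parallel with degree 2
    SELECT /*+ PARALLEL(t,2) */ COUNT(*) FROM table_1 t;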

  • How to change the package of planning area and datasource to be transportable

    Hi everybody,
    I have created a DataSource for the xxx planning area and saved it in $TMP, which is a local package and cannot be transported. I would like to transport the planning area along with the DataSource. But before transporting the planning area with the DataSource, I have to change the package of the DataSource. I am not able to understand where I should change the package of the planning area and the DataSource.
    Can anybody help me out with this problem?
    Points will be awarded.
    Shashi

    Hi Alexander,
    Thanks for your reply, but before transporting the DataSource I have to transport the planning area, because it is the source. So how can I change the package of the planning area, which is in $TMP (a local object)?
    Regards,
    Shashik

  • Reassigning Datasource to a new planning area

    Hi,
    Currently we have a DataSource in a planning area. However, this planning area was not as per the naming conventions, and hence a new planning area had to be created. Now we need to reassign the DataSource to this new planning area. Is there any way to achieve this without deleting the DataSource and recreating a new one with the same name?
    Thanks & Regards
    Dharmendra

    There may be possible approaches through backend database table updates; however, I would not recommend them. You would need to regenerate the DataSources. Otherwise there will surely be issues while migrating or even while making backups.

  • Regarding datasources on 1 planning area

    Hi all,
    Can I create more than one DataSource for one planning area?
    Please suggest.
    Shashi.

    Thanks a lot for your answer, but we already have one DataSource on this planning area. If I create one more DataSource on that planning area, will it affect the existing DataSource? We have appended 2 new fields to the existing DataSource which are not present in that planning area. So I would like to know whether creating a new DataSource for that planning area affects the existing DataSource.
    If you can answer this, it would be very helpful to us.
    Thanks
    Regards,
    Shashi.

  • Are WFP-filters processed in parallel?

    Hi,
    As described in the Filter Arbitration section of the docs, filters in different sublayers are processed independently, each sublayer providing a (potentially different) decision about what to do with the packet; all these decisions are then compared to come up with a final decision. Furthermore, sublayers are grouped into layers.
    This architecture clearly allows for a multi-core implementation, where different sublayers in the same layer could be processed in parallel (to me it seems there cannot be parallel processing between different layers, as they mostly represent different stages of the networking stack).
    So the question is simple: Does the networking stack take advantage of this, processing the same packet in different sublayers in a parallel fashion?
    Thanks, Karoly

    Hi,
    In the case of ParForEach, in the workflow log (the view with technical details), the node structure of my block looks like this:
    -->Block1
       --> Branch 1
           --> Block1
               --> <my synchronous send step>
       --> Branch 2
           --> Block1
               --> <my synchronous send step>
       --> Branch 3
           --> Block1
               --> <my synchronous send step>
       --> Branch 4
           --> Block1
               --> <my synchronous send step>
       --> Branch 5
           --> Block1
               --> <my synchronous send step>
       --> Branch 6
           --> Block1
               --> <my synchronous send step>
    The nodes where a work item is created are the inner Block1 nodes (printed in bold in the original log), so six different work items are created in my example.
    All these work items have the same creation timestamp. But the nodes below them (the send steps) have different (ascending) timestamps, which means that these nodes are processed sequentially.
    In case I change the mode to ForEach, the workflow log looks like this:
    -->Block1
       --> Block1
           --> Loop 1
               --> my synchronous send step
           --> Loop 2
               --> my synchronous send step
           --> Loop 3
               --> my synchronous send step
           --> Loop 4
               --> my synchronous send step
           --> Loop 5
               --> my synchronous send step
    Barbara

  • Generate Export datasource for planning area

    Hi,
    We are using SCM 4.0 for running Demand Planning.
    I have one strange problem: to generate an export DataSource for a planning area, there is no option when
    right-clicking on the planning area (context menu). Is this option available somewhere else? Please give me a clue on how to generate an export DataSource for a planning area in SCM 4.0, or do I need to do some configuration?
    Thanks in advance
    Ramakrishna

    Hi,
    You may try the following:
    1) From the menu: Demand Planning->Environment->Current Settings->Administration of Demand Planning
    2) From the list of planning areas, double-click your planning area to display the planning area setup
    3) Then go to the menu Extras->Generate Datasource
    Hope this helps.

  • Parallel Processing: Error message 00-250: "No CUA area available"

    Dear colleagues,
    I have implemented parallel processing with asynchronous RFC for a large data analysis in CO-PC. I think I was able to implement everything properly as described in the SAP help: http://help.sap.com/saphelp_nw04/helpdata/en/22/0425c6488911d189490000e829fbbd/content.htm
    But now I face one problem: my jobs are often cancelled with error message 250 of class 00, "No CUA area available".
    I don't have a clue what this error message means, and I couldn't find any help in SDN, OSS, or on the internet. Does anybody have an idea how to handle this problem?
    Thanks very much for your help!
    Marius

    In your program, when you do the CALL FUNCTION - STARTING NEW TASK, did you check how many work processes are available?
    Suppose you have 40K records in your internal table and you pass 4K records per step; then 10 work processes are required. But if you have defined only 9 in your group settings, the last one will fail. So before any 'submit job in new task' or CALL FUNCTION - STARTING NEW TASK statement, you have to check whether any work processes are available: if one is available, submit; otherwise wait. See the sketch below.
    Thanks
    Subhankar
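    A sketch of the check Subhankar describes, using the standard function module SPBT_INITIALIZE to query the capacity of an RFC server group before dispatching. The group name 'parallel_generators' and the FM Z_PROCESS_CHUNK are assumed placeholders:

    DATA: lv_max  TYPE i,
          lv_free TYPE i.

    * Initialize the parallel RFC environment once and read its capacity
    CALL FUNCTION 'SPBT_INITIALIZE'
      EXPORTING
        group_name                     = 'parallel_generators'
      IMPORTING
        max_pbt_wps                    = lv_max    " total WPs in the group
        free_pbt_wps                   = lv_free   " currently free WPs
      EXCEPTIONS
        invalid_group_name             = 1
        internal_error                 = 2
        pbt_env_already_initialized    = 3
        currently_no_resources_avail   = 4
        no_pbt_resources_found         = 5
        cant_init_different_pbt_groups = 6
        OTHERS                         = 7.

    IF sy-subrc <> 0.
      MESSAGE 'No parallel resources available, running serially' TYPE 'I'.
      RETURN.
    ENDIF.

    * Dispatch work only while free processes remain; when a dispatch fails
    * with RESOURCE_FAILURE, wait briefly so running tasks can finish, retry.
    CALL FUNCTION 'Z_PROCESS_CHUNK'
      STARTING NEW TASK 'T0001'
      DESTINATION IN GROUP 'parallel_generators'
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3.
    IF sy-subrc = 3.
      WAIT UP TO 1 SECONDS.   " let a task return, then retry this dispatch
    ENDIF.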
