Parallel processing in Application Engine

Could anyone explain what parallel processing is in Application Engine, where temporary tables are used? Please give an example.

Parallel processing is used when considerable amounts of data must be updated or processed within a limited amount of time, or batch window. In most cases, parallel processing is more efficient in environments containing partitioned data.
To use parallel processing, partition the data between multiple concurrent runs of a program, each with its own dedicated version of a temporary table (for example, PS_MYAPPLTMP). If you have a payroll batch process, you could divide the employee data by last name. For example, employees with last names beginning with A through M get inserted into PS_MYAPPLTMP1; employees with last names beginning with N-Z get inserted into PS_MYAPPLTMP2.
To use two instances of the temporary table, you would define your program (say, MYAPPL) to access one of two dedicated temporary tables. One execution would process A through M and the other N through Z.
The Application Engine program invokes logic to pick one of the available instances. After each program instance gets matched with an available temporary table instance, the %Table meta-SQL construct uses the corresponding temporary table instance. Run control parameters passed to each instance of the MYAPPL program enable it to identify which input rows belong to it, and each program instance inserts the rows from the source table into its assigned temporary table instance using %Table. The following diagram illustrates this process:
Multiple program instances running against multiple temporary table instances
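To make the %Table mechanics concrete, here is a minimal sketch (not taken from PeopleBooks) of the kind of SQL action such a step might contain. It assumes a temporary table record named MYAPPLTMP, a source table PS_SOURCE_TBL, and state record fields FROM_NAME and TO_NAME that carry the run control partition for this instance; all of these names are placeholders:
INSERT INTO %Table(MYAPPLTMP) (EMPLID, NAME, ANNUAL_RT)
SELECT EMPLID, NAME, ANNUAL_RT
  FROM PS_SOURCE_TBL
 WHERE NAME BETWEEN %Bind(FROM_NAME) AND %Bind(TO_NAME)
At runtime, %Table(MYAPPLTMP) resolves to the instance locked by this run (PS_MYAPPLTMP1, PS_MYAPPLTMP2, and so on), so concurrent runs never touch each other's data.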
There is no simple switch or check box that enables you to turn parallel processing on and off. To implement parallel processing, you must complete the following set of tasks. With each task, you must consider details regarding your specific implementation.
Define and save temporary table records in PeopleSoft Application Designer.
You don't need to run the SQL Build process at this point.
In PeopleSoft Application Engine, assign temporary tables to Application Engine programs, and set the instance counts dedicated for each program.
Employ the %Table meta-SQL construct so that PeopleSoft Application Engine can resolve table references to the assigned temporary table instance dynamically at runtime.
Set the number of total and online temporary table instances on the PeopleTools Options page.
Build temporary table records in PeopleSoft Application Designer by running the SQL Build process.

Similar Messages

  • BI 7.0 parallel processing of queries in a web application

    Hi,
    I'm currently having problems with a web application / web template with 10 data providers (different queries). When the web application is executed, the 10 queries are executed sequentially. Since each query takes about 30 seconds, the complete execution time exceeds 300 seconds, which is not satisfactory.
    Is there any way to enable parallel processing?
    Thanx in advance,
    Patrick

    Hello Patrick
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/41c97a30-0901-0010-61a5-d7abc01410ee
    /thread/351419 [original link is broken]
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/media/uuid/ff5186ad-0701-0010-1aa1-e11f4f3f2f68
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2b79ba90-0201-0010-1b9a-fa13a8f38127
    Thanks
    Chandran

  • Application Engine inbound Process

    Hi all,
    I am new to PeopleSoft.
    Does anyone know how to write an Application Engine inbound process?
    Please let me know.
    Thanks in advance.
    Regards
    RS

    Hey, an inbound process through Application Engine is simple:
    Create a new file layout in Application Designer.
    Drag and drop your record onto the file layout definition page.
    Go to the Preview panel, select the segment (your record) name, and browse to the flat file.
    Go to Properties -- Use -- File Layout Format and select the appropriate format (CSV works for Excel-style comma-separated files), then click OK.
    Drag your file layout into an Application Engine PeopleCode action; the code is generated automatically. Run that, or else use code along these lines:
    Define a file, an array of string, a string, and a record; assign them using GetFile, CreateArrayRept, and CreateRecord; check that the file is open using IsOpen; read each line of the flat file using ReadLine; split the line and assign the pieces to the record field values:
    Local File &myfile;
    Local array of string &myarray;
    Local string &text;
    Local Record &rec;
    /* Open the flat file for reading */
    &myfile = GetFile("C:\Documents and Settings\Administrator\Desktop\office.txt", "r", %FilePath_Absolute);
    &myarray = CreateArrayRept("", 0);
    &rec = CreateRecord(Record.IN1_TBL);
    If &myfile.IsOpen Then
       While &myfile.ReadLine(&text);
          /* Split the comma-separated line into its fields */
          &myarray = Split(&text, ",");
          &rec.IN1_NAME.Value = &myarray[1];
          &rec.IN1_LOCATION.Value = &myarray[2];
          /* Insert a row into the database table behind IN1_TBL */
          &rec.Insert();
       End-While;
       &myfile.Close();
    End-If;

  • What happens when we run any Application Engine process

    Hi All,
    I have been trying to find the exact flow after we run an Application Engine program, either from PIA or from the command line. I found a few resources that explain what happens behind the scenes, but because of the mixed explanations I am still not clear about it.
    Could anyone please explain, step by step, what happens behind the scenes after we run an Application Engine program?
    To be more specific: when we run it, the process name along with other parameters (run control, session ID, process instance) must be written into some table, and then Process Monitor displays that process and updates its status frequently.
    I hope you get my point.
    Thanks in advance!!

    Please query the records PS_PMN_PRCSLIST and PS_AERUNCONTROL to find data about an Application Engine program's status.
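    For example, a rough sketch of the kind of queries you could run; the column names used here (PRCSINSTANCE, PRCSNAME, RUNSTATUS, PROCESS_INSTANCE) are from memory, so verify them against your PeopleTools release, and MYAPPL / 1234 are placeholder values:
    SELECT PRCSINSTANCE, PRCSNAME, RUNSTATUS
      FROM PS_PMN_PRCSLIST
     WHERE PRCSNAME = 'MYAPPL';
    SELECT *
      FROM PS_AERUNCONTROL
     WHERE PROCESS_INSTANCE = 1234;
    Broadly, when you click Run a request row is written to the Process Scheduler request tables, the scheduler picks it up and starts the program, and Process Monitor keeps reading the status back from those tables.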

  • Parallel processing of mass data : sy-subrc value is not changed

    Hi,
    I have implemented parallel processing of mass data using "STARTING NEW TASK". In my function module I handle the exceptions and finally raise an application-specific classic exception, to be handled in my main report program. Somehow sy-subrc is not changed and always returns 0, even if the exception is raised.
    Can anyone help me with this?
    Thanks & Regards,
    Nitin

    Hi Silky,
    I've built a block of code to explain this.
      DATA: ls_edgar TYPE zedgar,
            l_task(40).
      DELETE FROM zedgar.
      COMMIT WORK.
      l_task = 'task1'.
      ls_edgar-matnr = '123'.
      ls_edgar-text = 'qwe'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task2'.
      ls_edgar-matnr = 'abc'.
      ls_edgar-text = 'def'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task3'.
      ls_edgar-matnr = '456'.
      ls_edgar-text = 'xyz'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
    *&      Form  f_go
    FORM f_go USING p_task TYPE clike. " receives the task name from ON END OF TASK
      RECEIVE RESULTS FROM FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' EXCEPTIONS err = 2.
      IF sy-subrc = 2.
    *this won't affect the LUW of the received function
        ROLLBACK WORK.
      ELSE.
    *this won't affect the LUW of the received function
        COMMIT WORK.
      ENDIF.
    ENDFORM.                    "f_go
    and the function is:
    FUNCTION z_edgar_commit_rollback.
    *"*"Interface local:
    *"  IMPORTING
    *"     VALUE(LINE) TYPE  ZEDGAR
    *"  EXCEPTIONS
    *"      ERR
      MODIFY zedgar FROM line.
      IF line-matnr CP 'a*'.
    *comment raise or rollback/commit to test
    *    RAISE err.
        ROLLBACK WORK.
      ELSE.
        COMMIT WORK.
      ENDIF.
    ENDFUNCTION.
    ok.
    In your main program you have a Logical Unit of Work (LUW), which consists of an application transaction and is associated with a database transaction. Once you start a new task, you're creating an independent LUW with its own database transaction.
    So if you do a commit or rollback in your function, the effect is only on the records you're processing in the function.
    There is a way to capture, in the main LUW, the moment when this independent LUW concludes: that is the PERFORMING ... ON END OF TASK addition. In that form you can get the result of the function, but you cannot commit or roll back the function's LUW, since that has already happened implicitly when the function concluded. You can test this by commenting the code I've supplied accordingly.
    So, if you want to roll back the function's LUW, you had better do it inside the function.
    This may not match your question exactly, but maybe it will lead you onto the right track. Give me more details if it doesn't.
    Hope it helps,
    Edgar

  • Help needed creating export file from a file layout with Application Engine

    The following is what I would like to do:
    - Read a record from a PS view
    - Manipulate the data as needed
    - Write the fields out to a file as defined by a File Layout
    - Repeat until no more records are found
    I have created the PeopleSoft Application Engine action listed below. It receives an error "BCUNIT is not a property of class File".
    Local Record &rec1;
    Local File &myFile;
    Local SQL &sQL1;
    /* Create instance of Record */
    &rec1 = CreateRecord(Record.W9M_MBSCRSE_VW);
    /* Instantiate the Output File */
    &myFile = GetFile("c:\temp\help_me.txt", "A", %FilePath_Absolute);
    If &myFile.IsOpen Then
    If &myFile.SetFileLayout(FileLayout.TACOURIN) Then
    /* Create SQL object to populate rowset */
    &sQL1 = CreateSQL("%Selectall(:1) Where INSTITUTION = :2", &rec1, W9M_MBSCRSE_AET.INSTITUTION);
    /* Cycle through the records */
    While &sQL1.Fetch(&rec1)
    /* I know this section is not coded correctly but I'm not sure how to fix it */
    &myFile.BCUNIT = "1";
    &myFile.BCTCD = &rec.W9M_MBS_TERM_CODE;
    &myFile.BCTYR = &rec.W9M_MBS_TERM_YEAR;
    &myFile.BCDPTN = &rec.ACAD_GROUP;
    &myFile.BCCOUR = substring(&rec.CATALOG_NBR,2,5);
    &myFile.BCSEC = &rec.CLASS_SECTION;
    &myFile.WriteRecord();
    End-While;
    Else
    /* Process FileLayout Error here */
    End-If;
    Else
    /* Process File Open Error here */
    End-If;
    &myFile.Close();
    There are probably a lot of things wrong with this approach and if you could provide some guidance and/or  corrections to the above logic I would greatly appreciate it.
    Another approach?
    After doing a bunch of reading on Application Engine maybe my approach is incorrect. Perhaps I should be doing something like the following:
    - Read a record from a PS view
    - Populate a temporary table manipulating data as it is inserted (Temp table is named according to the file layout fields?)
    - Fetch the records from the temp table and write the record to the file layout.
    - Repeat until no more records are found
    Is this approach better and designed correctly? If not, could you recommend how it should be done? Would the population and reading of the Temp table be done in separate actions or within the same action? Do you know of an Application Engine program that can be used as an example with "like" processing?
    As you can probably tell I haven't used Application Engine before and my goal is to start out on the right path. Thank you for any direction and input that you can provide.
    Steve

    I did and my initial logic was based upon them. I don't see where it shows how to manipulate the data before writing it to the file layout fields. Maybe you can send me a link to that section?
    I was hoping that I would be able to reference the file layout fields directly to allow for manipulating the field values. Re-reading the file layout section and the application engine PeopleBooks I believe I need to create a temporary record which matches the file layout fields; i.e., the second alternative that I listed. Then, make my updates to the temp record fields as I load them. Then, load them to the file layout as a row.
    I'm not sure how this would break down in Application Engine; would the insert into the temp table and the writerecord be different steps/actions, etc.
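    In case it helps, here is a minimal, untested sketch of that second alternative in PeopleCode. It assumes a work record, called FILE_OUT_WRK here, that you would create so that its fields mirror the TACOURIN file layout; every name that is not from your post is a placeholder:
    Local Record &recIn, &recOut;
    Local File &myFile;
    Local SQL &sql;
    &recIn = CreateRecord(Record.W9M_MBSCRSE_VW);
    /* FILE_OUT_WRK is a hypothetical work record whose fields mirror the TACOURIN layout */
    &recOut = CreateRecord(Record.FILE_OUT_WRK);
    &myFile = GetFile("c:\temp\help_me.txt", "A", %FilePath_Absolute);
    If &myFile.IsOpen Then
       If &myFile.SetFileLayout(FileLayout.TACOURIN) Then
          &sql = CreateSQL("%SelectAll(:1) WHERE INSTITUTION = :2", &recIn, W9M_MBSCRSE_AET.INSTITUTION);
          While &sql.Fetch(&recIn)
             /* Manipulate the data here, then copy it into the work record */
             &recOut.BCUNIT.Value = "1";
             &recOut.BCTCD.Value = &recIn.W9M_MBS_TERM_CODE.Value;
             &recOut.BCCOUR.Value = Substring(&recIn.CATALOG_NBR.Value, 2, 5);
             /* WriteRecord writes the work record to the file using the layout */
             &myFile.WriteRecord(&recOut);
          End-While;
       End-If;
       &myFile.Close();
    End-If;
    With this pattern the manipulation happens on the in-memory work record before each WriteRecord, so a temporary database table is only worth adding if the volumes are large or you need set-based processing in between.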

  • How to achieve parallel processing in a single request?

    Hi all,
    I have a method in a Session EJB that will perform some business logic before it returns an answer to the client. The logic it performs is to collect data from the application's database and two external systems, before sending all the data to a third external system to get a response and send it back to the client. Each external system is quite slow, so I would like to do all the data collection concurrently, with parallel processing. How should I handle this? I'm not allowed to create my own threads in EJBs. Can I use an MDB in some way? To the calling client this should be a synchronous call...
    Grateful for any suggestions
    Cheers
    Anders =)

    Usually, the request is received by a component located in the web container, for example via an HTTP request (including Web Services). This component is able to start threads to allow parallel processing. Now, if for some reason the request arrives directly at the EJB level and you cannot move its receiver to a web component, I think JMS is not a viable solution, because you would switch to asynchronous processing and you have no way to make your EJB wait for the responses while preserving the client request (waiting implies programmatic life-cycle management, which is forbidden in the EJB container). Maybe a resource adapter (JCA) can bring a solution. A resource adapter acts as a data source (a data source is a specialization of a resource adapter), and thus it is a logical way to implement an adapter to an external, possibly non-J2EE, resource, as the name implies :) But I don't have enough knowledge of JCA to be sure of this.
    Hope it helps.
    Bruno Collet
    http://www.practicalsoftwarearchitect.com

  • Parallel Processing in Oracle 10g

    Dear Oracle Experts,
    I would like to use the parallel processing feature on my production database running on a Unix box.
    The number of CPUs in each node is 8, and it is a RAC database.
    Before going for this option I would like to clarify certain things regarding parallel processing.
    1. According to my server specification, how much DOP can I specify?
    2. Which option for setting parallel is good:
    a. using 'alter table A parallel 4', or passing parallel hints in the SQL statements?
    3. We have batch processing jobs which load data into the tables from flat files (24*7) using SQL*Loader. Is it possible to parallelize this operation, and is there any negative effect if parallel is enabled?
    4. Query or DML - which one will perform best with the parallel option?
    5. What are the negative issues if the parallel option is enabled?
    6. What are the things to be taken care of while enabling the parallel option?
    Thanks in Advance
    Edited by: user585870 on Jun 7, 2009 12:04 PM

    Hi,
    first of all, you should read [Using Parallel Execution|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#DWHSG024] in the documentation for your version - almost all of these topics are covered there.
    1. According to my server specification, how much DOP can I specify?
    It depends not only on the number of CPUs. More important factors are the settings of PARALLEL_MAX_SERVERS and PARALLEL_ADAPTIVE_MULTI_USER.
    2. Which option for setting parallel is good: 'alter table A parallel 4', or parallel hints in the SQL statements?
    It depends on your application. When you set PARALLEL on a table, all SQL dealing with that table is considered for parallel execution. So if it is normal for your app to use parallel access to that table, it's OK. If you want to use PX on a limited set of SQL, then hints or session settings are more appropriate.
    3. We have batch processing jobs which load data into the tables from flat files (24*7) using SQL*Loader. Is it possible to parallelize this operation, and is there any negative effect if parallel is enabled?
    Yes, refer to the documentation.
    4. Query or DML - which one will perform best with the parallel option?
    Both may take advantage of PX (with some restrictions for parallel DML), and both may run slower than the non-PX versions.
    5. What are the negative issues if the parallel option is enabled?
    1) An object checkpoint happens before starting a parallel FTS (true for >= 10gR2; before that version a tablespace checkpoint was used).
    2) More CPU and memory resources are used with PX - this may be both a benefit and an issue, especially with concurrent PX.
    6. What are the things to be taken care of while enabling the parallel option?
    Read the documentation - it contains almost all you need to know. Since you are using RAC, you should not forget about the method of PX slave load balancing between nodes. If you are on 10g, refer to the INSTANCE_GROUPS / PARALLEL_INSTANCE_GROUP parameters; if you are using 11g, then properly configure services.
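    To illustrate the two options from point 2, a small sketch (the table, alias, and column names are made up):
    -- option a: set a default degree of parallelism on the table itself
    ALTER TABLE sales_fact PARALLEL 4;
    -- option b: keep the table serial and request parallelism per statement
    SELECT /*+ PARALLEL(s, 4) */ COUNT(*) FROM sales_fact s;
    -- parallel DML additionally has to be enabled at session level
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO sales_archive t
    SELECT * FROM sales_fact;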

  • How to define a "leading" random number in an InfoSet for parallel processing

    Hello,
    in Bank Analyzer we use an InfoSet which consists of a selection across 4 ODS tables to gather data.
    No matter which PACKNO fields we check or uncheck in the InfoSet definition screen (transaction RSISET), the parallel framework always selects the same PACKNO field from one ODS table.
    Unfortunately, the table that is selected by the framework is not suitable, because our "leading" ODS table, which holds most of our selection criteria, is another one.
    How can we "convince" the parallel framework to select our leading table for the specification of the PACKNO in addition (this would be about 20 times faster due to better selection options)?
    We even tried to assign "alternate characteristics" to the PACKNOs we do not like to use, but it seems that note 999101 only fixes this for non-system fields.
    For the random number, however, a different form routine is used in /BA1/LF3_OBJ_INDEX_READF01: fill_range_random instead of fill_range.
    Has anyone managed to assign the PACKNO of his choice to the InfoSet selection? How?
    Thanks in advance
    Volker

    Well, it is a bit more complicated.
    ODS one, which the parallel framework selects as the one to deliver the PACKNO, is about equal in size (~120 GB each) to ODS two, which has two significant fields that cut down the amount of data to be retrieved.
    Currently we execute the generated SQL in the best possible manner (by faking some statistics).
    The problem is that I'd like to have a statement that has the PACKNO in that very same table.
    PACKNO is a random number generated especially for use in parallel processing.
    The job starts about 100 slaves.
    Each slave gets a packet to be processed from the framework, which is internally represented by a BETWEEN clause on this PACKNO. This is joined against ODS2, and then the selective fields can be compared, with the result that 90% of the already fetched rows can be discarded.
    Basically it goes like this:
    select ...
    from
      ods1 T_00,
      ods2 T_01,
      ods3 T_02,
      ods4 T_03
    where
    ... some key equivalence join-conditions ...
    AND  T_00.PACKNO BETWEEN '000000' and '000050' -- very selective on T_00
    AND  T_01.TYPE = '202'  -- selective Value 10% on second table
    I'm trying to change this to:
    AND  T_01.PACKNO BETWEEN '000000' and '000050'
    AND  T_01.TYPE = '202'  -- selective Value 10%
    so I can use a combined index on T_01 (TYPE, PACKNO).
    This would be about 10 times more selective on the driving table and, because T_00 would then be joined only for the rows I actually need, a calculated 20-30 times faster overall.
    It really gives a boost when I try this in SQL*Plus.
    Hope this clarifies things a bit.
    The problem is that I cannot change the code, either the part that builds the packets or the part that executes the application.
    I need to change the InfoSet so that the framework decides to build proper SQL with T_01.PACKNO instead of T_00.PACKNO.
    Thanks a lot
    Volker

  • Parallel processing using ABAP objects

    Hello friends,
    I had posted in the performance tuning forum regarding a performance issue; I am reposting it here as it involves OO concepts.
    The link to the previous posting:
    Link: [Independent processing of elements inside internal table]
    Here is the scenario:
    I have an internal table with 10 records (independent of one another), and I need to process them. The processing of one record does not have any influence on another. With a loop, the performance issue is that the 10th record has to wait until the first 9 records have been processed, even though there is no dependency on the output.
    Could someone suggest a way to improve the performance?
    If I am not clear with the question, I will explain it further:
    An internal table has 5 numbers, say (1, 3, 4, 6, 7), and we are trying to find the square of each number.
    In a loop, finding the square of 7 has to wait until 6 is completed, which is a waste of time.
    This is related to parallel processing; I have referred to the parallel processing documents, but I want to do this conceptually.
    I am not using the conventional procedural paradigm but object orientation. I have a method which performs this action. What am I supposed to do in that regard?
    Comradely ,
    K.Sibi

    Hi,
    As exemplified by Edward, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a Function Module that you can then call with the addition "STARTING NEW TASK". Optionally, you can define a Method that will process the results of the Function Module that is executed asynchronously, as demonstrated as well in Edward's program.
    You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
    Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
    Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
          IF md_parallel IS NOT INITIAL.
            IF md_parallel_init IS INITIAL.
    *----- Server Group not initialized yet => Initialize it, and get the number of tasks available
              CALL FUNCTION 'SPBT_INITIALIZE'
            EXPORTING
              GROUP_NAME                           = ' '
            IMPORTING
              max_pbt_wps                          = ld_max_tasks
              free_pbt_wps                         = ld_free_tasks
                EXCEPTIONS
                  invalid_group_name                   = 1
                  internal_error                       = 2
                  pbt_env_already_initialized          = 3
                  currently_no_resources_avail         = 4
                  no_pbt_resources_found               = 5
                  cant_init_different_pbt_groups       = 6
                  OTHERS                               = 7.
              md_parallel_init = 'X'.
            ELSE.
    *----- Server Group initialized => check how many free tasks are available
    *----- in the Server Group for parallel processing
              CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
                IMPORTING
                  max_pbt_wps                 = ld_max_tasks
                  free_pbt_wps                = ld_free_tasks
                EXCEPTIONS
                  internal_error              = 1
                  pbt_env_not_initialized_yet = 2
                  OTHERS                      = 3.
            ENDIF.
            IF ld_free_tasks GE 2.
    *----- We have at least 2 remaining available tasks => reserve one
              ld_taskid = ld_taskid + 1.
            ENDIF.
        ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
    In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because your program has completed before the asynchronous tasks for entries 9 and 10 in your internal table.
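    For completeness, a minimal sketch of that wait pattern (the variable and form names are mine, not from any standard include):
    DATA: gv_snd_jobs TYPE i,   " tasks started with STARTING NEW TASK
          gv_rcv_jobs TYPE i.   " tasks whose ON END OF TASK callback has run
    * The form registered via PERFORMING f_done ON END OF TASK increments
    * gv_rcv_jobs for each completed task. After starting all tasks, block
    * until every one of them has come back:
    WAIT UNTIL gv_rcv_jobs >= gv_snd_jobs.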
    Given the complexity of Parallel Processing, you would only consider it in a customer program for situations where you have many (ie, thousands, if not tens of thousands) records to process, that the processing for each record tends to take a long time (like creating a Sales Order or Material via BAPI calls), and that you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • Parallel Process Model vs Asynchronous Sequence

    I've been studying the features of TestStand, and learning how to use it for about a month, so still very new to the environment (although I have been using Labview and Veristand pretty heavily for about a year).  I wanted to get a little clarification on the use of the different process models, because I think I may be misunderstanding some of the terminology.  
    Here is a little background of my project:
    I have a Labview VI that I created to interface with a remote target (emulator).  I previously used the VI to run tests manually, and would like to use it as a code module in TestStand so that I can run automated tests.  I intend to use the same VI repeatedly throughout the test sequence.  The functionality of the system is dependent on maintaining constant communication with the emulator, so I can't be opening and closing the code module repeatedly.  Once it is open, it has to stay open and continually communicate  (I'm hoping I will not have to create "wrapper" code modules to be the go-between with my current VI).  Breaking communication would cause most of the test results to become invalid.  For these reasons, I had chosen to call the VI as a code module in a sub sequence so that it can be run asynchronously, outside of the main sequence.
    Now, as I learn more about the details of TestStand, I am introduced to the concept of "Process Models".  I had initially been using the default Sequential Process model, but would like to know if I should switch to the Parallel Process model.  From what I can tell, the parallel process model is used when testing multiple UUTs, or running tests in parallel.  Is this correct?  To clarify my situation, I will only be testing 1 UUT, I will only be using 1 code module, and I will be running several test steps with that 1 code module.  I will need to continually pass data back and forth with the code module as it runs in parallel to the main sequence, and there will likely be several sub sequences called during the process, so that I can maintain modularity with my testing.
    So the question is, do I switch to the Parallel Process Model, or should I continue with the Sequential Process Model and the asynchronous sequence to run my code module in parallel?  Thanks much.
    GSinMN          

    Hey GSinMN,
    Are you wanting to run your test steps in parallel with each other, or will that need to be a sequential process? The Parallel model is probably not the right choice for this application. The purpose of the Parallel model is to run the same test program on multiple UUT's at once, as you mentioned. Since you are just testing with a single UUT, the best approach would be to run the emulator communication module asynchronously, as you mentioned. You can easily pass data to this code module using a TestStand queue or a similar synchronization object.
    Daniel E.
    TestStand Product Support Engineer
    National Instruments

  • Parallel process with a queue and file?

    Hello, first of all sorry for my bad English.
    I have been working for days on my project, where I have to demonstrate parallel processes while transferring information in different ways, and the problems involved (like timing and so on).
    I chose to transmit information to a parallel process by (1) a queue and (2) a file (.txt). (Other ways are welcome - do you have one or two other ideas?)
    To solve this problem I made three while loops. The first one is the original, where the original information (as a signal) is created and sent by queue and by file to the other two while loops, where this information is evaluated to recreate the same signal.
    In the end you can compare all the signals to see whether they are the same, so that you can answer the question about the parallelism of the processes.
    But in my VI file I have some problems:
    the version with the queue works pretty well - it's almost parallel,
    but the version with the file doesn't run in parallel, and I have no idea how to solve it.
    I'm a newbie.
    Can someone correct my file so that both versions (file and queue) run in parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi (23 KB)

    A queue is technically never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include Events, Action Engines, Notifiers (and why not web services).
    Due to limitations in the disk system you can only read/write one file at a time from one process, so I wouldn't recommend the file approach. If you use a RAM disk it might work.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • How to model a parallel process in BPML

    I've just started using BPML to model a process for CAF. In the process, there will be a block of steps that will have to be repeated for a list of employees, in parallel.
    I want to model the parallel process in a BPML diagram; I've had a look through some introductory texts, but cannot see a way that I can represent a parallel process. Also, the exact number and identity of the employees is unknown at run time.
    Can anybody help with this?

    Hello Arun,
    I am using Microsoft Visio with a BPMN stencil:
    http://www.workflow-research.de/Downloads/BPMN/Frapu-BPMN_Template(v1.1).zip
    I am using it in accordance with the SAP white paper "Guidelines for Specifying Composite Applications.pdf"
    Tony.

  • Error OLAP processing one application - SAP BPC 7.0 MS

    Hi everybody,
    This is our problem: when I try to process my application called "Consolidation", it always fails and the following error message appears:
    "Cube process: Errors in the OLAP storage engine. The Attribute key cannot be found: Table dbo_tblFactConsolidation, column ACCOUNT, value: E1500."
    We tried to launch a full process of the "ACCOUNT" dimension, but it also fails.
    Could anybody tell me what is happening?
    Thanks for your help.

    Juan,
    Here is the solution for solving this issue.
    <Reason>
        - Your fact table has an invalid record. In your case, it is 'E1500'.
        - It means the Account dimension doesn't have an E1500 member, but your fact table has such records.
          Therefore, Analysis Services can't process it.
        - Usually this happens when a user loads data from an outside source into BPC without validation.
        - If you are using the 'make dimension' package in 5.1 SP8, it might be a bug. I saw that issue in that version, but this error doesn't happen when you create the dimension from an Excel worksheet through the BPC admin console.
    <Solution>
        - Select the records from the fact tables that have 'E1500' in the ACCOUNT column.
        - Delete them.
    <Note>
        - Due to the way MS Analysis Services detects errors, invalid members will be detected one by one.
          Therefore, you may need to repeat this step again and again.
        - One way to avoid this is to find the invalid members between the fact and dimension tables using a join query.
          Here is a sample SQL query:
          select a.ID, b.ACCOUNT
          from mbrAccount as a
          right join tblFactFinance as b on a.ID = b.ACCOUNT
          where a.ID is null
          It will show all the invalid member IDs in the ACCOUNT column of the fact table, so that you can figure out which accounts were wrong. It can save you a lot of time by avoiding processing the cube again and again.
    I hope it will help you.
    James Lim

  • Parallel Processing in ABAP

    Hi,
    I have an internal table that has object references in it. Each item in the table is independent of the others. I want to extract info from each object and convert it into an internal table so that I can pass it to an RFC function.
    So how can I do this extraction of the info from the objects in the internal table in parallel?
    To use STARTING NEW TASK, I created a function module that is RFC-enabled... but then I can't pass an object reference to this module. So how can I do this?
    Also, I read that this function module call will create a main or external session, which has a limit of 6 per user session. Is this correct?
    If the above can be done, I also want to restrict the number of parallel processes being executed at any point in time to 5 or so.
    thanks in advance
    Murugesh

    Hi Murugesh,
    Parallel processing can be implemented in the application reports that are to run in the background. You can implement parallel processing in your own background applications by using the function modules and ABAP keywords.
    Refer following docs.
    <b>Parallel Processing in ABAP</b>
    /people/naresh.pai/blog/2005/06/16/parallel-processing-in-abap
    <b>Parallel Processing with Asynchronous RFC</b>
    http://help.sap.com/saphelp_webas610/helpdata/en/22/0425c6488911d189490000e829fbbd/frameset.htm
    <b>Parallel-Processing Function Modules</b>
    http://help.sap.com/saphelp_nw04s/helpdata/en/fa/096ff6543b11d1898e0000e8322d00/frameset.htm
    Dont forget to reward pts, if it helps ;>)
    Regards,
    Rakesh.
