Use of global data in transformation with parallel processing

Hi,
During an upgrade: in 3.5 I have a global variable in a transfer rule routine. The global variable keeps a running count of an ID.
The global variable is used in every data package, because the corresponding InfoPackage is set to 'PSA only', where the help says:
"If you select this processing option and then request processing is done serially during loading, the global data are maintained as long as the process with which the data were processed remains.
In 7.0 I also use a global variable for the same purpose, but an InfoPackage is no longer available, because we use DTPs now.
How can I store my global variable so that all parallel processes of the data packages use the same global variable? Or can I set an option so that the DTP does serial processing, similar to the InfoPackage setting in 3.5?
As a workaround I increased the package size to a very large figure, but I would be more comfortable with a sounder solution.
thanks

Hi Max,
Try declaring your global variable in the start routine, between the lines:
*$*$ begin of global - insert your declaration only below this line  *-*
*$*$ end of global - insert your declaration only before this line   *-*
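For example, a minimal sketch (gv_id_counter is only an illustrative name; note that, as the help text you quoted says, each process keeps its own copy of this global data only for as long as the process lives):
DATA gv_id_counter TYPE i.  " declared between the markers above
ADD 1 TO gv_id_counter.     " e.g. in the start routine, counts across the data packages of one process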
Best Regards.
Javier Gómez

Similar Messages

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We are running the APO DP process chain with parallel processing in our company, and we are experiencing some issues regarding the run time of the process chain. We need your help on the points below:
    - What are the ways we can optimize the process chain run time?
    - Which special points do we need to take care of when parallel processing profiles are used in the process chain?
    - Is there a specific sequence to be followed for the different processes in the process chain, if some best practice exists?
    - Are there any notes suggesting ways to improve system performance for APO version 7 with enhancement packs 1 and 2?
    Any help will be really appreciated.
    Regards

    Hi Neelesh,
    There are many ways to optimize the performance of process chains (background jobs) in an APO system.
    First, I would recommend identifying the pain areas - the steps that complete with the longest runtimes. Each kind of step then has its own approaches for decreasing the runtime.
    You may end up with steps such as InfoPackage executions, DTPs, or DP mass processing jobs that run long. Target each of them differently and find the appropriate ways to optimize. At the same time, the approach you follow should also be feasible from a Basis perspective (system load and utilization).
    Coming to parallel processing: you can use it for many different kinds of jobs and explore it further from there - loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY etc.
    Check the link below for more info:
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj

  • Query0;Runtime error time limit exceeded,with parallel processing via RFC

    Dear experts,
    I have created a report on the 0CCA_C11 cube. When I run my report with a cost center group that contains many cost centers, the report executes for a long time and at last gives the messages
    "Error while reading data; navigation is possible" and
    "Query0; Runtime error time limit exceeded, with parallel processing via RFC".
    Please tell me what the problem is and how I can solve it.
    Regards
    Shweta

    Hi,
    Execute the query in RSRT with the Execute and Debug option.
    Select the SQL statements option to know where exactly it's taking time.
    Let us know the details once you're done.
    Reg
    Pra

  • Data transfer transform with Oracle

    I am using a data transfer transform within a simple data flow.
    Source --> Query --> data transfer --> two queries --> two target tables.
    The transfer type is set to Table. The table is being created in an Oracle 9i datastore. I have the 'Drop and re-create before loading' box checked. I execute my simple job and everything runs fine. When I go to my database to look at the contents of the table created within the Data Transfer transform, I cannot find the table (my user has 'select any table' privileges).
    Where does the table go? Is it immediately dropped after it is used? If so, is there a way to override this? Having the table/contents hang around until the next time I run the job is extremely helpful for debugging.
    Thnx!

    Update on this:
    I created a temp table manually in the database first. The temp table has the same name as specified in the Data Transfer transform.
    I unchecked the "Drop and re-create before loading" box.
    I then added a TRUNCATE pre-load command to empty the temp table before loading.
    This method works fine.
    The 'Drop and re-create before loading' option seems to work differently with the Data Transfer transform than with template tables, which is less than intuitive. It's more like 'drop, re-create and drop'. Also, I could not find any details about this in the tech manual. To be fair, I am using version 11.7.2.3; maybe this has been updated in a more recent release.
    Cheers.

  • What is diff between Global and 2nd Part Global Data in Transformation

    Hi,
    Does anyone know the difference between declaring global data in the 'begin of global' data area in the class declaration and in the 'begin of 2nd part global' area outside class...endclass in the ABAP routines of a transformation?
    Thanks,
    Mark

    Hello,
    In the first global part you can write the declarations or code that you want to be reachable globally
    in the transformation. The 2nd global part is used for transformations that were migrated from an update or transfer rule: the routines used there are automatically generated into the 2nd global part.
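    As a rough sketch (the marker texts are abbreviated from memory and the declaration is just an example), the generated routine pool looks like this:
    CLASS routine DEFINITION.
    *$*$ begin of global - insert your declaration only below this line  *-*
    * declarations here are visible in all routines (methods), e.g.:
      DATA gv_counter TYPE i.
    *$*$ end of global - insert your declaration only before this line   *-*
    ENDCLASS.
    CLASS routine IMPLEMENTATION.
    * ... start routine, end routine, field routines ...
    ENDCLASS.
    *$*$ begin of 2nd part global - insert your code only below this line  *
    * outside CLASS...ENDCLASS - the routines migrated from 3.x rules are
    * generated here as FORMs
    *$*$ end of 2nd part global - insert your code only before this line   *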
    Best Regards,
    Paula Csete

  • Using Bloomberg Professional Excel Add-In with Parallels and Vista

    Has anyone successfully used the Bloomberg Excel Add-In with Excel 2007 and Windows Vista on Mac OS X 10.5.x? Bloomberg Professional works fine, but I can't get any data to populate with the BDS(), BDH() or BDP() functions.

    Check the file permissions on c:\winnt\system32\msvcr71.dll. This needs to be 'Read/Execute' for 'Everyone'. The add-in installation only gives permissions to the user who installed the add-in, which causes problems when other users need to access the add-in. (Similar problem on Citrix/TS.)
    James Archer

  • Issues with parallel processing in Logical Database PCH and PNP

    Has anyone encountered issues when executing programs in parallel that utilize the logical database PCH or PNP?
    Our scenario is the following:
    We have 55 concurrent jobs that execute a program using the logical database PCH at a given time. We load the PCHINDEX table with the code below.
          " fill the object range for the logical database PCH
          wa_pchindex-plvar     = '01'.        " plan variant
          wa_pchindex-otype     = 'S'.         " object type: position
          wa_pchindex-objid_low = index_objid. " low end of the object ID range
          APPEND wa_pchindex TO pchindex.
    We have seen instances where, when the program is executed in parallel with each process having its own range of position IDs, some positions are dropped, or some are added that are outside the range of the given process.
    For example:
    process 1 has a range of position IDs 1-10
    process 2 has a range of position IDs 11-20
    process 3 has a range of position IDs 21-30
    Process 3 drops position 25 and adds position 46.
    Has anyone faced a similar issue?
    Thanks for your help.
    Best Regards,
    Duke

    Hi,
    first of all, you should read [Using Parallel Execution|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#DWHSG024] in the documentation for your version - almost all of these topics are covered there.
    1. "According to my server specification, how much DOP can I specify?" - It depends not only on the number of CPUs. More important factors are the settings of PARALLEL_MAX_SERVERS and PARALLEL_ADAPTIVE_MULTI_USER.
    2. "Which option for setting parallelism is good - using 'alter table A parallel 4' or passing parallel hints in the SQL statements?" - It depends on your application. When you set PARALLEL on a table, all SQL dealing with that table is considered for parallel execution. So if parallel access to that table is the normal case for your app, it's OK. If you want to use PX on a limited set of SQL, then hints or session settings are more appropriate.
    3. "We have batch processing jobs which load data into the tables from flat files (24*7) using SQL*Loader. Is it possible to parallelize this operation, and are there negative effects if parallel is enabled?" - Yes, refer to the documentation.
    4. "Query or DML - which one performs best with the parallel option?" - Both may take advantage of PX (with some restrictions for parallel DML), and both may run slower than their non-PX versions.
    5. "What are the negative issues if the parallel option is enabled?" - 1) An object checkpoint happens before starting a parallel FTS (true for >= 10gR2; before that version a tablespace checkpoint was used). 2) More CPU and memory resources are used with PX - this may be both a benefit and an issue, especially with concurrent PX.
    6. "What should be taken care of while enabling the parallel option?" - Read the documentation - it contains almost all you need to know. Since you are using RAC, you should not forget about the method of PX slave load balancing between nodes. If you are on 10g, refer to the INSTANCE_GROUPS/PARALLEL_INSTANCE_GROUPS parameters; if you are using 11g, then properly configure services.

  • ERROR while uploading the data into ztable with background processing

    Hi gurus,
    I am trying to upload data from an Excel file into an internal table, and it works fine in dialog.
    But if I try to upload the data with background processing, the job log in SM37 says "error during the upload of clipboard contents".
    Regards,
    Sri

    Hi,
    FM GUI_UPLOAD doesn't work in background; use a dataset to read the file from the application server instead.
    Refer to the code below:
    *--Local variables
      DATA : l_file  TYPE string,
             l_line  TYPE string,
             l_index TYPE sy-tabix.
    *--Clear
      CLEAR : l_file.
      l_file = p_ipfile.
    *--Read the data from the application server file.
      OPEN DATASET l_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
      IF sy-subrc NE 0.
    *--Error in opening file - stop here instead of reading on
        MESSAGE i368(00) WITH text-005.
        RETURN.
      ENDIF.
    *--Get all the records from the specified location.
      DO.
        READ DATASET l_file INTO l_line.
        IF sy-subrc NE 0.
          EXIT.
        ELSE.
          SPLIT l_line AT cl_abap_char_utilities=>horizontal_tab
                          INTO st_ipfile-vbeln
                               st_ipfile-posnr
                               st_ipfile-edatu
                               st_ipfile-wmeng.
          APPEND st_ipfile TO it_ipfile.
        ENDIF.
      ENDDO.
    *--Release the file handle again.
      CLOSE DATASET l_file.
    Regards,
    Prashant

  • Need help with parallel process in background; not able to call FM in bgnd

    Hello,
      I have been trying for 2 days to solve the issue of parallel processing in background without using FPP.
    I want to call a function module or class method in a new task, but have it processed by a background work process and not a dialog one.
    I searched many websites, but everyone suggested 'call function in background task'. The fact is, the function is processed by a dialog process even in this case.
    I want to loop at a table and call the FM or class method inside each loop iteration.
    Kindly suggest how I can call a function or class method in a new task on every call and have it processed in background. The pattern I tried is sketched below for reference.
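    A simplified sketch of what I tried (ZMY_RFC_FM stands for any RFC-enabled function module of mine; lt_data/ls_data are placeholders):
      LOOP AT lt_data INTO ls_data.
        " the call is only registered here, not executed yet
        CALL FUNCTION 'ZMY_RFC_FM'
          IN BACKGROUND TASK
          EXPORTING
            iv_key = ls_data-key.
      ENDLOOP.
      " the queued calls are dispatched at COMMIT WORK - but they are
      " still executed by dialog work processes, which is my problem
      COMMIT WORK.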
    thanks

    Balaji,
    Is the name of the button between single or double quotes?
    Regards,
    Dan
    Blog: http://DanielMcGhan.us/
    Work: http://SkillBuilders.com/

  • Using the new Data/Services generation with server push

    I have moved my Flex/BlazeDS app (which uses the Blaze AMFChannel for request/response messages and the StreamingAMFChannel for server-push messaging) from Flex Builder 3 to Flash Builder 4. The new Data/Services feature works fine for generating service classes and value objects for the request/response messages; the question is whether it can provide any help setting up a messaging consumer and value objects for the pushed data.
    rick holland

    I am so sorry - I meant AirPort Express, not Extreme - but it's moot since I missed the point of your post: you want two outputs, and one of them not digital.
    For what it's worth - I still think the Express might be your best bet.
    http://www.apple.com/airportexpress/airtunes.html
    It has combined digital/analog out - you plug in one and it switches the output depending on which connection is inserted. I would expect a very short wait until everything accepts digital input - especially for people who want good sound quality. If you have to go digital-to-analog outside of the receiver/amp (where it really belongs), why not let it be the Express sitting right next to where you want the second "receiver" to be...

  • Using a flash drive stick USB with parallel on imac OS 10.4.10

    When I plug a SanDisk Cruzer Micro 4 GB USB flash drive into the USB port, OS X recognizes it, and Windows XP in Parallels 3 does too, but when you click on 'do you want to bring it in', nothing happens. How do I get Windows to accept the flash drive (stick) so I can see and use it?

    Since Parallels is not an Apple product, you're more likely to receive an accurate response by posting your question on the Parallels forums:
    http://forums.parallels.com

  • No increase in speed with parallel processing on quad-core

    I'm using the NI Vision VI "IMAQ Correct Calibrated Image" on quite a large image. This takes about 0.09 seconds to complete and only uses one core of my computer (25% CPU time). I always need to correct two images, so I made two parallel execution lines, expecting that I would get 50% CPU usage (i.e. two cores used) and still 0.09 seconds to complete.
    However, it takes 0.18 seconds to complete, with 25% CPU usage. It seems the code is not executing in parallel...
    I checked the Vision VI, and it's simply a wrapper around a DLL call. I think I remember hearing once that DLL calls are single-threaded... Is that the reason why I don't see any increase in speed? Is there a way around this?
    Thanks in advance.

    That's no good news, then.
    But maybe you can introduce parallelism on your own?
    Depending on the algorithm you are using, you can split up the image yourself, call the algorithm for each fraction, and then put the (altered) fractions together into a new image again. I have not tried something like this, so in the worst case it might even slow things down.
    Nevertheless, if the image is large enough (so the 'split up' and 'merge' overhead is negligible) AND the DLL itself can be called in parallel (I'm not sure this works), you could possibly see some performance increase...
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • blank values using Global Address Cleanse Transform

    Hi,
    We are trying to cleanse global addresses using the Global Address Cleanse transform (with the USA and Global engines). We are passing Locality, Region and Postal code as multiline items. In the output, some of the records are not getting populated. For these records, if we set USA as the default country, the fields are populated. The problem is that we cannot take USA as the default country, because the data contains global addresses, and the transform then fills in USA as the country name for the other countries as well. Why is it that, without USA as the default country, the fields are not populated for some records?
    Below are some of the sample addresses.
    1)     10 INDUSTR. HWY MS6     LESTER     PA     19029
    2)     PO BOX_22964     JACKSON     MS     39225
    3)     306 EASTMAN     GREENWOOD     MS     38930
    4)     3844 W NORTHSIDE DR     JACKSON     MS     39209
    5)     259 W QIANJIANG RD     ZHEJIANG     CN     31440
    Can you please suggest a way to fill the countries for these addresses? Any inputs on this will be appreciated.
    regards,
    Madhavi

    Hi,
    As Lance indicates, you set up your address cleanse (for US data I would suggest using the URAC transform) and map in your input fields as normal. In the output, you select postcode2 along with all the other standardized fields you want posted in the output.
    Note: If an address is assignable to the USPS referential data using the CASS rules established by the USPS, postcode2 will be populated. In cases where it is not assignable, postcode2 can be empty, or the input postcode2 data can be preserved, depending on the user's settings.
    Thanks,
    Paula

  • Who knows a Std. global data type for vendor bank details (ESR modelling) ?

    Hi,
    we are going to design some simple partner objects in the ESR, to be implemented in different backends, using SAP Global Data Types.
    What I cannot find anywhere in there is a structure containing the standard fields of customer/vendor bank details. As this is basic to partner master data, I hope one of you has done this and knows it.
    The funny thing is that there exists a type called "BusinessPartnerBankDetailsID",
    which is only used as a reference in payment transactions, but nowhere in bank details.
    Does anybody know a GDT to maintain BusinessPartnerBankDetails?
    Thanks in advance & br,

    No solution - closed.

  • Dynamic Parallel Processing using Rule

    Hello,
    I am using a User Decision within a Block step (ParForEach type) to send work items to multiple approvers in parallel.
    For this I have created a multiline container element LI_APPROVERS and bound &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& to &_LI_APPROVERS_LINE& in the "Parallel Processing" tab of the Block.
    Now in the User Decision I am using Agent as Expression = &_LI_APPROVERS_LINE&. This works perfectly fine if I fetch the values into LI_APPROVERS via a background method before the Block step is executed.
    I want to know if we can do this using a "Rule" within the User Decision, meaning the approvers are determined by the rule (through an FM) at runtime instead of being fetched beforehand. I created a custom rule and tried passing it under Agents, but it didn't work. I do not know what bindings need to be done and how each line will be passed to the User Decision to create a work item for each user.
    Or
    should I remove the Block step completely and directly use the User Decision task with the Parallel Processing option under the Miscellaneous tab?
    Can someone please explain how to achieve this using a rule and exactly what bindings are required?
    Thanks.

    Hi Anjan,
    Yes, that's exactly what I want to know. I saw your response below in one of the threads but could not understand exactly how to do it. Can you please explain it?
    "You have all your agents in one multiline container element in the workflow.
    Then you take a Block step with ParForEach.
    Then you create a custom rule which imports the multiline element of agents and a line_no. In the rule you populate the actor_tab with the agent from that multiline container element; the logic takes the agent at multiline container[line_no].
    Then you take an Activity step. As its agent, use your custom rule with proper binding of the multiline element of agents, and for line_no you pass _***_line from the Block container. Then work items are sent to the n people in parallel."
    This is my current design:
    Activity returns agents in LI_APPROVERS.
    At Block: I have the binding &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& --> &_LI_APPROVERS_LINE&
    At UD: I have Agents as Expression = &_LI_APPROVERS_LINE&
    I want to remove the Activity step (that gets the agents in the background) and replace it with a rule within the UD. What binding do I need from the rule to the workflow? How do I get the line_no from the rule, as you mentioned above?
    Thanks for your response.
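    For reference, my current understanding of such a rule function module is roughly this (only a sketch - the FM name and the container element names are my assumptions, the agent table is assumed to be typed like SWHACTOR, and the function group needs INCLUDE <cntn01> for the swc_* container macros):
    FUNCTION z_rule_get_approver.
    *"  TABLES
    *"      actor_tab STRUCTURE swhactor
    *"      ac_container STRUCTURE swcont
    *"  EXCEPTIONS
    *"      nobody_found
      DATA: lt_approvers TYPE STANDARD TABLE OF swhactor,
            ls_approver  TYPE swhactor,
            lv_line_no   TYPE i.
      " read the rule container - the names must match the rule binding
      swc_get_table ac_container 'LI_APPROVERS' lt_approvers.
      swc_get_element ac_container 'LINE_NO' lv_line_no.
      " each parallel branch is responsible for exactly one line
      READ TABLE lt_approvers INDEX lv_line_no INTO ls_approver.
      IF sy-subrc = 0.
        APPEND ls_approver TO actor_tab.
      ENDIF.
      IF actor_tab[] IS INITIAL.
        RAISE nobody_found.
      ENDIF.
    ENDFUNCTION.
    In the rule binding you would then map the workflow's multiline agent element to LI_APPROVERS and the Block's _***_line to LINE_NO.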
