Export memory using shared buffer

Hi,
Let's say a user opens the PO screen ME22N in two separate windows, accessing two separate PO numbers. If I use EXPORT ... TO SHARED BUFFER, how can I ensure that the data will not get mixed up?
Any ideas?

You would have to get the session ID to distinguish between the two. You can then use this ID as part of the key you pass to the EXPORT statement.
Check this thread.
Quickest way to retrieve modeinfo[n],context_id_uuid from an ABAP pgm
Regards,
Rich Heilman
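
A minimal sketch of that approach, assuming the session (mode) number has already been retrieved as described in the thread above; the area ST, the key layout, and the table name are placeholder choices:

    DATA: gv_session(2) TYPE n,    " session/mode number, obtained per the thread above
          gv_key        TYPE indx-srtfd,
          gt_po_data    TYPE STANDARD TABLE OF ekpo.

    " Build a key that is unique per user and per session
    CONCATENATE sy-uname gv_session INTO gv_key.

    EXPORT gt_po_data FROM gt_po_data
           TO SHARED BUFFER indx(st) ID gv_key.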

Similar Messages

  • Export .. to shared buffer ...

    I am trying to use this command, but there is something I don't understand in the example from the help (F1 on SHARED BUFFER), namely the part that was shown in bold.
    TABLES INDX.
    TYPES: BEGIN OF ITAB3_TYPE,
              CONT(4),
           END OF ITAB3_TYPE.
    DATA: INDXKEY LIKE INDX-SRTFD VALUE 'KEYVALUE',
          F1(4), F2 TYPE P,
          ITAB3 TYPE STANDARD TABLE OF ITAB3_TYPE WITH
                     NON-UNIQUE DEFAULT KEY INITIAL SIZE 2,
          WA_INDX TYPE INDX.
    " Fill the data fields ahead of CLUSTR before the actual export
    WA_INDX-AEDAT = SY-DATUM.
    WA_INDX-USERA = SY-UNAME.
    " Export the data
    EXPORT F1    FROM F1
           F2    FROM F2
           ITAB3 FROM ITAB3
           TO SHARED BUFFER INDX(ST) FROM WA_INDX ID INDXKEY.
    Frédéric
    (SAP 4.6C)

    Because when everybody works on the same table, you cannot guarantee data consistency. Furthermore, the "free components" of INDX are just an example; you may well define your own fields carrying your own specific information. Here is the full description of how such a table must be created. Note the second-to-last entry: you are completely free to add components as you like and are not bound to SAP's INDX example!
    INDX-type structure:
    The first field must be a key field named MANDT of type CLNT for the client, if you want to store the data objects client-specifically. For a cross-client storage, this component does not apply.
    The second field must be a key field named RELID of type CHAR and length 2. It stores the area specified in ar.
    The third field must be a key field of type CHAR named SRTFD with a maximum length of 55 characters. It stores the identifier specified in id.
    The fourth field must be a key field named SRTF2 of type INT4. It contains the row numbers of a stored data cluster that can extend over several rows and is filled automatically by the system.
    Then any number of components with freely selectable names and types may follow. They are supplied with values by the addition FROM wa; the addition TO wa of the IMPORT statement reads these fields back.
    The last two components must be named CLUSTR and CLUSTD and be of types INT2 and LRAW of any length. In CLUSTR, the current length of field CLUSTD of each row is stored, while CLUSTD contains the actual data cluster.
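    For example, a minimal sketch of exporting through such a custom table, assuming ZMY_INDX has been created exactly as described (key fields MANDT, RELID, SRTFD, SRTF2, free components UNAME and AEDAT, then CLUSTR and CLUSTD); the table name and the area ZZ are hypothetical:
        DATA: lt_data TYPE STANDARD TABLE OF mara,
              ls_wa   TYPE zmy_indx.
        " Fill the free components of the administration record
        ls_wa-uname = sy-uname.
        ls_wa-aedat = sy-datum.
        EXPORT lt_data FROM lt_data
               TO SHARED BUFFER zmy_indx(zz) FROM ls_wa ID 'MYKEY'.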

  • Issues with Export/Import using Database & Shared buffer

    Hi All,
    I have a method that calls a program via a job, and I am having issues passing data between the two.
    Note that two different users process the method and the program (via the job), respectively. This is how I am calling the second program:
    SUBMIT ZPROGRAM
                 VIA JOB     l_jobname
                     NUMBER  l_jobcount
                     USER    i_user
                 AND RETURN.
    I need to pass data from the method to the second program and vice versa; the method then continues its processing with the data acquired from the second program.
    I have tried using IMPORT/EXPORT via the database and also via the shared buffer. What I have found is that most of the time I am able to pass data from the method to the program. However, the job takes a couple of minutes to execute, and I think the data is not making it back to the method in time.
    I have looked at some useful forum links-
    Problem with export/import in back ground
    Re: EXPORT/IMPORT  to MEMORY
    but I haven't been able to find an answer yet. Any solutions? Thanks in advance for your help!
    Liz

    Hi Suhas, Subhankar
    I have tested the scenario without the job previously, and it works. That's the reason I am trying with the job now: my requirement is that I need to change the user while executing the second report.
    Here is an example of my import/export. I am passing the return value from the second report to the first.
    Code in second report-
    DATA: INDXKEY LIKE INDX-SRTFD VALUE 'RET1'.
    INDX-AEDAT = SY-DATUM.
    INDX-USERA = SY-UNAME.
    EXPORT RETURN1 TO SHARED BUFFER INDX(ST) ID INDXKEY.
    Code in first report -
    SUBMIT ZPROGRAM
                     VIA JOB     l_jobname
                         NUMBER  l_jobcount
                         USER    i_user
                     AND RETURN.
    Once the job-close FM has executed successfully, I import the values as follows:
    IMPORT RETURN1 TO RETURN1 FROM SHARED BUFFER INDX(ST) ID INDXKEY3.
    INDXKEY has the value RET1.
    However, RETURN1 does not have any values in the first report. It does have a value when executed without the job.
    Please note that I have tried EXPORT/IMPORT with the database too, and I am getting the same results.
    Thanks for your suggestions.
    Regards, Liz

  • EXPORT / IMPORT  TO/FROM SHARED BUFFER

    Hello all,
    I am facing a problem with the EXPORT/IMPORT to SHARED BUFFER statements.
    In my report program , I export data to the shared memory.
    I then call a transaction to park an accounting document.
    The BTE 2218 gets triggered in the process. Here the IMPORT works fine.
    Later, there is a standard function module which is called IN UPDATE TASK.
    Within this, the IMPORT statement fails.
    It works on one server but not on another.
    Notes :
    The IMPORT works in debugging mode but fails if I simply run.
    Another point is that the ID used for identifying the shared memory uses sy-uname.
    Can the visibility of sy-uname in UPDATE TASK be controlled by settings?
    Any ideas on this ?
    Please don't copy paste the help on SHARED BUFFER etc.
    Thanks in advance.

    Hi Mariano,
    the issue is due to multiple servers being present; shared memory is specific to each application server.
    So if we export data into shared memory in program A, we have to be sure that program B, or the FM called in background or update task by program A, runs on the same application server.
    The problem is that when program A calls program B or the FM in background or update task, scheduling is dynamic across all application servers that have batch work processes, and not always on the same application server as the calling program A. So program B may run on another application server, which has a different shared memory.
    The solution will be:
    Force program B to run on the same application server as the calling program A by passing sy-host of program A to the function module JOB_CLOSE, parameter TARGETSERVER (a sketch follows below). OR:
    Instead of using shared memory, use the database:
            EXPORT itab FROM itab TO DATABASE indx(ar) CLIENT sy-mandt ID job_number in program A, where job_number is unique.
            Then IMPORT itab TO itab FROM DATABASE indx(ar) CLIENT sy-mandt ID job_number in program B, where job_number is passed from program A to B.
            Finally, DELETE FROM DATABASE indx(ar) CLIENT sy-mandt ID job_number.
    Regards,
    Vignesh Yeram
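    A minimal sketch of the first option, pinning the job to the caller's application server via the TARGETSERVER parameter of JOB_CLOSE; ZPROGRAM and the job name are placeholders:
        DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZJOB',
              lv_jobcount TYPE tbtcjob-jobcount.
        CALL FUNCTION 'JOB_OPEN'
          EXPORTING
            jobname  = lv_jobname
          IMPORTING
            jobcount = lv_jobcount.
        SUBMIT zprogram VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
        CALL FUNCTION 'JOB_CLOSE'
          EXPORTING
            jobname      = lv_jobname
            jobcount     = lv_jobcount
            strtimmed    = 'X'            " start immediately
            targetserver = sy-host.       " same server as the calling program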

  • Export to shared buffer performance Implications

    All,
    I am using the following statement in one of the programs to export an internal table
    EXPORT it_dtab TO SHARED BUFFER indx(st) ID 'ZMME'.
    I would like to know the performance implications of using this.

    Hello,
    You could check the current memory limit set for the SAP shared buffer in transaction RZ11.
    Enter the profile parameter rsdb/obj/buffersize and check the field 'Current Value'.
    You could discuss with your Basis counterpart whether there is a need to increase the shared buffer size, which
    is generally not advisable, as the performance of other SAP applications can suffer as a result.
    Regards
    Dedeepya C

  • Shared Buffer (Synchronizing)

    Hi,
    I have some data that I export to a shared buffer using:
      EXPORT ...
             .. some data ...
      TO SHARED BUFFER ... ID ...
    I know that this data is accessible to all users on an application server, but my question is
    whether or not this data is synchronized with other application servers as covered by the buffer
    synchronization process:
    http://help.sap.com/saphelp_webas630/helpdata/en/c4/3a6dbb505211d189550000e829fbbd/frameset.htm
    I have read many articles and postings but couldn't find a definite answer. Could anyone with
    knowledge in this area help me with an answer?
    Thanks,

    a@S -
    I've taken your answer as "final" for the following reason.
    At this site, ECC>APO CIF will run in batch only (via RIMODACT/2), and the schedulers can tie the batch job to one server even if our prod APO instance is multi-WAS.
    So this means we can use shared memory objects OR shared buffer, and the question is which the site wants to do.
    Sometimes, as you know, it's better to stay on the rusty edge and forget about the cutting/bleeding edges.
    Best as always
    djh

  • Export INTERNAL TABLE to shared buffer

    Hi all,
    My requirement:
    Export an internal table to the shared buffer or SAP memory.
    Any help will be appreciated.
    Can SET/GET parameters be used for internal tables?
    Thanks,
    Tabraiz

    EXPORT (OBJ_TAB) TO MEMORY ID 'ABCD'.
    Also refer to
    http://help.sap.com/saphelp_45b/helpdata/en/34/8e73a36df74873e10000009b38f9b8/content.htm
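    Note that SET/GET (SPA/GPA) parameters hold single field values only, so they cannot carry internal tables. A minimal sketch using the shared buffer instead; the area XY and the ID are placeholder choices:
        DATA lt_flights TYPE STANDARD TABLE OF sflight.
        " In the exporting program
        EXPORT tab = lt_flights TO SHARED BUFFER indx(xy) ID 'MYTAB'.
        " In the importing program
        IMPORT tab = lt_flights FROM SHARED BUFFER indx(xy) ID 'MYTAB'.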

  • Import/Export code using Memory ID in BO Method

    Hi experts,
    I have an approver name and other relevant data in my report. I don't want to rewrite the entire code; I want to bring it into my BO method by using an import/export memory ID. Please guide me on what I should do. Is it possible or not?
    Thank you,
    Saquib

    I am a bit confused about what you are trying to achieve. In any case, I think you can forget any export/import memory ID related solutions; they will not work!
    The workflow (or its step/task) is executing your BO method, right? You want this method to have some data when it gets executed? Normally you would populate the data into the workflow (or task) container, for example with function SAP_WAPI_WRITE_CONTAINER (you just need the work item ID). Then, when this data is in the container, you can use it in your method (binding required). A sketch of the container write follows below.
    Somehow I feel that you are looking for a difficult solution to a simple problem. If you need some relevant data in your workflow, let the workflow find it (add a new step to the workflow, and copy/paste the relevant part of the code of your report into this step). (Or try to give the data to the workflow already when it gets started, if possible.) Don't try to mix things with some separate report unless it is completely necessary, and if it is, then writing into the container is most likely the best approach.
    Regards,
    Karri
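    A minimal sketch of that container write, assuming the work item ID is already known; the element name 'APPROVER' is a placeholder:
        DATA: lt_container   TYPE TABLE OF swr_cont,
              ls_container   TYPE swr_cont,
              lv_workitem_id TYPE sww_wiid,
              lv_rc          TYPE sy-subrc.
        " Fill the container element with the approver name
        ls_container-element = 'APPROVER'.
        ls_container-value   = sy-uname.
        APPEND ls_container TO lt_container.
        CALL FUNCTION 'SAP_WAPI_WRITE_CONTAINER'
          EXPORTING
            workitem_id      = lv_workitem_id
            do_commit        = 'X'
          IMPORTING
            return_code      = lv_rc
          TABLES
            simple_container = lt_container.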

  • Using Shared Memory in LabVIEW

    I'm trying to use shared memory with LabVIEW. Can I use a DLL written in C with LabVIEW to access shared memory?

    Lidia,
    Check these out (for memory mapping):
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000006A1D0000&UCATEGORY_0=_318_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=build+cvi+shared+dll&USEARCHCONTEXT_QUESTION_S=0
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000005BC10000&UCATEGORY_0=_49_%24_6_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=Communicating+Between+Built+LV+App&USEARCHCONTEXT_QUESTION_S=0
    But in general you don't need to use this when you use DLLs. Shared memory is used to
    share data between different processes. If you need LabVIEW data in a DLL,
    try to pass it as a pointer to an array, or as a string pointer.
    Regards,
    Wiebe.
    "lidia" wrote in message
    news:506500
    [email protected]..
    > I'm trying to use shared memory with LabVIEW. Can I use it, a DLL in C
    > with LabWIEW for use shared Memory?

  • Shared memory used in Web Dynpro ABAP

    Hi Gurus,
    I am using shared memory objects in Web Dynpro ABAP. Everything was working fine until we went live in production. After some research I realized that users are not always able to reach the data in shared memory, because the web environment, unlike the classic GUI, distributes requests across multiple servers. The solution would be to use the database instead of shared memory. However, I am still interested in whether there might be some other way to solve it. Any ideas?

    To my understanding, writing to the database is the safe option. There are no other ways to solve your problem with shared memory.

  • Using Shared memory

    Hi folks,
    This is the first time I am using shared memory, and my question is:
    does the shmat function attach the segment to the same address in different processes? In other words, can I use the same pointer in processes A and B?
    Thanks

    The issue of alignment is rather tricky.
    shmat(2) may well return a misaligned address, so I'd consider using memory-mapped files instead.
    mmap(2) returns page-aligned memory (unless you specify MAP_FIXED and some weird first parameter), so you can rely on the compiler to do the alignment for you...

  • Short Dump TSV_TNEW_PAGE_ALLOC_FAILED while using shared memory objects

    Hi Gurus,
    We are using shared memory objects to store some data which we will be reading later. I have implemented the interface IF_SHM_BUILD_INSTANCE in the root class and am using its method BUILD for automatic area structuring.
    Today our developments moved from the development system to the quality system, and while writing the data into the shared memory using the methods ATTACH_FOR_WRITE and DETACH_COMMIT in one report, we started getting the runtime error TSV_TNEW_PAGE_ALLOC_FAILED. This is raised when the method DETACH_COMMIT is called to commit the changes to the shared memory.
    Everything works fine before DETACH_COMMIT. I know that it is happening because the program ran out of extended memory, but I am not sure why it happens at the DETACH_COMMIT call. If excessive memory were being used in the program, this runtime error should have been raised while calling the ATTACH_FOR_WRITE method or while filling the root class attributes. I am not sure why it happens at the DETACH_COMMIT method.
    Many Thanks in advance.
    Thanks,
    Raveesh

    Hi Raveesh,
    as Naimesh suggested: probably the system parameter for the shared memory area is too small. Compare the system parameters in development and QA, and check what other shared memory areas are used.
    Regarding your question why it does not fail at ATTACH_FOR_WRITE but then on DETACH_COMMIT:
    probably ATTACH_FOR_WRITE sets an exclusive write lock on the shared memory data and then writes to some kind of 'rollback' memory, and DETACH_COMMIT really puts the data into the shared memory area and releases the lock. The 'rollback' memory is in the LUW's work memory, which is much bigger than the usual shared memory size.
    This is my assumption; I don't know who can verify or reject it.
    Regards,
    Clemens
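    For reference, a minimal sketch of the write path being discussed, assuming a shared memory area class ZCL_MY_AREA generated via transaction SHMA with root class ZCL_MY_ROOT; all names and the SET_DATA method are placeholders:
        DATA: lo_handle TYPE REF TO zcl_my_area,
              lo_root   TYPE REF TO zcl_my_root,
              lt_data   TYPE string_table.
        " Acquire the exclusive write lock on the area
        lo_handle = zcl_my_area=>attach_for_write( ).
        " Root objects must be created with the area handle
        CREATE OBJECT lo_root AREA HANDLE lo_handle.
        lo_root->set_data( lt_data ).    " fill the root attributes
        lo_handle->set_root( lo_root ).
        " Only here is the data committed to the area and the lock
        " released; this is the step that dumped above
        lo_handle->detach_commit( ).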

  • When is shared memory used?

    I understand that two JVMs both running Coherence on the same box will generally, but not always, communicate through shared memory rather than via the network.
    Can you indicate under which scenarios this occurs and, by implication, in which scenarios it does not occur?

    Hi Andy,
    Coherence 1.x/2.x always uses a network interface (UDP unicast/multicast) for inter-JVM communication. However, the OS network stack may internally use shared memory for intra-machine communication.
    Jon Purdy
    Tangosol, Inc.

  • I want to used shared memory in LabVIEW. I think I can do it using a DLL in C.

    I think I can use shared memory with a DLL in C. But can I use some utility included in LabVIEW to do that, without including my own DLL?

    Jorge M. wrote:
    > Hello,
    >
    > here's the info. It works.
    >
    > http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000006A1D0000&UCATEGORY_0=_318_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=build+cvi+shared+dll&USEARCHCONTEXT_QUESTION_S=0
    Another one:
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000005BC10000&UCATEGORY_0=_49_%24_6_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=Communicating+Between+Built+LV+App&USEARCHCONTEXT_QUESTION_S=0
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Import statement using DATA BUFFER

    Hi All,
    I am using an RFC-enabled FM called with STARTING NEW TASK. We cannot import data from the FM back into the program when we use this statement, so I am exporting data to a DATA BUFFER in the FM and trying to import the data in the main program. Can you please tell me how I can import the data from shared memory? Below is my code.
    call function 'YPMLR_SITEBAL_DETAILS'
        starting new task 'ID'
        exporting
          s_cyl       = s_cyl-low
          s_lifnr     = s_lifnr-low
          s_lstyp     = s_lstyp-low
        tables
          s_zlocn     = lt_zlocn
          gt_zmlr_mld = gt_zmlr_mld
          gt_zmlr_lp  = gt_zmlr_lp
          gt_zmlr_mlp = gt_zmlr_mlp
          gt_zmc_loc  = gt_zmc_loc.
    "IMPORT e_rand_no TO e_rand_no from MEMORY ID 'RAND'.
    Here is the export statement used in the FM:
    EXPORT e_rand_no FROM e_rand_no  TO DATA BUFFER XSTR.

    Hi,
    Check this link for exporting to the database instead of memory:
    http://help.sap.com/abapdocu/en/ABAPEXPORT_DATA_CLUSTER_MEDIUM.htm
    And this one for importing from the database instead of memory:
    http://help.sap.com/abapdocu/en/ABAPIMPORT_MEDIUM.htm
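    A minimal sketch of that database-based approach, split across the two programs; the area ZZ and the ID are placeholder choices:
        DATA e_rand_no TYPE i.
        " In the function module, instead of TO DATA BUFFER:
        EXPORT e_rand_no FROM e_rand_no
               TO DATABASE indx(zz) CLIENT sy-mandt ID 'RAND'.
        " In the main program, once the task has finished:
        IMPORT e_rand_no TO e_rand_no
               FROM DATABASE indx(zz) CLIENT sy-mandt ID 'RAND'.
        DELETE FROM DATABASE indx(zz) CLIENT sy-mandt ID 'RAND'.
    Keep in mind that the importing side can only see the data after the exporting side's database commit.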
