Sharing memory between executables?

I'm working on a set of three programs: one camera interface and two vision processing routines. Currently, the camera interface runs independently of the two vision processing routines, and the images are simply passed through LabVIEW's shared memory. In other words, the interface just runs and creates an image with a known name, and the other two routines use that name to read the image and make their own local copies to operate on.
However, these programs eventually need to be compiled to executables, where this method will no longer work. The two programs need to remain separate, so I can't build them into the same executable. They will both run on the same computer, so I was wondering whether there is another way to write to and read from the computer's memory that will still work for LabVIEW-built executables.

TCP should be a perfectly acceptable solution. It's a common interface for exchanging data between processes, even on the same machine. The code for sending data really isn't that complicated; just take a look at some of the TCP shipping examples in LabVIEW.
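For a rough idea of what the listening side looks like outside LabVIEW, here is a minimal, untested C (Winsock) sketch; the port, image size, and 4-byte length prefix are made-up conventions, although a length prefix is the framing the LabVIEW TCP examples typically use:

    /* Sketch: the camera interface listens on loopback and pushes one
       length-prefixed image to whichever vision routine connects. */
    #include <winsock2.h>
    #include <string.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        static char image[640 * 480];   /* one 8-bit image (illustrative size) */

        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        SOCKET ls = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");  /* same machine only */
        addr.sin_port = htons(5050);                    /* arbitrary port */
        bind(ls, (struct sockaddr *)&addr, sizeof addr);
        listen(ls, 1);

        SOCKET cs = accept(ls, NULL, NULL);  /* a vision routine connects */

        /* 4-byte length prefix, then the payload; a real sender would
           check return values and loop until every byte is written */
        u_long len = htonl(sizeof image);
        send(cs, (const char *)&len, 4, 0);
        send(cs, image, sizeof image, 0);

        closesocket(cs);
        closesocket(ls);
        WSACleanup();
        return 0;
    }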
If you're running Windows, you could use something like ActiveX for interprocess communication between the LabVIEW-built EXEs, but that would add a bit of overhead and complexity. It would also really only give you access to a subset of VI Server methods, such as setting control values, so sending and receiving data wouldn't be all that quick.
Another alternative would be writing the data to a shared file, but that makes it hard to stream data continuously.
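If you really do want to keep the "write straight into memory" flavour of the current approach, Windows also lets two processes open the same named memory region. Here is a minimal, untested C sketch (the mapping name and size are made up); you could wrap something like this in a DLL and call it from both executables via a Call Library Function Node:

    /* Sketch: create-or-open a named shared memory region backed by the
       paging file; both EXEs run the same code and see the same bytes. */
    #include <windows.h>

    #define SHM_NAME "Local\\CameraImage"   /* illustrative name */
    #define SHM_SIZE (640 * 480)            /* illustrative size */

    int main(void)
    {
        /* CreateFileMapping opens the existing mapping if another process
           has already created one with the same name */
        HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                         PAGE_READWRITE, 0, SHM_SIZE,
                                         SHM_NAME);
        if (hMap == NULL) return 1;

        unsigned char *buf = (unsigned char *)MapViewOfFile(
            hMap, FILE_MAP_ALL_ACCESS, 0, 0, SHM_SIZE);
        if (buf == NULL) { CloseHandle(hMap); return 1; }

        buf[0] = 42;   /* writer side; a reader would just read buf[] */

        UnmapViewOfFile(buf);
        CloseHandle(hMap);
        return 0;
    }

Note that you would still need something like a named mutex to coordinate the writer and readers, which is part of why plain TCP usually ends up being the simpler recommendation.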
Jarrod S.
National Instruments

Similar Messages

  • Shared memory between LV and a DLL

    Hello
    I am building a DLL in Visual Studio .NET. It is called by LabVIEW in a VI.
    The DLL counts some things over a network, and I'd like to see the counter status in the VI dynamically. This means that when the variable is changed in the DLL, the LabVIEW variable changes too.
    I tried to do this using shared memory, passing a pointer to the DLL. Unfortunately, it seems that the value isn't updated while the DLL is running.
    Please tell me the way to do this. I read about global variables, but they aren't much better.
    Thanks very much!
    Fabien
    PS: I am using LabVIEW 7.1 and Visual Studio .NET 7.0.9500, in C.

    Since you say the DLL does not finish executing, you will need some method by which both programs can communicate.
    Here are a couple of suggestions (I haven't tried any of them):
    You can try using TCP (search the Example Finder and this site for examples and tutorials).
    You can try opening a connection to the LabVIEW VI Server from the DLL to set the value of the indicator.
    If the DLL is thread-safe, maybe you can have another DLL which will serve as a buffer: your DLL calls it to put the data in, and LabVIEW calls it to extract the data (assuming LabVIEW can call two thread-safe DLL functions at the same time). A sketch of this idea follows.
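    For what it's worth, here is a minimal, untested C sketch of that third suggestion - a tiny "mailbox" DLL with thread-safe put/get functions (the names and the single-long payload are illustrative assumptions). The network DLL calls put_counter(); LabVIEW polls get_counter() through a Call Library Function Node:

    /* Buffer DLL shared by the network DLL and LabVIEW (same process). */
    #include <windows.h>

    static CRITICAL_SECTION lock;
    static long counter;

    BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
    {
        if (reason == DLL_PROCESS_ATTACH)
            InitializeCriticalSection(&lock);
        else if (reason == DLL_PROCESS_DETACH)
            DeleteCriticalSection(&lock);
        return TRUE;
    }

    __declspec(dllexport) void put_counter(long value)
    {
        EnterCriticalSection(&lock);   /* serialize writer and reader */
        counter = value;
        LeaveCriticalSection(&lock);
    }

    __declspec(dllexport) long get_counter(void)
    {
        EnterCriticalSection(&lock);
        long v = counter;
        LeaveCriticalSection(&lock);
        return v;
    }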
    Try to take over the world!

  • Trying to create multiple state machines out of the same VI to monitor each cFP channel results in what seems to be shared memory between machines

    Hi,
    I'm trying to create a state machine which monitors a single channel on a FieldPoint. I want to monitor up to 64 channels at a time, each with a different instance of the same VI. The VI itself needs to maintain information, mainly through feedback nodes, both in its own block diagram and in subVIs within it.
    All instances of this VI will execute inside a main loop; they do not run loops of their own, as that would create 64 threads on the cFP-2200 we're using, which I believe would be too much.
    My attempted solution to all of this was to make the Channel Monitoring VI, along with all its subVIs, reentrant and "preallocate" a clone for each instance. This does not seem to help, as each VI seems to maintain the state of the one that ran before it. Maybe I'm missing a step?
    Is there a better way to approach this problem without writing a separate VI for each channel and the maintenance headache that would cause?

    Hi Ben,
    I actually read your document for another puzzle and it worked well, I think largely because it involved subVIs which were meant to run as separate threads. In that case, I used methods to alter inputs while the VI was "running."
    In this case, as you know, my subVIs do not loop internally; they are single calls, but they do need to maintain their state from the end of the previous call, adding to the data they're tracking each subsequent time they are called. I tried "call by reference" to call them each time. Below is a screenshot of the VI used to create the occurrences and of a subVI used to execute each occurrence; this subVI is embedded inside a while loop which I did not show here:
    In running my tests, there still seems to be some sharing of internal variables and feedback nodes between the subVIs I'm calling, which I do not want. Am I approaching this in the correct way? Is what I'm trying to do even possible?

  • How do I set up shared variables between executables created in separate projects

    Hello,
    I have several separate projects, each with its own executable, and I would like all of these executables to share the same variable (one program controls the value of the variable, while the others read from it).
    I got this setup to work on my personal computer (by being able to access the Variable Manager, etc.), but I need to deploy these executables on different computers that don't have the LabVIEW development environment. What steps do I need to take to be able to put these executables on any computer? (I'm assuming I need to set up a path for the shared variable that is always in the same folder, etc.)
    Thanks
    Vlad

    Hi Vlad,
    I think this article may answer some of your questions regarding shared variables in deployed applications.
    http://zone.ni.com/devzone/cda/tut/p/id/9900
    It sounds like you already have your executables built, but this article may answer some questions about deploying them to other machines.
    http://zone.ni.com/devzone/cda/tut/p/id/3303
    Jeff S.
    National Instruments

  • No space left in shared memory error when executing a query

    Hi,
    When I executed a query it showed the results, but when I drilled down by material it showed the following error message:
    Error: An exception with the type CX_SY_EXPORT_NO_SHARED_MEMORY occurred
    Error: No space left in shared memory
    Does anyone know what the reason is and how to resolve it?
    Many Thanks
    Jean

    Your report got too big and you ran out of memory on the server.
    Run it for a smaller data set (e.g., a few months instead of a year) to avoid this problem.
    Hope this helps...
    Bob

  • Reentrant shared clones between instances: memory usage

    Hi guys,
    I have a question on reentrant VIs set to share clones between instances.
    I understand the concept of shared clones: the next clone gets into memory after another clone gets out of memory.
    My question is: how many clone instances can run simultaneously? How much memory is allocated for shared clones?
    If I use Call VI by Reference as shown in the figure, will it open more than one VI and keep it in memory?

    You might be just a bit confused about clones, clone pools, and that 0x80 option setting.
    Your code as written will cause an error on subsequent calls unless the VI has finished first.
    Combining the 0x80 flag with 0x40 prepares the VI to be called reentrantly, so each call to the modified VI will cause one data space to be checked out of the clone pool. The default is one data space per core in the clone pool; that can be increased (but not decreased) with the Populate Async Call Pool method.
    Each time that subVI runs, another data space is checked out of the clone pool. If none are available, additional data spaces are created on the fly; this takes time, as you will notice.
    The subVI is responsible for returning the data space to the clone pool when it finishes.
    As for how many can be in memory at one time: how much memory can your application access, and how much memory does your VI's data space take?
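    As an analogy only (LabVIEW manages clone data spaces internally), the checkout/checkin behaviour described above is essentially an object pool that grows on demand. A minimal, untested C sketch of that pattern, with made-up names and a fixed upper bound:

    /* A pool of "data spaces": fixed initial population, grows on demand. */
    #include <windows.h>
    #include <stddef.h>

    #define MAX_CLONES 64

    typedef struct {
        int  in_use;
        long feedback_state;   /* stand-in for a clone's private data */
    } DataSpace;

    static DataSpace pool[MAX_CLONES];
    static int pool_size;              /* current pool population */
    static CRITICAL_SECTION lock;

    void pool_init(int per_core)       /* default population: one per core */
    {
        InitializeCriticalSection(&lock);
        pool_size = per_core;
    }

    DataSpace *checkout(void)          /* called when the subVI starts */
    {
        DataSpace *ds = NULL;
        EnterCriticalSection(&lock);
        for (int i = 0; i < pool_size; i++)
            if (!pool[i].in_use) { ds = &pool[i]; break; }
        if (ds == NULL && pool_size < MAX_CLONES)
            ds = &pool[pool_size++];   /* none available: grow on the fly */
        if (ds != NULL) ds->in_use = 1;
        LeaveCriticalSection(&lock);
        return ds;
    }

    void checkin(DataSpace *ds)        /* the subVI returns its data space */
    {
        EnterCriticalSection(&lock);
        ds->in_use = 0;
        LeaveCriticalSection(&lock);
    }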
    Jeff

  • Shared memory: apache memory usage in Solaris 10

    Hi people, I have set up a project for the apache user ID and set the new equivalent of shmmax for the user via projadd. In apache I crank up StartServers to 100, but RAM is soon exhausted - apache appears not to use shared memory under Solaris 10. Under the same version of apache on Solaris 9 I can fire up 100 StartServers with little RAM usage. Any ideas what can cause this, or what else I need to do? Thanks!

    a) How or why does Solaris choose to share memory between processes from the same program invoked multiple times, if that program has not been specifically coded to use shared memory?

    Take a look at 'pmap -x' output for a process. Basically it depends on where the memory comes from. If it's a page loaded from disk (executable, shared library) then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
    If the page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.

    Simply: if we run pmap / ipcs we can see a shared memory reference for our oracle database and ldap server. There is no entry for apache. But the total memory usage is far, far less than all the apache procs' individual memory totted up (all 100 of them, in prstat). So there is some hidden sharing going on somewhere that Solaris (2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)

    pmap -x should be showing you exactly which pages are shared and which are not.

    b) Under Solaris 10, each apache process takes up precisely the memory reported in prstat - add up the 100 apache memory details and you get the total RAM in use. Crank up the number of procs any more and you get out-of-memory errors, so it looks like prstat is pretty good here. The question is: why on Solaris 10 is apache not 'shared', but it is on Solaris 9? We set up all the usual project details for this user (in /etc/projects), but I'm guessing now that these project tweaks, where you explicitly set the shared memory for a user, only take effect for programs explicitly coded to use shared memory, e.g. the oracle database, which correctly shows up as a shared memory reference in ipcs.
    We can fire up thousands of apaches on the 2.9 system without running out of memory - both machines have the same RAM! But the binary versions of apache are exactly the same, and the config directives are identical.
    Please tell me that there is something really simple we have missed!

    On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
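    If it helps to picture what 'pmap -x' is reporting, here is a minimal, untested C sketch of the copy-on-write behaviour described above (the buffer size and sleep durations are arbitrary); run it and watch both pids with 'pmap -x' as the child dirties its pages:

    /* After fork(), parent and child share all pages copy-on-write;
       writing in the child turns shared pages into private copies. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        size_t size = 64 * 1024 * 1024;   /* 64 MB heap buffer */
        char *buf = malloc(size);
        if (buf == NULL) return 1;
        memset(buf, 1, size);             /* fault every page in */

        pid_t pid = fork();
        if (pid == 0) {                   /* child: pages still shared */
            sleep(15);                    /* compare pmap -x of both pids now */
            memset(buf, 2, size);         /* every page copied: now private */
            sleep(15);
            _exit(0);
        }
        printf("parent %d, child %d\n", (int)getpid(), (int)pid);
        sleep(40);
        free(buf);
        return 0;
    }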
    Darren

  • SHARED MEMORY and DATABASE MEMORY giving problems.

    Hello Friends,
    I am facing a problem with EXPORT MEMORY and IMPORT MEMORY.
    I have developed one program which EXPORTs an internal table and some variables to memory. This program calls another program via a background job. The second program uses IMPORT to get the first program's data.
    The IMPORT command works perfectly in the foreground, but it does not work in the background.
    So I reviewed a couple of forums and tried both SHARED MEMORY and DATABASE memory, but no luck; the background run still fails.
    When I remove the VIA JOB parameter from the SUBMIT statement it works, but I need to execute this program in the background via a background job. Please help me; what should I do?
    Please find my code below.
    Option 1:
    EXPORT TAB = ITAB
           TO DATABASE indx(Z1)
           FROM   w_indx
           CLIENT sy-mandt
           ID     'XYZ'.
    Option 2:
    EXPORT ITAB   FROM ITAB
      TO SHARED MEMORY indx(Z1)
      FROM w_indx
      CLIENT sy-mandt
      ID 'XYZ'.
    SUBMIT ZPROG2 TO SAP-SPOOL
           SPOOL PARAMETERS print_parameters
           WITHOUT SPOOL DYNPRO
           VIA JOB name NUMBER number
           AND RETURN.
    ===
    I hope everybody understood the problem.
    My sincere request: please post only relevant answers; do not post dummy answers for points.
    Thanks
    Raghu

    Hi.
    You cannot exchange data between your programs using ABAP memory, because that memory is only shared between objects within the same internal session.
    When you call your report using VIA JOB, a new session is created.
    Instead of using EXPORT and IMPORT to memory, put both programs into the same function group, and use global data objects of the _TOP include to exchange data.
    Another option is to use SPA/GPA parameters (SET PARAMETER ID / GET PARAMETER ID), because SAP memory is available to all open sessions. Of course, it depends on which type of data you want to export.
    Hope it was helpful,
    Kind regards.
    F.S.A.

  • Sharing resources between parent FDO and children PDO

    Hello,
    I'm developing a WDM driver for an FPGA that embeds several UARTs and a CAN controller in one PCI slot.
    I use the DDK Toaster sample as a basis.
    Has someone already shared resources between a parent FDO and children PDOs?
    One way would be to export a direct-call interface between the parent and the children. Is there something better for getting the interrupt trigger in the child, and direct access to the memory?
    Thanks
    Marco
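    For reference, a minimal, untested C sketch of the shape such a direct-call interface could take (the names and members are illustrative assumptions, not the toaster sample's actual interface): the parent FDO would fill in a structure like this when the child PDO sends IRP_MN_QUERY_INTERFACE, the pattern the toaster bus driver uses for its children.

    /* Shape of a direct-call interface handed from parent FDO to child PDO. */
    #include <ntddk.h>

    typedef VOID (*PFPGA_ISR_CALLBACK)(PVOID Context);

    typedef struct _FPGA_BUS_INTERFACE {
        INTERFACE  InterfaceHeader;    /* standard header: Size, Version,
                                          Context, InterfaceReference /
                                          InterfaceDereference */
        PVOID      RegisterBase;       /* FPGA registers mapped once by the
                                          parent, used directly by the child */
        NTSTATUS (*ConnectIsr)(PVOID Context,
                               PFPGA_ISR_CALLBACK Callback,
                               PVOID CallbackContext);
                                       /* lets the child hook the shared
                                          interrupt the parent owns */
    } FPGA_BUS_INTERFACE, *PFPGA_BUS_INTERFACE;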

    Hi Doron,
    here is the full debug output:
    ADDITIONAL_DEBUG_TEXT:  
    You can run '.symfix; .reload' to try to fix the symbol path and load symbols.
    MODULE_NAME: Nitin
    FAULTING_MODULE: fffff8000324a000 nt
    DEBUG_FLR_IMAGE_TIMESTAMP:  549ced55
    EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.
    FAULTING_IP:
    +501cfc0
    00000000`00000000 ??              ???
    EXCEPTION_RECORD:  fffff88003b9a9c8 -- (.exr 0xfffff88003b9a9c8)
    ExceptionAddress: 0000000000000000
       ExceptionCode: c0000005 (Access violation)
      ExceptionFlags: 00000000
    NumberParameters: 2
       Parameter[0]: 0000000000000008
       Parameter[1]: 0000000000000000
    Attempt to execute non-executable address 0000000000000000
    CONTEXT:  fffff88003b9a220 -- (.cxr 0xfffff88003b9a220;r)
    rax=fffffa8022124c40 rbx=0000000000000000 rcx=0000000000000000
    rdx=fffff88003b9ac58 rsi=fffff88000fc60c0 rdi=fffff88003b9acf8
    rip=0000000000000000 rsp=fffff88003b9ac08 rbp=000000000000000c
     r8=0000000000000065  r9=0000000000000003 r10=4679726575516f64
    r11=000000000000000c r12=fffff88000fc92c0 r13=0000000000000312
    r14=0000000000000000 r15=0000000000000000
    iopl=0         nv up ei ng nz na pe nc
    cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00010282
    00000000`00000000 ??              ???
    Last set context:
    rax=fffffa8022124c40 rbx=0000000000000000 rcx=0000000000000000
    rdx=fffff88003b9ac58 rsi=fffff88000fc60c0 rdi=fffff88003b9acf8
    rip=0000000000000000 rsp=fffff88003b9ac08 rbp=000000000000000c
     r8=0000000000000065  r9=0000000000000003 r10=4679726575516f64
    r11=000000000000000c r12=fffff88000fc92c0 r13=0000000000000312
    r14=0000000000000000 r15=0000000000000000
    iopl=0         nv up ei ng nz na pe nc
    cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00010282
    00000000`00000000 ??              ???
    Resetting default scope
    CUSTOMER_CRASH_COUNT:  1
    DEFAULT_BUCKET_ID:  WIN7_DRIVER_FAULT
    BUGCHECK_STR:  0x7E
    CURRENT_IRQL:  0
    ANALYSIS_VERSION: 6.3.9600.17029 (debuggers(dbg).140219-1702) amd64fre
    LAST_CONTROL_TRANSFER:  from fffff8800b9eb091 to 0000000000000000
    STACK_TEXT:  
    fffff880`03b9ac08 fffff880`0b9eb091 : fffff880`0b9ec9a0 00000000`00000001 00000000`00000000 fffff880`009cf180 : 0x0
    fffff880`03b9ac10 fffff880`0b9ec9a0 : 00000000`00000001 00000000`00000000 fffff880`009cf180 00000000`00000001 : Nitin+0x4091
    fffff880`03b9ac18 00000000`00000001 : 00000000`00000000 fffff880`009cf180 00000000`00000001 00000000`00000000 : Nitin+0x59a0
    fffff880`03b9ac20 00000000`00000000 : fffff880`009cf180 00000000`00000001 00000000`00000000 00000000`03060001 : 0x1
    FOLLOWUP_IP:
    Nitin+4091
    fffff880`0b9eb091 ??              ???
    SYMBOL_STACK_INDEX:  1
    SYMBOL_NAME:  Nitin+4091
    FOLLOWUP_NAME:  MachineOwner
    IMAGE_NAME:  Nitin.sys
    STACK_COMMAND:  .cxr 0xfffff88003b9a220 ; kb
    BUCKET_ID:  WRONG_SYMBOLS
    FAILURE_BUCKET_ID:  WRONG_SYMBOLS
    ANALYSIS_SOURCE:  KM
    FAILURE_ID_HASH_STRING:  km:wrong_symbols
    FAILURE_ID_HASH:  {70b057e8-2462-896f-28e7-ac72d4d365f8}
    Followup: MachineOwner

  • Shared Memory Provider: Timeout error [258]

    Hi All,
    Hopefully there is somebody who can help me...
    When running the ETL I'm getting the error "<SSIS Task>: Shared Memory Provider: Timeout error [258]", followed by the message "Communication link failure".
    What is odd about this message is that it happens on a random SQL Execute task, and the timeout comes after 2 minutes.
    When the packages are executed separately, everything works fine. The SQL tasks that are failing are also quite heavy, but reasonable, and take anywhere from just over 2 minutes to 10-15 minutes. The statements are stored procedures that put an index on 3 million records, or update statements, ...
    I had a look at all my (SSIS ETL) timeouts and they have the default value 0, and the "remote query timeout" of the server is set to 10 minutes. As far as I know, those are the only ones that exist?
    There are 2 instances on the server, each with 24 GB allocated; the server has 64 GB in total. When the ETL that produces the error runs, no other ETL is running on either instance. I'm working with the OLE DB / SQL Server Native Client 11.0 provider: SQLNCLI11.1.
    It is frustrating because I don't have a clear error message. Maybe there are other places to look? I had a look at the application log and the SQL Server log, but they did not make me any wiser...
    Any help is appreciated,
    Bram

    This is part of the SQL error log from the time of the error; I'm not sure if it's related, but you never know:
    Date,Source,Severity,Message
    04/01/2014 20:00:21,spid14s,Unknown,last target outstanding: 358, avgWriteLatency 20
    04/01/2014 20:00:21,spid14s,Unknown,average throughput: 5.08 MB/sec, I/O saturation: 5995, context switches 14539
    04/01/2014 20:00:21,spid14s,Unknown,FlushCache: cleaned up 72812 bufs with 3099 writes in 112026 ms (avoided 476 new dirty bufs) for db 9:0
    04/01/2014 19:53:56,spid14s,Unknown,last target outstanding: 708, avgWriteLatency 33
    04/01/2014 19:53:56,spid14s,Unknown,average throughput: 35.88 MB/sec, I/O saturation: 25622, context switches 43694
    04/01/2014 19:53:56,spid14s,Unknown,FlushCache: cleaned up 640748 bufs with 25437 writes in 139511 ms (avoided 59488 new dirty bufs) for db 9:0
    04/01/2014 19:44:13,spid14s,Unknown,last target outstanding: 682, avgWriteLatency 75
    04/01/2014 19:44:13,spid14s,Unknown,average throughput: 55.22 MB/sec, I/O saturation: 24846, context switches 43655
    04/01/2014 19:44:13,spid14s,Unknown,FlushCache: cleaned up 646031 bufs with 25310 writes in 91397 ms (avoided 118 new dirty bufs) for db 9:0
    04/01/2014 18:34:03,spid14s,Unknown,last target outstanding: 194, avgWriteLatency 16
    04/01/2014 18:34:03,spid14s,Unknown,average throughput: 9.68 MB/sec, I/O saturation: 4396, context switches 8644
    04/01/2014 18:34:03,spid14s,Unknown,FlushCache: cleaned up 78398 bufs with 3367 writes in 63280 ms (avoided 77538 new dirty bufs) for db 10:0

  • Sharing information between tiled view and view bean

    I have come across one more problem. I have a search result page. This search result page has a static text field and a repeated group (I simplified the page description for explanation purposes). In the NetD implementation they maintain a page-level attribute (say, a boolean haveSensitiveCustomers) and set this attribute in the afterDataObjectExecute event. (This data object is associated with the repeated.) In the end-display event of the static text field, they display a message (say, "due to Registration type, not all customers meeting the criteria are listed") if the haveSensitiveCustomers flag is set.
    After migration, the boolean flag and static text field moved to SearchResultViewBean and the afterDataObjectExecute event moved to the tiled view bean. The actual processing is more complicated than this explanation; however, it boils down to sharing information between the tiled view bean and its parent bean. How do we achieve this in the migrated application?
    One way is to add getter methods in the child tiled view bean (to access them in the parent view bean, call getRepeated1, cast to the actual type, and invoke the get methods).

    Probably the typical solution (I say typical because I don't yet know) will be to do what you suggest: provide methods between views that can be used to determine the state needed for processing like this.
    However, let me alert you to something that's different from ND, and which may cause you some trouble. In ND, all retrieving DataObjects associated with a page executed at one time, and the afterDataObjectExecute event fired before any display processing began. In JATO, however, tiled views are independent objects, and any models associated with them only execute when the tiled view is first displayed.
    Therefore, if the static text field you refer to appears in the page before the tiled view, then the tiled view will not have executed its associated model before the static text field is rendered. You will never see the static text field display the text you want, because you won't have the information at display time. (If the text field appears after the repeated, then it's no problem, as the display of the tiled view will have executed the associated model before the field displays.)
    The solution is to manually reference the tiled view and its associated model before they would normally execute. You would execute the model and set the tiled view's setAutoRetrieveEnabled() to false to prevent it from executing the model a second time. Perhaps the easiest thing to do would be this:
    beforeStaticTextDisplay(...)
    {
        // Force the tiled view to execute its associated model now
        Repeated1TiledView tiledView =
            (Repeated1TiledView) getChild("Repeated1");
        tiledView.beginDisplay();
        // Keep the model from executing a second time at display time
        tiledView.setAutoRetrieveEnabled(false);
    }
    Although the beginDisplay() method will be executed twice in this case (once deliberately, above, and later during actual display), there should be no overhead. The beginDisplay() method doesn't do anything anyway except execute associated auto-retrieving models and fire the afterAllModelsExecute() event.
    Mike, do you concur or have any comments?
    This explanation rests on an understanding of a number of other subjects, some of which you may not be fully familiar with. Feel free to ask further questions about this explanation.
    Todd
    Todd Fast
    Senior Engineer
    Sun/Netscape Alliance
    todd.fast@e...


  • No shared memory

    Hi,
    In BW PRD, while executing a report, we are getting the error message: "An exception with type CX_SY_EXPORT_NO_SHARED_MEMORY occurred, no space left in shared memory."

    At OS level, run:
    sappfpar check pf=<instance profile name>
    and correct the values in the instance profile as suggested in its output, or increase the shared pool size in the DB profile.

  • Question on use of shared memory objects during CIF executions

    We have a CIF that runs in the background via program RIMODACT, which is invoked from our external job scheduler. (The scheduler kicks off a job - call it CIFJOB - and the first step of this job executes RIMODACT.)
    During the execution of RIMODACT, we call a BAdI (an implementation of SMOD_APOCF005).
    In the method of this BAdI, we load some data into a shared memory object each time the BAdI is called. (We create this shared memory object the first time the BAdI is called.)
    After program RIMODACT finishes, the second step of CIFJOB calls a wrapper program that calls two APO BAPIs.
    Will the shared memory object be available to these BAPIs?
    The reason I'm asking is that the BAPIs execute on the APO app server, but the shared memory object was created in a CIF exit called from a program executing on the ECC server (RIMODACT).

    I know what you're saying, but it doesn't apply in this case (I think).
    The critical point is that we can tie the batch job to one ECC app server. In the first step of this job (the one that executes RIMODACT to do the CIF), we build the itab as an attribute of the "root" shared memory object class.
    In the second step of the batch job, we attach to the root class we built in the first step, extract some data from it, and pass these data to a BAPI that we call on the APO server. (This is what I meant by a "true" RFC - the APO BAPI on the APO server is being called from a program on the ECC server.)
    So the APO BAPI never needs access to the ECC shared memory object - it gets its data passed in from a program on the ECC server that does have access to the shared memory object.
    Restated this way, is the solution correct?

  • Propagate data in Shared memory.

    Hello Gurus,
    I need some help regarding propagating data between application servers in shared memory.
    There is a button in SHMM to propagate data between the various app servers.
    Question: what are the configuration steps to distribute the data? Is there any document for this?
    Regards,
    Abhi.

    You might find this useful... look at the advanced features section for the method calls used for propagation. SHMA/SHMM and INDX-like tables are cool, huh?
    [http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/98dedf90-0201-0010-7790-b299be258b63?quicklink=events&overridelayout=true]

  • [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL

    Hi All,
    I am running an SSIS solution that runs 5 packages in sequence. Only one package fails, with the error:
    [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
    I have added myself to the performance counters group.
    I am running Windows 7 with SSIS 2008.
    Any ideas would be appreciated. I have read that some have disabled the warning, but I cannot figure out how to disable a warning.
    Thanks.
    Ivan

    Hi Ivan,
    A package would not fail because of the warning itself; the warning just means the account executing the package is not privileged to load the performance counters, and it can safely be ignored.
    To fix it, see http://support.microsoft.com/kb/2496375/en-us
    So the package either has an actual error, or it actually runs.
    Arthur
