Replacing files in communication between processes

Hello,
I need Java code as a wrapper to run two processes. The first one is a stand-alone program that produces a file. The second process, implemented with C API functions through JNI, uses that file as input.
I wonder if it would be possible to avoid using the file (i.e. to create two threads and make them communicate via a memory buffer). Is it possible? Any other considerations?
Thanks in advance.

Try to use sockets.
Or, if possible, make your data producer program write its data to its standard output and make your data consumer program read from its standard input. Then your Java code can direct the producer's output to the consumer's input, or you can simply use the pipeline feature of the shell.
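A minimal sketch of that idea, assuming the producer can be told to write to stdout instead of a file; the program name and the consume() entry point are placeholders, not anything from the original post:

import java.io.IOException;
import java.io.InputStream;

// Sketch only: start the producer, read its stdout into memory, and hand the
// bytes straight to the consumer side (a stand-in for the JNI call), so no
// intermediate file is needed. "producer.exe" and consume() are hypothetical.
public class PipeWrapper {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process producer = new ProcessBuilder("producer.exe").start();
        byte[] data;
        try (InputStream fromProducer = producer.getInputStream()) {
            data = fromProducer.readAllBytes();   // producer's standard output, no temp file
        }
        producer.waitFor();
        consume(data);                            // placeholder for the JNI-backed consumer
    }

    private static void consume(byte[] data) {
        System.out.println("received " + data.length + " bytes");
    }
}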

Similar Messages

  • How to keep waiting time between processed messages !!

    Hi Folks,
    I have got a scenario that requires a waiting time between processed messages. The problem is as follows:
    File --> Proxy scenario. I receive 15 messages from the sender side (same message structure), so I am working with one interface. The file is picked up and transformed, the message is split into 2 messages, and the messages are delivered to the receiver. I am using a BPM with 7-8 steps: a receive step, a block, a message transformation step, internal block 1 for sender 1, and internal block 2 for sender 2.
    Everything works fine and the messages reach the receiver properly. But the customer requirement is that a wait step is needed between processed messages before sender 1. I have put in a wait step, but PI still picks all messages in one shot, waits for 2 minutes, and after 2 minutes sends all messages at the same time, so this does not work.
    I have also tried a wait step in the mapping (Sarvesh gave an excellent idea), but PI still works the same way.
    Can someone please explain why the messages are not waiting message by message? I am using EOIO with a queue name and file processing mode "By Name" (I have tried "By Time" as well). I have given priority to this queue. On the BPM queue assignment: one queue.
    Please, I am expecting a positive answer!
    Many Thanks in Advance
    San

    Hi Rudolf Yaskorski,
    Not sure about your PI release and BPM model: do you create a separate process instance for each file, or do you process the files by collecting them in one single instance? Are you using parallelization within your ccBPM?
    I am using serialization. I don't think BPM can do parallelization until PI 7.0, but PI 7.11 has got queue assignment. I am using one queue, so this must be serialization.
    To me it looks like your issue is not in ccBPM but rather in the polling of the files (as per your post, the file CC polls all 15 files in one shot). So if you wish not to poll the files at the same time, some workaround is required. Possible options you could check out:
    A. Either implement "wait" in your mapping based on file name or other criteria (e.g. directory name). Check out if respective BPM instances are really created at different times.
    I have used a wait step in the mapping. These 15 messages have to go through one interface, so I am using one interface. But I have checked the mapping processing time of all messages on the receiver system, and they show the same timing, even though I put a 40000 ms waiting time in the mapping.
    B. Try polling different files (or use different directories) with different channels and coordinate the starting / stopping of your channels by scheduling availability for each CC in RWB. E.g. you poll file 1 with CC 1, then start CC 2 two minutes later and poll file 2, and so on.
    I am not clear about this. In the BPM the wait step is working, but it keeps all the messages (which come through one interface) waiting and then releases them all at the same time.
    I don't know how to resolve this. I have tried transport acknowledgment, but all messages go to the receiver system, wait there in the priority queue, and are processed in EOIO, which takes very long. Rather than having all messages go and sit in the queue, I want to release them message by message with a 2-minute gap. How, please?
    Kind Regards
    San
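    For illustration only, the timing pattern San is after (one message sent, then a fixed gap before the next) looks roughly like the Java sketch below; in PI the delay would actually live in a mapping UDF or be achieved by staggering the file polls, and all names here are placeholders.

    import java.util.List;

    // Sketch, not PI code: process messages strictly one by one with a
    // two-minute gap between them. send() is a stand-in for the real delivery.
    public class DelayedDispatch {
        public static void main(String[] args) throws InterruptedException {
            List<String> messages = List.of("msg01", "msg02", "msg03");
            for (String m : messages) {
                send(m);
                Thread.sleep(2 * 60 * 1000L);   // 2-minute gap before the next message
            }
        }

        private static void send(String m) {
            System.out.println("sending " + m);
        }
    }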

  • HELP - Receiving error message "The file has not been processed. An error has occurred which stopped the processing of the PDF"

    Have been using Adobe CreatePDF for many months successfully.
    Only recently, over the past few days, I have been experiencing errors for periods of 3-5 hours on every attempt to convert/combine files. The error reads: "The file has not been processed. An error has occurred which stopped the processing of the PDF".
    It will stay like this, and no amount of refreshing the cache or logging in and out will help. I'm running Firefox 34.0.5 on Windows 7 64-bit. The only solution I've found is to wait until the next day, when it seems to be working again. The only explanation I can think of is some unexplained maintenance Adobe is running behind the scenes, because I'm often working late between 12am and 4am.
    When this problem happens, no file types will convert/combine; I've tested JPEG, PDF and Word docs to PDF.
    I log in and out of my account, I've cleared my cache, and I'm using Firefox 34.0.5, which has always worked in the past. I just don't understand how this error can be happening for such long periods. I'm starting to reconsider this product when there are free alternatives to simply combine my PDF docs! Please help.
    Tom.

    I am still experiencing this problem. Customer service from Adobe has been abysmal!
    I called Adobe tech support at 2am Australian time (over a week ago) and got onto a help centre in India. I was walked through all the troubleshooting steps, and they remotely accessed my PC and confirmed that it was not my PC that was at fault. They reset my password and dialled into my Adobe account from their end to attempt a simple conversion and export to PDF, which failed. I was told that this problem is not something they can fix and is the result of something on the back end of their software.
    They advised me that the problem would be passed on and fixed within 12 hours, and that they would call and email me to confirm everything was OK. I gave my phone number (with area and country code) and my email but have not had any communication back.
    I am extremely dissatisfied, and just want this software to function. Is that too much to ask?
    Note: this problem is now happening all the time, regardless of when I use it.
    Tom

  • Want to Archive (or) Delete the file after finishing processing the message

    Hi,
    My scenario is a File-to-File scenario that is working fine. For performance reasons I want to archive the file (at the sender communication channel level) after the processing is finished. That means only after the file has been generated in the target directory do I want to delete the file in the source directory.
    Is there any solution for this?
    Please suggest.
    Regards
    Jain

    Thanks for your suggestions.
    I know that the (Delete, Archive) option is there at the sender communication channel level, and it is working fine in my scenario. But my requirement is to delete the file only after the process has finished, because sometimes the message fails at the mapping level. From an enhanced-functionality point of view, I want to delete or archive the file only after the process is finished, that is, after the file has been generated in the target directory.
    Any help will be appreciated.
    Regards
    Jain

  • 45 min long session of log file sync waits between 5000 and 20000 ms

    45 min long log file sync waits between 5000 and 20000 ms
    Encountering a rather unusual performance issue. Once every 4 hours I am seeing a 45-minute-long log file sync wait event being reported using Spotlight on Oracle. For the first 30 minutes the event wait is approximately 5000 ms, followed by an increase to around 20000 ms for the next 15 minutes, before it rapidly drops off and normal operation continues for the next 3 hours and 15 minutes, after which the cycle repeats itself. The issue appears to maintain its schedule independently of restarting the database. Statspack reports do not show an increase in commits or executions, or any new SQL running, during the time the issue is occurring. We have two production environments, both running identical applications with similar usage, and we do not see the issue on the other system. I am leaning towards this being a hardware issue, but the 4-hour interval regardless of load on the database has me baffled. If it were a disk or controller cache issue, one would expect to see the interval change with database load.
    I cycle my redo logs and archive them just fine, with log file switches every 15-20 minutes. Even during this unusually long and high session of log file sync waits I can see that the redo log files are still switching and being archived.
    The redo logs are on a RAID 10; we have 4 redo logs of 1 GB each.
    I've run statspack reports on hourly intervals around this event:
    Top 5 Wait Events
    Event / Waits / Time (cs) / % Total Wait Time
    log file sync 756,729 2,538,034 88.47
    db file sequential read 208,851 153,276 5.34
    log file parallel write 636,648 129,981 4.53
    enqueue 810 21,423 .75
    log file sequential read 65,540 14,480 .50
    And here is a sample while not encountering the issue:
    Top 5 Wait Events
    Event / Waits / Time (cs) / % Total Wait Time
    log file sync 953,037 195,513 53.43
    log file parallel write 875,783 83,119 22.72
    db file sequential read 221,815 63,944 17.48
    log file sequential read 98,310 18,848 5.15
    db file scattered read 67,584 2,427 .66
    Yes, I know I am already tight on I/O for my redo even during normal operations; yet my redo and archiving work just fine for 3 hours and 15 minutes (11 to 15 log file switches). These normal switches result in a log file sync wait of about 5000 ms for about 45 seconds while the 1 GB redo log is being written and then archived.
    I welcome any and all feedback.

    Lee,
    log_buffer = 1048576. We use a standard of 1 MB for our log buffer and we've not altered the setting. It is my understanding that Oracle typically recommends that you not exceed 1 MB for the log_buffer, stating that a larger buffer normally does not increase performance.
    I would agree that tuning the log_buffer parameter may be a place to consider; however, this issue lasts for ~45 minutes once every 4 hours regardless of database load. So for 3 hours and 15 minutes, during both peak usage and low usage, the buffer cache, redo log and archival processes run just fine.
    A bit more information from statspack reports:
    Here is a sample while the issue is occuring.
    Snap Id Snap Time Sessions
    Begin Snap: 661 24-Mar-06 12:45:08 87
    End Snap: 671 24-Mar-06 13:41:29 87
    Elapsed: 56.35 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608 log_buffer: 1048576
    db_block_size: 8192 shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 615,141.44 2,780.83
    Logical reads: 13,241.59 59.86
    Block changes: 2,255.51 10.20
    Physical reads: 144.56 0.65
    Physical writes: 61.56 0.28
    User calls: 1,318.50 5.96
    Parses: 210.25 0.95
    Hard parses: 8.31 0.04
    Sorts: 16.97 0.08
    Logons: 0.14 0.00
    Executes: 574.32 2.60
    Transactions: 221.21
    % Blocks changed per Read: 17.03 Recursive Call %: 26.09
    Rollback per transaction %: 0.03 Rows per Sort: 46.87
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 98.91 In-memory Sort %: 100.00
    Library Hit %: 98.89 Soft Parse %: 96.05
    Execute to Parse %: 63.39 Latch Hit %: 99.87
    Parse CPU to Parse Elapsd %: 90.05 % Non-Parse CPU: 85.05
    Shared Pool Statistics Begin End
    Memory Usage %: 89.96 92.20
    % SQL with executions>1: 76.39 67.76
    % Memory for SQL w/exec>1: 72.53 63.71
    Top 5 Wait Events
    Event / Waits / Time (cs) / % Total Wait Time
    log file sync 756,729 2,538,034 88.47
    db file sequential read 208,851 153,276 5.34
    log file parallel write 636,648 129,981 4.53
    enqueue 810 21,423 .75
    log file sequential read 65,540 14,480 .50
    And this is a sample during "normal" operation.
    Snap Id Snap Time Sessions
    Begin Snap: 671 24-Mar-06 13:41:29 88
    End Snap: 681 24-Mar-06 14:42:57 88
    Elapsed: 61.47 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608 log_buffer: 1048576
    db_block_size: 8192 shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 716,776.44 2,787.81
    Logical reads: 13,154.06 51.16
    Block changes: 2,627.16 10.22
    Physical reads: 129.47 0.50
    Physical writes: 67.97 0.26
    User calls: 1,493.74 5.81
    Parses: 243.45 0.95
    Hard parses: 9.23 0.04
    Sorts: 18.27 0.07
    Logons: 0.16 0.00
    Executes: 664.05 2.58
    Transactions: 257.11
    % Blocks changed per Read: 19.97 Recursive Call %: 25.87
    Rollback per transaction %: 0.02 Rows per Sort: 46.85
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 99.02 In-memory Sort %: 100.00
    Library Hit %: 98.95 Soft Parse %: 96.21
    Execute to Parse %: 63.34 Latch Hit %: 99.90
    Parse CPU to Parse Elapsd %: 96.60 % Non-Parse CPU: 84.06
    Shared Pool Statistics Begin End
    Memory Usage %: 92.20 88.73
    % SQL with executions>1: 67.76 75.40
    % Memory for SQL w/exec>1: 63.71 68.28
    Top 5 Wait Events
    Event / Waits / Time (cs) / % Total Wait Time
    log file sync 953,037 195,513 53.43
    log file parallel write 875,783 83,119 22.72
    db file sequential read 221,815 63,944 17.48
    log file sequential read 98,310 18,848 5.15
    db file scattered read 67,584 2,427 .66

  • Bug in Open/Create/Replace File

    I found this problem in LabVIEW 2009. The boolean indicator doesn't seem to work properly in the first snippet: it returns TRUE even with a valid path. With some trial and error I found a workaround that makes it work properly for now. I just wire an Error Out after the Open/Create/Replace File (second snippet). Is this a bug? Am I doing something wrong?
    I've got a lot of code that uses this to check for existing files and then wires the output to a case statement. I also know there is a Check if File/Folder Exists.vi available. I would hate to spend a lot of time changing all my code if this is a known bug that will be fixed in the next update. If it is not a known bug, where can I submit this?

    It's not a bug. You have a race condition between the indicator and the close reference.
    Use it in this way:
    Andrey.

  • Secure the file/data transfer between XI and any third-party system

    Hi All,,
    I would like to use to "secure" SSH on OS Level the file/data transfer between XI and any third-party system Run OS Command before processing and OS command After processing. right now my XI server installed on iSeries OS.
    with ISeries we can't call the Unix commands hope we need to go for AS400 (CL) Programming. If we created the AS400 programm how i can call that in XI.
    If any one have idea pls let me know weather it will work or not.
    Thanks in adavance.
    Venkat

    Hi,
    Thanks for your reply.
    I have read some blogs like /people/krishna.moorthyp/blog/2007/07/31/sftp-vs-ftps-in-sap-pi about calling a Unix shell script from XI.
    But as far as I know, on the iSeries OS we cannot write such a shell script, so we need to go for an AS400 program. If we go with AS400, how do we need to call that program? Whether it will work or not I am not sure, so I need some help there, please.
    Thanks,
    Venkat

  • Windows: Passing Sockets Between Processes?

    I've done a bit of looking around, and for the most part, it seems to be impossible to pass sockets between separate windows processes -- since a socket only exists inside one process, and it's not serializable, etc. Apparently, *nix operating systems will store a socket in file descriptors after a process exits, so it's possible to transfer sockets, but not on a Windows OS.
    So, I just want to ask: am I missing anything? Is there some way to:
    a) transfer sockets between processes (at least within a local network) or
    b) dynamically reload the bytecode of a process without dropping sockets? I.e., some sort of "server copyover-reboot" is what I'm going for.
    So, is it possible, or does Windows just fall behind on this one?

    Understood... but exactly how in the heck would one go about passing sockets from the parent process? I tried it out a bit, and System.inheritedChannel() always seems to return null in the child process, even though the parent waits to close its sockets until after the child has started and invoked the method.
    Edit:
    Runtime.getRuntime().exec(new String[]{"cmd", "/c", "start", "java", "ServerMain","reboot"});
    Would this be causing an issue? This is how I execute the new process, and it first invokes cmd /c start so that it opens in a new cmd window. Would it be sending its channels to that intermediate process, complicating the new ServerMain process's inheriting of the channels?
    Edit again:
    I'm starting to assume that my Runtime.exec is closing the file descriptors, so it's generally impossible to pass them to the subprocess. The current System.inheritedChannel() seems as if it was made to work with inetd, and not much else.
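    As a hedged illustration of the point about the intermediate cmd.exe: launching the new JVM directly with ProcessBuilder (sketch below, class and argument names taken from the post) removes the extra cmd window from the chain, though System.inheritedChannel() is still designed for inetd-style launchers and may well stay null.

    import java.io.IOException;

    // Sketch only: start the child JVM directly instead of via "cmd /c start",
    // so no intermediate cmd.exe sits between parent and child. inheritIO()
    // shares the parent's stdin/stdout/stderr with the child process.
    public class Relauncher {
        public static void main(String[] args) throws IOException {
            ProcessBuilder pb = new ProcessBuilder("java", "ServerMain", "reboot");
            pb.inheritIO();
            Process child = pb.start();
            System.out.println("started child process: " + child);
        }
    }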

  • Difference between Process integration scenario and configuration scenario.

    Hi,
    Please tell me what the difference is between a process integration scenario and a configuration scenario in SAP PI.
    Is a process integration scenario a BPM concept? Also, what are its advantages over a configuration scenario?
    Regards,
    Manigandan

    Hi Manigandan,
    Broadly speaking:
    Process integration scenario: there is a ccBPM, like a workflow in ABAP; even when you build one inside PI, a workflow is generated in the ABAP stack.
    Configuration scenario: it is an object that links the Integration Directory and the ESR/IR when you define the logic of your implementation. It isn't mandatory, but you can organize your developments better if you create a scenario for a "real scenario"; for example, a file-to-SOAP integration can be defined as a configuration scenario.
    Regards.

  • Error 7 occurred at open/create/replace file

    Hello,
    I have searched the forum and could not find any useful information regarding my problem, so I hope you can shed some light. My VI creates a new CSV (text) file, using the time to name the file, so each time a new file is created with a different name, and the file path is not a relative path. I can run the VI both in the development environment and as a stand-alone executable without any problem. But,
    One of my colleagues cannot run the executable on her machine because of "Error 7 occurred at Open/Create/Replace File in xxxx.vi. Possible reason: LabVIEW: File not found. The file might be in a different location or deleted. Use the command prompt or the file explorer to verify that the path is correct."
    My question is why only she cannot run this exe on her machine. I have checked a few different PCs in my office and all of them can run this exe without problems, so I am pretty sure the code is fine.
    I have already asked her to:
    Run the executable as administrator
    Save the new file to other drives (not the C: drive)
    but she still cannot run the executable. I don't think this is a permission issue; otherwise LabVIEW would give a different error, Error 8 I believe.
    Both her PC and my PC are running Windows 7 Professional 64-bit. The only difference I can see is that she is based in the US and I am based in Australia. Does anyone have any idea? Please help.
    Thanks,
    Sherman

    I'm not sure, but I expect something like the below.
    As you said, you're creating the file name with the time. Make sure the file name is correct and there are no special characters in it (like / or :). If you're formatting the time to a string with "Format Date/Time String.vi" (while creating the file name), the string will change based on the UTC format input.
    UTC format specifies whether the output string is in Universal Time or in the configured time zone for the computer. If TRUE, the date/time string is in Universal Time. The default is FALSE.
    Also make sure you have access rights to create files in the mentioned folder. Try to create a new file there manually.
    Munna
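    To illustrate Munna's point in code (a Java sketch, since the thread's original wrapper question is Java; the names and format pattern are assumptions, not the VI's actual logic): a time-based name such as "12:30:15" contains colons, which Windows rejects in file names, while a pattern without ':' or '/' is safe on every machine.

    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    // Sketch: build a timestamped file name using only characters that are
    // legal in Windows file names (no ':' or '/'), so the file can be created
    // regardless of the machine's locale or time-format settings.
    public class SafeFileName {
        public static void main(String[] args) {
            DateTimeFormatter safe = DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss");
            String name = "results_" + LocalDateTime.now().format(safe) + ".csv";
            System.out.println(name);   // e.g. results_20240315_123015.csv
        }
    }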

  • Error 6 occurred at Open/Create/Replace File in NI_Excel.lvclass:Save Report to

    I have an application that works on LV60 and when run in LV86 I get the following error:
    Error 6 occurred at Open/Create/Replace File in NI_Excel.lvclass:Save Report to File.vi->SWF001 Test.vi
    Possible reason(s):
    LabVIEW: Generic file I/O error.
    =========================
    NI-488: I/O operation aborted.
    C:\SWF001 IO Files\1_Single.xls
    The application does:
    New Report.vi
    Excel Get Worksheet.vi
    Excel Easy Table.vi
    Excel Easy Text.vi (4 of them)
    Save Report to File.vi
    Dispose Report.vi
    all in a nice chain (errors and report-in/outs) as is typical
    For some reason I'm getting this error. I'm using it with Excel 2003 SP3 and the spreadsheet contains macros (I get a prompt asking "should I enable macros?", which I answer yes to, and which I didn't get with LV60 and the older version of Excel). This is probably not the problem, but it deserves mention.
    The file does exist on the system (and it appears the application is writing to it successfully, though perhaps truncated as the error indicates; I can't tell for sure).
    I can open and save the Excel file independently of LV just fine.
    Any clues?
    Many thanks! -David

    Unfortunately, Error 6 is a catch-all category for all sorts of file I/O errors that LV doesn't know how to handle. For example, while it doesn't sound like your problem, trying to access a file that resides on a network drive that isn't currently mounted can generate an Error 6.
    Have you tried saving the file to a different location?
    If you restart your computer and run the VI does it generate the error the very first time you run it?
    Can you post your code - or at least a subset of it that demonstrates the problem?
    Is Excel open when you are running the code?
    Also, as a test, try modifying your code so the file has a unique name every time you save it.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Error 7 occurred at Open/Create/Replace File in Open Config Data.vi

    I have a .vi that was working fine with LabVIEW 7.0. I have now upgraded to LabVIEW Professional 8.0 and am trying to build a stand-alone executable. When I build the executable, build an installer, and then try to install it on another computer and run it, I get the "Error 7 occurred at Open/Create/Replace File in Open Config Data.vi" error. Any ideas?

    Hello,
    In addition to the LabVIEW Help, there is an Application Note on Distributing Applications with the LabVIEW Application Builder and a knowledge base article What Is The Build Specification Feature In Application Builder? that might be helpful.
    Also, here is a knowledge base article on what Dennis has described, and it provides an example that calls an extra Strip Path function. Also, instead of using a Match Pattern function, you could use the App.Kind property node (use Open Application Reference, Property Node: Application -> Kind, and Close Reference) to determine if you are running in a run-time system.
    Good luck and best regards,
    Shakhina P.
    Applications Engineer
    National Instruments

  • Error 5 occurred at Open/Create/Replace File in Write spreadsheet String.vi

    Hi everyone,
    can anyone help me with this problem?
    "error 5 occurred at Open/Create/Replace File in Write spreadsheet String.vi "
    I've been using this part of the program for over a year and suddenly this error occurs. But not always; mainly at the very beginning of my tests, when the file should not be open.
    Info: I'm using a real-time PXI system. Maybe the amount of data (about 2 MB) causes the problem?
    Grüße
    Meike

    Hi Meike,
    Is the file opened by a different program? Do you try to access it by FTP in parallel with your VI?
    You could use basic file functions instead of WriteSpreadsheetFile. That way you could open the file before starting the loop, keep it open all the time and close it once you're finished - with the added benefit of easier error handling…
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
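    For what it's worth, the open-once pattern GerdW describes looks like this in Java (an illustrative sketch only; the thread itself is LabVIEW, and the file name and data are placeholders):

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Sketch: open the file once before the loop, write all rows, and close it
    // exactly once at the end, instead of re-opening it on every iteration
    // (which is where "file already in use" style errors tend to appear).
    public class SingleOpenLogger {
        public static void main(String[] args) throws IOException {
            try (BufferedWriter out = Files.newBufferedWriter(Paths.get("results.csv"))) {
                for (int i = 0; i < 100; i++) {
                    out.write(i + "," + (i * i));   // placeholder measurement row
                    out.newLine();
                }
            }   // the try-with-resources block closes the file here
        }
    }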

  • Can I share files, especially email, between 2 Macs via iCloud?

    I have 2 Macs - an iMac for business at a fixed location and a Powerbook (which is much older) for traveling.  I want to share files, especially email, between the 2 Macs.  Can I do this using iCloud?

    Some more info is probably necessary.  My Powerbook is NOT using Lion.  In fact, it can't because its processor is too old.  Can I use iCloud with it?
    Also I use Entourage (Microsoft Office) for Mail and that's where I really need to share data, emails, etc.

  • How do I share files on iCloud between different apps?

    Hi,
    how do I share files on iCloud between different apps?
    I would like to edit simple plain text files on my Mac and my iPad using different apps, but they don't see each other's content. How do I fix this?
    Example:
    when I create a .txt file:
    Byword on Mac syncs nicely with Byword on iOS.
    Ulysses 3 on Mac syncs nicely with Daedalus Touch.
    BUT, cross-sync is not possible.... :/
    How can I change this? - I would like to work with Byword on my Mac and Daedalus on the go.
    Thanks in advance!

    Cross-sync only works if an app on the iDevice allows it. For example, some apps (editors) sync with Dropbox (iCloud usually isn't the thing to use), but only in their own folders.
    If you save a file from the Mac to the app's own folder, A, on Dropbox, and on the device you use a different app that uses its own folder, B, then you won't be able to reach folder A on the device.
    One thing to look for in editors that support Dropbox (most do on a device; clearly all do on a Mac) is whether they access Dropbox from the root folder. From there you can drop down to any subfolder and reach the files you want.
