Process buffer size

Hello. I have a problem in my app, described as follows.
I need to create a process running a C++ server program that receives text. I create the process in my Java class and open a socket to communicate with the C++ app. The problem is that after I send a specific amount of data, the server stops responding. If I execute the server outside Java (from cmd), there is no problem. After looking at the Process API, it seems that there is a bug concerning processes created via Java's Runtime.exec.
Is there a way around this? I can't use JNI... I must use a socket and create the instance in my Java class.

After getting the process's input stream (and reading it with readLine), the problem was solved. It seems that the stream was getting full or something. Anybody care to explain? Why do I need to care about the process's input stream if I use a socket to connect to the process, and not the process's output/input streams?

Because the process is writing to stdout and stderr and those buffers get full. The process will then halt.
Kaj
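A minimal sketch of that fix: drain both of the child's streams on background threads so the pipe buffers never fill. Class and method names here are illustrative, not from the original post:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ProcessLauncher {
    /** Drains one of the child's output streams on its own daemon thread. */
    static Thread gobble(final InputStream in) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                while (r.readLine() != null) {
                    // Discard (or log) the output; the point is just to keep
                    // reading so the child's stdout/stderr pipes never fill up.
                }
            } catch (IOException ignored) {
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    /** Starts a process and immediately drains both stdout and stderr. */
    public static Process launchAndDrain(String... command) throws IOException {
        Process p = Runtime.getRuntime().exec(command);
        gobble(p.getInputStream()); // child's stdout
        gobble(p.getErrorStream()); // child's stderr
        return p;
    }
}
```

After launchAndDrain returns, the socket connection to the C++ server can be opened exactly as before; the drain threads keep running in the background for the life of the process.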

Similar Messages

  • Process Buffer vs. I/O Buffer

    I'm having issues with my current song where Logic sometimes creates very glitchy-sounding audio when automation is playing back on EXS-24 tracks. The glitching comes and goes. But rather than describe the problems (because they're quite numerous), I'm thinking that maybe it's time for me to make some adjustments to my Process Buffer and maybe even my I/O buffer settings in order to alleviate these audio glitching problems.
    But as I think about it, I'm not sure I understand how the Process Buffer functions in relation to the I/O Buffer, and vice versa. Can someone 'splain?
    : - )

    OK... more news, FWIW... The plot gets thicker...
    To fill in the blanks:
    These dynamic-automation-creates-glitching/ratcheting-in-the-audio problems are occurring as I'm working on a scoring cue, so I have a movie window open. My I/O Buffer = 64, as it has been for nearly two years now. Process buffer size is set to "small", and as I'm finding, setting it to medium or large has no influence on whether or not I hear the glitching.
    There's still a sustain pedal issue, but I'm still working on that. I think it has to do with controller chasing, but anyway...
    So I figure that somehow I'm taxing the system, hence, glitching. How to "unburden" the processors? OK... turning off global tracks (video thumbnail) makes no difference. Neither does making the movie window size smaller. Neither does making the video thumbnail size lower. Neither does making the video cache smaller or larger.
    Closed my environment window on monitor #2 so that the moving faders wouldn't have to be rendered. Made no difference.
    Then I discovered something... if I close the movie window, the glitching goes away! Re-open the movie, and the glitching comes back immediately.
    But I have to see the film, right? So... what else to do? That brings me back to the (as best I can figure) lone wildcard -- the I/O buffer.
    As I said before, I've used a buffer size of 64 for a long time. I thought the Quad PPC could get away with that.
    Nope.
    So I set it to 1024, and O....M....G....
    With the video window open, no glitching!!!
    AND... suddenly, everything sounded 1,000,000 times better!!!!!!!!!
    I worked my way down to 512, then 256. Turns out that I get the same huge improvement to the audio at 256 and no more glitching! It's like a whole different machine. Fidelity? Improved! Imaging? Improved! Clarity of my Space Designers? Improved!
    So...
    As of now, it seems like it's an I/O buffer size issue.

  • Exceeds data buffer size discarding this snmp request

    Morning
    Cisco Prime LMS 4.2.3 is sending SNMP requests that are too big for the ASA interface buffer.
    LMS is running on Windows server
    incoming SNMP request (528 bytes) from IP address x.x.x.x  Port  50592  Interface "inside" exceeds data buffer size, discarding this SNMP  request.
    212005: incoming SNMP request (%d bytes) from %s exceeds data buffer size, discarding this SNMP request.
    It is very much like this error
    Error Message    %PIX-3-212005: incoming SNMP request (number bytes) on interface
    interface_name exceeds data buffer size, discarding this SNMP request.
    Explanation    This is an SNMP message. This message reports that the length of the incoming SNMP  request, destined for the firewall, exceeds the size of the internal data buffer (512 bytes) used for  storing the request during internal processing; therefore, the firewall is unable to process this request.  This does not affect the SNMP traffic passing through the firewall via any interface.
    Recommended Action    Have the SNMP management station resend the request with a shorter length,  for example, instead of querying multiple MIB variables in one request, try querying only one MIB  variable in a request. This may involve modifying the configuration of the SNMP manager software.
    How do I change the SNMP request size in LMS?
    I can only find the following, which might be an option:
    http://blogs.technet.com/b/mihai/archive/2012/05/14/reducing-the-maximum-number-of-oids-queried-in-a-single-batch-in-om-2012.aspx
    Any thoughts on the matter would be appreciated.
    We are just using the default settings with SNMPv3.

    Bug in LMS 4.2.3
    CSCtj88629            Bug Details
    SNMP packet size requests from LMS is too large
    Symptom:
    LMS sends SNMP requests larger than 512 bytes to the FWSM, so it rejects the requests.
    Conditions:
    This occurs with FWSM and ASA's.
    Workaround:
    None.
    http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCtj88629

  • Imp-00020 long column too large for column buffer size (22)

    Hi friends,
    I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform).
    Then I transferred the export file from the Unix server to a laptop (Windows platform).
    And tried to import this file into Oracle 10.2 on Windows XP.
    (Database Configuration of Oracle 10g is
    User tablespace 2 GB
    Temp tablespace 30 Mb
    The rollback segment of 15 mb each
    undo tablespace of 200 MB
    SGA 160MB
    PGA 16 MB)
    All the tables imported successfully except 3 tables, each of which has around 1 million rows.
    The error messages during import for these 3 tables are as follows:
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size(7)
    The main point here is that in all 3 tables there is no LONG or TIMESTAMP column (only VARCHAR/NUMBER columns are there).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. COMMIT=Y INDEXES=N (in this case it does not import the complete tables).
    3. First exported the table structures only, and then the data.
    4. Created the tables manually and tried to import into them.
    But all efforts failed; I am still getting the same errors.
    Can someone help me with this issue?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and the import were done on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, and some of them do not support Data Pump. By the way, shouldn't exp/imp work anyway?

  • Determining the max allowable buffer size

    I am working on a program which works with very large amounts of data. Each job that is processed requires the program to read through as many as several hundred files, each of which has tens of thousands of values. The result of the operation is to take the data values in these files, rearrange them, and output the new data to a single file. The problem is with the new order of the data. The first value in each of the input files (all several hundred of them) must be output first, then the 2nd value from each file, and so on. At first I tried doing this all in memory, but for large datasets, I get OutOfMemoryErrors. I then rewrote the program to first output to a temporary binary file in the correct order. However, this is a lot slower. As it processes one input file, it must skip all around in the output file, and do this several hundred times in a 60+ MB file.
    I could tell it to increase the heap size, but there is a limit to how much it will let me give it. I'd like to design it in a way so that I could allocate as big of a memory buffer as I can, read through all the files and put as much data in as it can into the buffer, output the block of data to the output file, and then run through everything again for the next block.
    So my question is, how do I determine the biggest possible byte buffer I can allocate, yet still have some memory left over for the other small allocations that will need to be done?

    The program doesn't append, it has to write data to the binary file out of order because the order that it reads data from the input files is not in the order that it will need to be output. The first value from all of the input files have to be output first, followed by the second value from every input file, and so on. Unless it is going to have several hundred input files open at once, the only way to output it in the correct order is to jump around the output file. For example, say that there are 300 input files with double values. The program opens the first input file, gets the first value and outputs at position 0. Then it reads the second value and must write it to position 300 (or byte position 8*300 considering the size of a double). The third value must go to position 600, and so on. That is the problem, it must output the values in a different order than it reads them.
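The jump-around writes described above map naturally onto java.io.RandomAccessFile's seek. A minimal sketch of that interleaved layout (names and the record arithmetic are illustrative, assuming 8-byte doubles):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class TransposeWriter {
    /**
     * Writes the values of one input file into the output file so that
     * value i of file f lands at record position i * numFiles + f,
     * i.e. first values of all files come first, then all second values, etc.
     */
    public static void writeColumn(RandomAccessFile out, double[] values,
                                   int fileIndex, int numFiles) throws IOException {
        final int DOUBLE_BYTES = 8;
        for (int i = 0; i < values.length; i++) {
            // Seek to this value's slot in the interleaved layout, then write.
            out.seek((long) (i * numFiles + fileIndex) * DOUBLE_BYTES);
            out.writeDouble(values[i]);
        }
    }
}
```

Each call still seeks all over the output file, which is exactly the slowness the post describes; buffering whole chunks in memory (next paragraph) is the usual mitigation.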
    Without giving any VM parameters to increase the heap size, Java will let me create a byte buffer of 32MB in size, but not much more. If the data to be output is more than 32MB, I will need to work on the data in 32MB chunks. However, if the user passes VM parameters to increase the heap size, the program could create a larger buffer to work on bigger chunks and get better performance. So somehow I need to find out what a reasonable buffer size is to get the best performance.
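One common way to answer the sizing question is to ask the Runtime how much heap could still be obtained and keep a safety margin. A sketch under that assumption (the fraction is an arbitrary, tunable guess, not a documented limit):

```java
public class BufferSizer {
    /**
     * Estimates the largest byte buffer that can be allocated while leaving
     * headroom for other allocations. fraction is a safety margin (e.g. 0.75).
     */
    public static int maxBufferSize(double fraction) {
        Runtime rt = Runtime.getRuntime();
        // Memory that could still become available to this JVM: the portion
        // of the max heap not yet claimed, plus what is currently free.
        long potentiallyFree = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
        long budget = (long) (potentiallyFree * fraction);
        // A single Java array is capped near Integer.MAX_VALUE elements.
        return (int) Math.min(budget, Integer.MAX_VALUE - 8);
    }
}
```

In practice an allocation can still fail because of heap fragmentation, so a fallback loop that halves the requested size on OutOfMemoryError is a sensible complement to this estimate.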

  • Unable to pre-process buffer before tranmission.  Error code(12/4154)

    Hi,
    I have a problem with Tuxedo 10 on SUSE 10 Enterprise 64-bit:
    114705.hermes!?proc.19294.337377072.0: LIBTUX_CAT:6031: ERROR: Unable to pre-process buffer before tranmission. Error code(12/4154)
    The first time I call from domain A to domain B, the process runs, but the second time I get the LIBTUX_CAT:6031 message.
    Please help me with this case.
    Javier Claros

    Hi :
    tuxedo@hermes:/home/sistemas/platinum/LIBPRE> more prbube.v
    VIEW prbube
    #type cname fbname count flag size null
    - PRBUBE_TIP_ID PR_TIP_IDf 1 - 4 -
    - PRBUBE_NUM_ID PR_NUM_IDf 1 - 14 -
    - PRBUBE_CONT PR_CONTf 1 - - -
    - PRBUBE_TIP_ID_B PR_TIP_ID_Bf 10 - 4 -
    - PRBUBE_NUM_ID_B PR_NUM_ID_Bf 10 - 14 -
    - PRBUBE_COD_NUA PR_COD_NUAf 10 - - -
    - PRBUBE_COD_FONDO PR_COD_FONDOf 10 - 3 -
    - PRBUBE_APE_PAT PR_APE_PATf 10 - 21 -
    - PRBUBE_APE_MAT PR_APE_MATf 10 - 21 -
    - PRBUBE_APE_T PR_APE_Tf 10 - 21 -
    - PRBUBE_P_NOMBRE PR_P_NOMBREf 10 - 21 -
    - PRBUBE_S_NOMBRE PR_S_NOMBREf 10 - 21 -
    - PRBUBE_FEC_NAC PR_FEC_NACf 10 - - -
    - PRBUBE_FEC_REG PR_FEC_REGf 10 - - -
    - PRBUBE_FEC_ACT PR_FEC_ACTf 10 - - -
    - PRBUBE_FEC_SYS PR_FEC_SYSf 10 - - -
    - PRBUBE_ERROR PR_ERRORf 1 - - -
    - PRBUBE_MENSAJE PR_MENSAJEf 1 - 61 -
    END
    This is my view32
    And my parameter call is
    BUSCA-LISTA-BENEFI SECTION.
    PERFORM INICIALIZA-PRBUBE (clean the view)
    PERFORM INICIALIZA-PREFML (clean the fml down attached code)
    PERFORM MOVER-A-PRBUBE
    PERFORM LLAMAR-PRBUSBEN
    PERFORM DO-TPTERM
    IF PRBUBE-ERROR = "N"
    INICIALIZA-PREFML SECTION.
    MOVE LENGTH OF PREFML-DATA-REC TO LEN
    MOVE LENGTH OF PREFML-DATA-REC TO FML-LENGTH
    CALL "FINIT32" USING PREFML-DATA-REC FML-REC.
    IF NOT FOK
    PERFORM DO-FML-ERROR
    MOVE "Falla INICIALIZACION DE PREFML" TO LOGMSG-TEXT
    PERFORM DO-USERLOG
    END-IF.
    LLAMAR-PRBUSBEN section.
    PERFORM CONV-PRBUBE-A-PREFML
    move LENGTH OF PREFML-DATA-REC TO LEN.
    move "FML32" to REC-TYPE.
    move SPACES to SUB-TYPE
    move "prbusben" to SERVICE-NAME.
    PERFORM LLAMAR-SERVICIO-PREFML
    PERFORM CONV-PREFML-A-PRBUBE
    IF PRBUBE-ERROR NOT = "N"
    MOVE "S" TO PRBUBE-ERROR
    INSPECT PRBUBE-MENSAJE REPLACING ALL WS-NULO BY SPACES
    IF PRBUBE-MENSAJE = SPACES
    MOVE "Servicio no disponible. Llamar a SINTESIS"
    TO PRBUBE-MENSAJE
    END-IF
    END-IF.
    LLAMAR-SERVICIO-PREFML SECTION.
    SET TPBLOCK TO TRUE.
    SET TPNOTRAN TO TRUE.
    SET TPNOTIME TO TRUE.
    SET TPSIGRSTRT TO TRUE.
    SET TPCHANGE TO TRUE.
    CALL "TPCALL" USING TPSVCDEF-REC
    TPTYPE-REC
    PREFML-DATA-REC
    TPTYPE-REC
    PREFML-DATA-REC
    TPSTATUS-REC.
    IF NOT TPOK
    PERFORM DO-FML-ERROR
    PERFORM DO-ERROR
    INITIALIZE LOGMSG-TEXT
    STRING "Falla en el servicio : " SERVICE-NAME
    DELIMITED BY SIZE INTO LOGMSG-TEXT
    PERFORM DO-USERLOG
    END-IF
    IF TPTRUNCATE
    INITIALIZE LOGMSG-TEXT
    STRING "Dato truncado en servicio : " SERVICE-NAME
    DELIMITED BY SIZE INTO LOGMSG-TEXT
    PERFORM DO-USERLOG
    END-IF.
    All my clients use the same code; it runs fine on 32-bit Linux, but on 64-bit Linux I have this problem.
    Javier

  • Buffer size while taking export of database

    Hi,
    I am taking an export backup of my Oracle database. When running exp, the default buffer size is shown as 4096. How do I determine the optimum buffer size so that I can reduce the export time?
    Thanks in advance..
    Regards,
    Jibu

    Jibu wrote:
    Hi,
    I am taking an export backup of my Oracle database. When running exp, the default buffer size is shown as 4096. How do I determine the optimum buffer size so that I can reduce the export time?
    Thanks in advance..
    Regards,
    Jibu

    In addition to Sybrand's comments about alternatives, I'd like to add that, just as a general class of problem, this is not the kind of thing I'd waste a lot of time on trying to come up with some magic optimal number. With exp and imp, I generally make the buffer about 10 times the default and forget it. This is the kind of thing where you very quickly reach a point of diminishing returns in terms of time spent "optimizing" the process vs. actual worthwhile gain in run time.
    By "worthwhile" gain, I mean this . . .
    In terms of a batch process like exp,
    -- is a 50% reduction in run time worthwhile when your starting point is a 1 hour run time?
    -- is a 50% reduction in run time worthwhile when your starting point is a 5 minute run time?
    -- is a 50% reduction in run time worthwhile when your starting point is a 30 second run time?
    -- how about if the run is scheduled for 2:00 am when there are no other processes and no customers on the database?

  • Redo log and buffer size

    Hi,
    I'm trying to size the redo logs and the log buffer in the best way.
    I have already adjusted the size of the redo logs so that they switch 1-2 times per hour.
    The next step is to tune the redo buffer to avoid waits.
    Currently this query gives me 896 as the result:
    SELECT NAME, VALUE
    FROM V$SYSSTAT
    WHERE NAME = 'redo buffer allocation retries';
    I suppose this should be near 0.
    LOG_BUFFER is set to 1M.
    And I read that "sizing the log buffer larger than 1M does not provide any performance benefit", so what can I do to reduce that wait time?
    Any ideas or suggestions?
    Thanks
    Acr

    ACR80,
    Every time you create a redo entry, you have to allocate space to copy it into the redo buffer. You've had 588 allocation retries in 46M entries. That's "close to zero":
    redo entries 46,901,591
    redo buffer allocation retries 588
    The 1MB limit was based around the idea that a large buffer could allow a lot of log to accumulate between writes, with the result that a process could execute a small transaction and commit - and have to wait a "long" time for the log writer to do a big write.
    If in doubt, check the two wait events:
    "log file sync"
    "log buffer space".
    As a guideline, you may reduce waits for "log buffer space" by increasing the size of the log buffer - but this introduces the risk of longer waits for "log file sync". Conversely, reducing the log buffer size may reduce the impact of "log file sync" waits but runs the risk of increasing "log buffer space" waits.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Apple TV 3 buffer size?

    I currently have an HTPC that I want to replace with the ATV3. I rip all my Blu-ray and DVD discs and encode them in x264 using Handbrake. This keeps my toddler from destroying my media (especially since many of them are Disney movies) and keeps all our movies easily accessible.
    Since the ATV3 uses an A5 CPU, it is limited to H.264 Level 4.0, which means my files' max bitrate can only be 25 Mbps. I use variable bitrates, which I currently have capped at 40 Mbps (so as not to exceed the source). I plan to change my future encodes to limit the max bitrate to 25 Mbps, but when you specify a max bitrate you need to specify a buffer size as well.
    So the question becomes: what is the hardware buffer size of the new ATV3? I would assume it is more than 25 Mbit, but I want to be sure before I start doing more encodes, so that I do not run into playback problems.

    Tree Dude wrote:
    I am not sure what I said that was against the T&Cs. Backing up disks I purchased is completely legal and a very common use for these types of devices.
    Apple Support Communities Terms of Use
    Specifically 2 below:
    Keep within the Law
    No material may be submitted that is intended to promote or commit an illegal act.
    Do not submit software or descriptions of processes that break or otherwise 'work around' digital rights management software or hardware. This includes conversations about 'ripping' DVDs or working around FairPlay software used on the iTunes Store.
    Do not post defamatory material.
    Your usage to any sane person constitutes 'fair use'.  Specific laws regarding this kind of thing vary from country to country, but Apple tend to frown on such discussions - their rules not ours.
    If you bend the rules, your posts may get deleted.  Trust me, been there, had posts deleted in the past.
    AC

  • Buffer Size - How low can you go

    I was wondering if you guys could exchange some information about your buffer size settings in Logic and how much mileage you can get out of them.
    I upgraded to the new 8-core 2.8GHz MacPro a few weeks ago and thought I would live in 32-sample buffer size dreamland until the software developers come out with the next CPU-hungry beast. But it looks like a lot of the current software instruments already bring the new Intel Macs to their knees.
    *This is my setup:*
    MacPro 2.8GHz 8-core, 12GB RAM, OSX 10.5.3, Logic 8.02, Fireface 800
    *This is my Problem:*
    If I'm looking at one channel of, e.g., Sculpture, then all 8 cores don't do me any good, because Logic can use only 1 core per channel strip. The additional cores come into play when I'm playing multiple tracks and Logic spreads the CPU workload across those cores. So if I set the buffer size to the minimum of 32 samples, then it comes down to one 2.8GHz core and whether it is powerful enough to process that one software instrument without clicks and interruptions.
    Sculpture:
    Some of the patches are already so demanding that I reach the CPU limit when I play chords of four to eight notes with the 32 Sample Setting. If I add some "heavy" FX Plug-ins like amp modeling then I definitely reach the limit.
    _Trilogy and Atmosphere Wrapper:_
    Most of the time I have to increase the buffer size to 128 just to play a few notes. These "workaround wrapper plugins" are a plain joke from Spectrasonics and almost useless. There is plenty of discussion in various forums about how they pizzzed off a lot of their customers with the way they handled the Intel transition regarding these two great plugins.
    *Audio Interface Considerations:*
    The different vendors of audio interfaces brag about the low latency of their devices. Especially Apogee's Symphony System was supposed to deliver extremely low latency. When they demoed their hardware at various Apple events, they played gazillions of tracks and plug-ins and everything was running at a 32-sample buffer size. I never saw, however, a demo where they loaded gazillions of Sculpture instruments and showed them playing with a 32-sample buffer.
    *Here are my basic three questions:*
    1) Is anybody already experiencing the same limitations with the new MacPros when it comes to intense software instruments?
    2) Does anybody use the 3.2GHz machines with better results, or is the difference just marginal?
    3) Is anybody running the Symphony System and able to throw any software instrument at it with a 32-sample buffer size?
    +BTW, the OSX 10.5.3 update fixed the constant popping up of the "System Overload" window, but regarding the CPU load with Software Instruments, I don't see that much of an improvement.+

    My system is happy at 32 samples with the FF800... This is on the Jan 8 Core with 6 gig RAM, running X.5.3, Logic 8.0.2.
    Plugs include - NI Komplete5 bundle, Waves GOLD, BFD2 (thought this would stump it but it's fine with 2.0.4) Access Virus TI Snow.
    Safety IO buffer off and the Buffer set to small.

  • Program Buffer Size

    Dears,
    We have one demo system of netweaver 2004s on SQL server with Windows 2003 server.
    My RAM is 4 GB and paging memory is 16 GB.
    My extended memory and PHY_MEMSIZE is set to 3 GB.
    Initially my program buffer size was 150 MB. As performance was not good and there were swaps, I changed the program buffer size to 700 MB, and then when I restarted my server it would not come up. I also tried 600 MB with the same issue; then I tried 500 MB and the server started.
    Can I know the reason for this? I have sufficient memory available, so what is the root cause, and which memory does the program buffer depend on?
    Please suggest.
    Shivam

    Dear Sunil,
    Output of 'sappfpar check pf=<instance profile>' is:
    Shared memory resource requirements estimated
    ================================================================
    Total Nr of shared segments required.....:         41
    Shared memory segment size required min..:  319488000 ( 304.7 MB)
    Swap space requirements estimated
    ================================================
    Shared memory....................: 1004.2 MB
    Processes........................:   88.5 MB
    Extended Memory .................:  512.0 MB
    Total, minimum requirement.......: 1604.7 MB
    Process local heaps, worst case..: 1907.3 MB
    Total, worst case requirement....: 3512.0 MB
    Errors detected..................:    0
    Warnings detected................:    4
    At the SAP level, the ipc/shm* parameters are not set.
    Please suggest what I should do.
    Deepak

  • Linux Serial NI-VISA - Can the buffer size be changed from 4096?

    I am communicating with a serial device on Linux, using LV 7.0 and NI-VISA. About a year and a half ago I had asked customer support if it was possible to change the buffer size for serial communication. At that time I was using NI-VISA 3.0. In my program the VISA function for setting the buffer size would send back an error of 1073676424, and the buffer would always remain at 4096, no matter what value was input into the buffer size control. The answer to this problem was that the error code was just a warning, letting you know that you could not change the buffer size on a Linux machine, and 4096 bytes was the pre-set buffer size (unchangeable). According to the person who was helping me: "The reason that it doesn't work on those platforms (Linux, Solaris, Mac OSX) is that is it simply unavailable in the POSIX serial API that VISA uses on these operating systems."
    Now I have upgraded to NI-VISA 3.4 and I am asking the same question. I notice that an error code is no longer sent when I input different values for the buffer size. However, in my program, the bytes returned from the device max out at 4096, no matter what value I input into the buffer size control. So, has VISA changed, and it is now possible to change the buffer size, but I am setting it up wrong? Or, have the error codes changed, but it is still not possible to change the buffer size on a Linux machine with NI-VISA?
    Thanks,
    Sam

    The buffer size still can't be set, but it seems that we are no longer returning the warning. We'll see if we can get the warning back for the next version of VISA.
    Thanks,
    Josh

  • What's the optimum buffer size?

    Hi everyone,
    I'm having trouble with my unzipping method. When I unzip a smaller file, say around 200 KB, it unzips fine. But when it comes to large files, say 10,000 KB, it doesn't unzip at all!
    I'm guessing it has something to do with buffer size... or does it? Could someone please explain what is wrong?
    Here's my code:
    import java.io.*;
    import java.util.zip.*;
    /**
      * Utility class with methods to zip/unzip and gzip/gunzip files.
      */
    public class ZipGzipper {
      public static final int BUF_SIZE = 8192;
      public static final int STATUS_OK          = 0;
      public static final int STATUS_OUT_FAIL    = 1; // No output stream.
      public static final int STATUS_ZIP_FAIL    = 2; // No zipped file
      public static final int STATUS_GZIP_FAIL   = 3; // No gzipped file
      public static final int STATUS_IN_FAIL     = 4; // No input stream.
      public static final int STATUS_UNZIP_FAIL  = 5; // No decompressed zip file
      public static final int STATUS_GUNZIP_FAIL = 6; // No decompressed gzip file
      private static String fMessages [] = {
        "Operation succeeded",
        "Failed to create output stream",
        "Failed to create zipped file",
        "Failed to create gzipped file",
        "Failed to open input stream",
        "Failed to decompress zip file",
        "Failed to decompress gzip file"
      };
      /**
        *  Unzip the files from a zip archive into the given output directory.
        *  It is assumed the archive file ends in ".zip".
        */
      public static int unzipFile (File file_input, File dir_output) {
        // Create a buffered zip stream to the archive file input.
        ZipInputStream zip_in_stream;
        try {
          FileInputStream in = new FileInputStream (file_input);
          BufferedInputStream source = new BufferedInputStream (in);
          zip_in_stream = new ZipInputStream (source);
        }
        catch (IOException e) {
          return STATUS_IN_FAIL;
        }
        // Need a buffer for reading from the input file.
        byte[] input_buffer = new byte[BUF_SIZE];
        int len = 0;
        // Loop through the entries in the ZIP archive and read
        // each compressed file.
        do {
          try {
            // Need to read the ZipEntry for each file in the archive
            ZipEntry zip_entry = zip_in_stream.getNextEntry ();
            if (zip_entry == null) break;
            // Use the ZipEntry name as that of the compressed file.
            File output_file = new File (dir_output, zip_entry.getName ());
            // Create a buffered output stream.
            FileOutputStream out = new FileOutputStream (output_file);
            BufferedOutputStream destination =
              new BufferedOutputStream (out, BUF_SIZE);
            // Reading from the zip input stream will decompress the data
            // which is then written to the output file.
            while ((len = zip_in_stream.read (input_buffer, 0, BUF_SIZE)) != -1)
              destination.write (input_buffer, 0, len);
            destination.flush (); // Ensure all the data is output
            out.close ();
          }
          catch (IOException e) {
            return STATUS_GUNZIP_FAIL;
          }
        } while (true); // Continue reading files from the archive
        try {
          zip_in_stream.close ();
        }
        catch (IOException e) {}
        return STATUS_OK;
      } // unzipFile
    } // ZipGzipper

    Thanks!!!!

    Any more hints on how to fix it? I've been fiddling around with it for an hour... and throwing more exceptions. But I'm still no closer to debugging it!
    Thanks

    Did you add:
    e.printStackTrace();
    to your catch blocks?
    Didn't you in that case get an exception which says something similar to:
    java.io.FileNotFoundException: C:\TEMP\test\com\blah\icon.gif (The system cannot find the path specified)
         at java.io.FileOutputStream.open(Native Method)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
         at Test.unzipFile(Test.java:68)
         at Test.main(Test.java:10)
    Which says that the error is thrown here:
         // Create a buffered output stream.
         FileOutputStream out = new FileOutputStream(output_file);
    Kaj
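For what it's worth, that stack trace usually means the zip entry lives in a subdirectory (e.g. com/blah/icon.gif) that does not yet exist on disk, and FileOutputStream will not create missing parent directories. A hedged sketch of the usual fix (the class and method names are illustrative): create the parents before opening the stream.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class SafeOpen {
    /** Opens an output stream, creating any missing parent directories first. */
    public static FileOutputStream openWithParents(File outputFile) throws IOException {
        File parent = outputFile.getParentFile();
        // mkdirs() creates the whole chain of missing directories at once.
        if (parent != null && !parent.exists() && !parent.mkdirs()) {
            throw new FileNotFoundException("Could not create directory " + parent);
        }
        return new FileOutputStream(outputFile);
    }
}
```

In the unzip loop above, replacing the plain `new FileOutputStream(output_file)` with a call like this should make entries in nested directories extract instead of failing.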

  • Sychronize AO/AI buffered data graph and measure data more than buffer size

    I am trying to measure the response time (around 1ms) of the pressure drop indicated by AI channel 0 when the AO channel 0 gives a negetive single pulse to the unit under test (valve). DAQ board: Keithley KPCI-3108, LabView Version: 6.1, OS system: Win2000 professional.
    My problem is that I am getting differently timed graphs between the AI and AO channels every time I run my program, except the first time, when I get a real-time graph. I tried decreasing the buffer size below the max buffer size of the DAQ board (2048 samples), but I still get a non-real-time graph from the AI channel; it seems it was still reading old data from the buffer when AO wrote the new buffer data - that is my guess. In my program the AO and AI parts are separate: the AO write buffer is in a while loop while the AI read is not. Would that be a problem? Or is it something else?
    Also, I am trying to measure data much larger than the board buffer size limit. Is it possible to make the measurement by modifying the program?
    I really appreciate any of your help. Thank you very much!
    Best,
    Jenna

    Jenna,
    You can modify the X-axis of a chart/graph in LabVIEW to display real-time. I have included a link below to an example program that illustrates how to do this.
    If you are doing a finite, buffered acquisition, make sure that you are always reading everything from the buffer for each run. In other words, if you set a buffer size of 5000, then make sure you are reading 5000 scans (set the number of scans to read to 5000). This will ensure you are reading new data every time you run your program. You could always put the AI Read VI within a loop and read a smaller number from the buffer until the buffer is empty (monitor the scan backlog output of the AI Read VI to see how many scans are left in the buffer).
    You can set a buffer size larger than the FIFO buffer of the hardware. The buffer size you set in LabVIEW is actually a software buffer size within your computer's memory. The data is acquired with the hardware, stored temporarily within the on-board FIFO, transferred to the software buffer, and then read in LabVIEW.
    Are you trying to create a TTL square wave with the analog output of the DAQ device? If so, the DAQ device has counters that can generate a highly accurate digital pulse as well. Just a suggestion. LabVIEW has a variety of shipping examples that are geared toward using counters (find examples>>DAQ>>counters). I hope this helps.
    Real-Time Chart Example
    http://venus.ni.com/stage/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3E95556A4E034080020E74861&p_node=DZ52038&p_submitted=N&p_rank=&p_answer=&p_source=Internal
    Regards,
    Todd D.
    National Instruments
    Applications Engineer

  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but aren't aware if it is new or not.  I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem.  The good news is that we have fixed our problem but I wanted to explain it a little more in case anyone else has a similar problem.
    The specific network stream error -314340 is supposed to occur when you attempt to connect to a network stream endpoint that is already connected to another endpoint, or when the URL points to a different endpoint than the one you are trying to connect to.
    We ran into this issue when attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW Real-Time from an HMI machine; both machines have three NICs and access different networks. We have a class that wraps the network stream VIs, and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between them. The class distinguishes between messaging streams, which carry clusters of control and status information, and data streams, which carry a cluster containing a timestamp and 24 I16s. It was on the data streams that we ran into the issue.
    The symptoms were that if we attempted to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return error -314340, indicating the writer endpoint was pointing to a third location. Leaving the URL on the writer endpoint blank, whether running in real-time interactive mode or as a startup VI, made no difference. However, the writer endpoint would return without error and would eventually catch a remote-endpoint-destroyed error. To make things more interesting, if we specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimentation, we found that the buffer size of the create-writer-endpoint call for the data stream was causing the problem: we had fat-fingered the constants for this buffer size. Pre-allocating the buffer versus allocating it on the fly made no difference. We imagine it may be because we are using a complex data type (a cluster with an array inside it), for which it can be difficult to allocate a buffer. Our guess is that when the reader endpoint establishes a connection to a writer with a large specified buffer size, the writer endpoint ultimately times out somewhere in the handshaking routine hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments
