FMS3 - Read buffer size

Hello,
I'm currently testing my custom file plugin, and I've noticed
that FMS uses rather small
buffer sizes when reading files (256K at most, sometimes only
reading 1 or 2 bytes at a time).
Is there a way to increase the buffer sizes? 1MB reads would
be much more efficient.
I've also noticed that every time a read takes place, the server opens the file,
reads a single chunk, and then promptly closes it, resulting in thousands of
open/read/close cycles for a single session, which is very inefficient.
Is there a reason for that behavior?
Thanks,
Gilad

Of course I don't know why this happens, but the fact that what you expect to be a message length is not random garbage, but rather some English text, tells me that certain things are shifted in your buffers. Maybe the buffers are sometimes built incorrectly, or read incorrectly. The probability of assembling "icy-" out of 4 random bytes is about 1 in 4 billion (256^4 ≈ 4.3 × 10^9), so I would assume this is a piece of text that is wrongly being interpreted as the message length.

Similar Messages

  • HttpServer error in reading buffer size via keyboard input - HELP

    I've written a simple HttpServer program that reads keyboard input to construct a buffer to copy the requested file into the socket's output stream. I've done the string-to-integer conversion using BufferedReader and Integer.parseInt. However, when I go to use the int later in the program, I keep getting the message "variable b may not have been initialized." Can anyone tell me what's missing from the code below? Thanks.
    private static void sendBytes(FileInputStream fis, OutputStream os) throws Exception {
         // Construct a buffer via console input
         BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
         String str;
         int b;
         System.out.println("Enter desired buffer size or CTRL-C to break.");
         // Convert entry to an integer
         do {
              str = br.readLine();
              try {
                   b = Integer.parseInt(str);
              } catch (NumberFormatException e) {
                   System.out.println("Invalid entry.");
              }
         } while ((str = br.readLine()) != null);
         // Construct a buffer
         byte[] buffer = new byte[b];
         int bytes = 0;
         // Begin timing HTML page delivery
         long start, end;
         System.out.println("Timing for Web page delivery");
         start = System.currentTimeMillis();
         // Copy requested file into the socket's output stream
         while ((bytes = fis.read(buffer)) != -1)
              os.write(buffer, 0, bytes);
    }

    As the message suggests, what is missing is code to initialize the variable b. The first mention ("int b;") does not initialize it. The second mention ("b = Integer.parseInt(str);") only initializes it if no exception is thrown. So it's possible for b to be uninitialized when you actually try to use it.
    What do you need to change? First you need to decide what's to be done if the keyboard input isn't a valid integer. Do you have a default value in mind? If so, put that where you declare the variable ("int b = 42;"). If not, just initialize the variable to zero ("int b = 0;").
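    For example, a minimal sketch of the default-value option, keeping the posted prompt and parse logic (the 8192 default and the class name are just illustrative):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class BufferSizePrompt {
        // Reads the desired buffer size from the console, falling back to a
        // default when the input is not a valid integer, so b is assigned on
        // every path and the compiler error goes away.
        static byte[] promptForBuffer() throws Exception {
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            int b = 8192;                    // default value chosen up front
            System.out.println("Enter desired buffer size or CTRL-C to break.");
            String str = br.readLine();
            try {
                b = Integer.parseInt(str);   // only overrides the default on valid input
            } catch (NumberFormatException e) {
                System.out.println("Invalid entry, keeping default of " + b);
            }
            return new byte[b];              // b is definitely initialized here
        }
    }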

  • What is the maximum PDO read buffer size using the Series 2 CAN cards?

    Does anyone know the maximum size for the PDO read buffer when using a Series 2 PCI NI-CAN card?

    Hi
    The maximum size for a single PDO does not depend on the series of your board; it depends on what else you are doing with the CANopen Library.
    The board uses a specific shared memory to transfer messages between driver and hardware. This memory holds nearly 350 messages.
    The CANopen Config takes 100 messages for different services like NMT. That means the maximum size for a single PDO would be approx. 250 messages,
    or, for 5 different PDOs, 50 each. But normally you can leave the buffer size at zero, so the PDO Read will always read the newest data.
    This calculation is per board. That means you have 350 messages per board, and 2 ports would need to share the memory.
    DirkW

  • O/S read block size.

    I need to set a few Oracle parameters based on the Solaris read buffer size.
    Can any experts out there pass on links or documents, and explain how to check this on the system?
    Knowing this would enable me to tune the system and improve its I/O.
    My environment is Solaris 5.10, 64-bit.

    I thought someone would give their valuable input!
    I have read in some documents that it is 64KB.
    Is that right?
    Can this be configured?

  • How to read messages longer than network buffer size

    The logic of my application is:
    the client sends a request to the server and waits, in blocking mode, for its response.
    The server can respond with strings longer than 64KB (the size of its send and receive buffers), so under the hood it may also execute more than one socketChannel.write.
    Nothing in the message says where it finishes; nevertheless the client needs to assemble it all into one big string.
    How can the client deal with this? I'd like to keep it as simple as possible (without using a selector).
    Any thoughts?
    Thanks in advance

    Your above post suggests that it can send more than one packet (even ignoring the 64k limit).
    In that case the data of the message must contain sufficient information. If not, then the solution is not determinate.
    Ideally what you should receive is a message and not just data. The message defines its contents, so you know how long it is and maybe even where it ends.
    Alternatively the data might contain something. For example, if you are receiving well-formatted XML, then you can create a simple parser that just looks for the end tag. If it isn't well formatted, or at least you can't rely on that, then it is much harder.
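    For instance, here is a minimal sketch of the first approach, assuming the protocol can be changed so the server prefixes every response with a 4-byte big-endian length (the framing and class name are illustrative, not part of the original protocol):
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;

    public class LengthPrefixedReader {
        // Reads one length-prefixed message from a blocking SocketChannel.
        public static String readMessage(SocketChannel channel) throws IOException {
            ByteBuffer header = ByteBuffer.allocate(4);
            readFully(channel, header);
            header.flip();
            int length = header.getInt();      // total body size announced by the server
            ByteBuffer body = ByteBuffer.allocate(length);
            readFully(channel, body);          // may span many reads if length > 64KB
            body.flip();
            return StandardCharsets.UTF_8.decode(body).toString();
        }

        // A single read() can return fewer bytes than requested, so loop until full.
        private static void readFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
            while (buffer.hasRemaining()) {
                if (channel.read(buffer) == -1) {
                    throw new IOException("Connection closed before the message was complete");
                }
            }
        }
    }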

  • PLEASE can an AE from NI take a look at my problem. Sound Input Read behaves in a strange manner when the buffer size is larger than 2X the number of samples to read.

    On my computer I have discovered some strange behavior when reading data from the sound card. When the buffer size is 2x the number of samples to read, everything is as expected. But since I read the sound card 10 times per second, I feel a 0.2 second buffer is too small. I am using XP, and XP is not an RTOS, so with a buffer set to 0.2 seconds I may lose data. Therefore I set the buffer size (number of samples/ch on Sound Input Configure.vi) to be in the range of 2 seconds. The result is that a read from Sound Input.vi then often takes more than 0.1 second; on my computer it is often 500 ms. The next 5 reads then follow with almost zero interval. I do not lose data, but on my front panel the graphs look like a very early silent movie. This error was introduced in LabVIEW 8.x. To be honest, I think the LabVIEW 7.x sound system was much better in many ways.
    But before I point any finger at NI, other people have to verify the behavior I experience. I have made an example showing this error. It is a modified version of the "Continuous Sound Input.vi" example. When the "buffer in seconds" control is set to 0.2, everything works OK. Changing this to a larger number will produce the hiccup mentioned above; the larger the number in this control, the larger the hiccup. Is there any way to fix this? My solution up to now has been to use free third-party software (http://www.zeitnitz.de/Christian/index.php?sel=waveio), but I guess it will soon be outdated and may not work with newer Windows versions.
    Any help at all will be appreciated.
    And yes, I have the most updated version of DirectX. I also see this in LabVIEW 2009, of which I have a trial version. The VI I have made is in 8.6.
    Attachments:
    Continuous Sound Input with timing.vi 23 KB

    macaba wrote:
    If you take a moving average of the 0.2s buffer vs. the 3s buffer at an update rate of 10, then they are the same (just under 100ms), so the average refresh rate is the same. I agree it is odd behaviour that the time between sound reads goes to zero quite a lot and then takes a long time once in a while (presumably to fill the buffer).
    I guess it goes to zero because it is reading data from the buffer, so it does not have to wait for data from the sound card. The mysterious thing is the periodic delay. You are also correct in saying that the average timing is correct, and in my application I have no data loss.
    If you search for sound in this forum you will find that many people have reported trouble with the sound system.

  • Default buffer size to read/write LOBs

    What is the default buffer size used by the iFS APIs when reading/writing LOBs?

    What is the default buffer size used by the iFS APIs when
    reading/writing LOBs?
    64K

  • What's the optimum buffer size?

    Hi everyone,
    I'm having trouble with my unzipping method. The thing is, when I unzip a smaller file, say around 200 KB, it unzips fine. But when it comes to large files, say around 10,000 KB, it doesn't unzip at all!
    I'm guessing it has something to do with buffer size... or does it? Could someone please explain what is wrong?
    Here's my code:
    import java.io.*;
    import java.util.zip.*;

    /**
      * Utility class with methods to zip/unzip and gzip/gunzip files.
      */
    public class ZipGzipper {
      public static final int BUF_SIZE = 8192;
      public static final int STATUS_OK          = 0;
      public static final int STATUS_OUT_FAIL    = 1; // No output stream.
      public static final int STATUS_ZIP_FAIL    = 2; // No zipped file
      public static final int STATUS_GZIP_FAIL   = 3; // No gzipped file
      public static final int STATUS_IN_FAIL     = 4; // No input stream.
      public static final int STATUS_UNZIP_FAIL  = 5; // No decompressed zip file
      public static final int STATUS_GUNZIP_FAIL = 6; // No decompressed gzip file
      private static String fMessages [] = {
        "Operation succeeded",
        "Failed to create output stream",
        "Failed to create zipped file",
        "Failed to create gzipped file",
        "Failed to open input stream",
        "Failed to decompress zip file",
        "Failed to decompress gzip file"
      };
      /**
        *  Unzip the files from a zip archive into the given output directory.
        *  It is assumed the archive file ends in ".zip".
        */
      public static int unzipFile (File file_input, File dir_output) {
        // Create a buffered zip stream to the archive file input.
        ZipInputStream zip_in_stream;
        try {
          FileInputStream in = new FileInputStream (file_input);
          BufferedInputStream source = new BufferedInputStream (in);
          zip_in_stream = new ZipInputStream (source);
        }
        catch (IOException e) {
          return STATUS_IN_FAIL;
        }
        // Need a buffer for reading from the input file.
        byte[] input_buffer = new byte[BUF_SIZE];
        int len = 0;
        // Loop through the entries in the ZIP archive and read
        // each compressed file.
        do {
          try {
            // Need to read the ZipEntry for each file in the archive
            ZipEntry zip_entry = zip_in_stream.getNextEntry ();
            if (zip_entry == null) break;
            // Use the ZipEntry name as that of the compressed file.
            File output_file = new File (dir_output, zip_entry.getName ());
            // Create a buffered output stream.
            FileOutputStream out = new FileOutputStream (output_file);
            BufferedOutputStream destination =
              new BufferedOutputStream (out, BUF_SIZE);
            // Reading from the zip input stream will decompress the data
            // which is then written to the output file.
            while ((len = zip_in_stream.read (input_buffer, 0, BUF_SIZE)) != -1)
              destination.write (input_buffer, 0, len);
            destination.flush (); // Insure all the data is output
            out.close ();
          }
          catch (IOException e) {
            return STATUS_GUNZIP_FAIL;
          }
        } while (true); // Continue reading files from the archive
        try {
          zip_in_stream.close ();
        }
        catch (IOException e) {}
        return STATUS_OK;
      } // unzipFile
    } // ZipGzipper
    Thanks!!!!

    Any more hints on how to fix it? I've been fiddling around with it for an hour..... and throwing more exceptions. But I'm still no closer to debugging it!
    Thanks

    Did you add:
    e.printStackTrace();
    to your catch blocks?
    Didn't you in that case get an exception which says something similar to:
    java.io.FileNotFoundException: C:\TEMP\test\com\blah\icon.gif (The system cannot find the path specified)
         at java.io.FileOutputStream.open(Native Method)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
         at Test.unzipFile(Test.java:68)
         at Test.main(Test.java:10)
    Which says that the error is thrown here:
         // Create a buffered output stream.
         FileOutputStream out = new FileOutputStream(output_file);
    Kaj
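    In other words, the entry's sub-directory does not exist on disk yet when the output stream is opened. A minimal sketch of how the loop body could guard against that, reusing the variable names from the posted code (the directory-entry check is an addition, not part of the original):
    // Use the ZipEntry name as that of the decompressed file.
    File output_file = new File(dir_output, zip_entry.getName());

    // Some entries are directories, and file entries may live in
    // sub-directories (com/blah/...) that do not exist on disk yet.
    if (zip_entry.isDirectory()) {
        output_file.mkdirs();
        continue;                          // nothing to write for a directory entry
    }
    File parent = output_file.getParentFile();
    if (parent != null && !parent.exists()) {
        parent.mkdirs();                   // create the path before opening the stream
    }
    FileOutputStream out = new FileOutputStream(output_file);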

  • Synchronize AO/AI buffered data graph and measure data larger than buffer size

    I am trying to measure the response time (around 1ms) of the pressure drop indicated by AI channel 0 when AO channel 0 gives a negative single pulse to the unit under test (a valve). DAQ board: Keithley KPCI-3108, LabVIEW version: 6.1, OS: Win2000 Professional.
    My problem is that I am getting a differently timed graph between the AI and AO channels every time I run my program, except the first time, when I do get a real-time graph. I tried to decrease the buffer size below the max buffer size of the DAQ board (2048 samples), but the AI channel still does not plot in real time; it seems it is still reading old data from the buffer when AO writes the new buffer data, that is my guess. In my program, the AO and AI parts are separate: the AO Write buffer is in a while loop while the AI Read is not. Would that be a problem? Or is it something else?
    Also, I am trying to measure data much larger than the board's buffer size limit. Is it possible to make the measurement by modifying the program?
    I really appreciate any of your help. Thank you very much!
    Best,
    Jenna

    Jenna,
    You can modify the X-axis of a chart/graph in LabVIEW to display real-time. I have included a link below to an example program that illustrates how to do this.
    If you are doing a finite, buffered acquisition, make sure that you are always reading everything from the buffer for each run. In other words, if you set a buffer size of 5000, then make sure you are reading 5000 scans (set number of scans to read to 5000). This will ensure you are reading new data every time you run your program. You could always put the AI Read VI within a loop and read a smaller number from the buffer until the buffer is empty (monitor the scan backlog output of the AI Read VI to see how many scans are left in the buffer).
    You can set a buffer size larger than the FIFO
    buffer of the hardware. The buffer size you set in LabVIEW is actually a software buffer size within your computer's memory. The data is acquired with the hardware, stored temporarily within the on-board FIFO, transferred to the software buffer, and then read in LabVIEW.
    Are you trying to create a TTL square wave with the analog output of the DAQ device? If so, the DAQ device has counters that can generate a highly accurate digital pulse as well. Just a suggestion. LabVIEW has a variety of shipping examples that are geared toward using counters (find examples>>DAQ>>counters). I hope this helps.
    Real-Time Chart Example
    http://venus.ni.com/stage/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3E95556A4E034080020E74861&p_node=DZ52038&p_submitted=N&p_rank=&p_answer=&p_source=Internal
    Regards,
    Todd D.
    National Instruments
    Applications Engineer

  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but aren't aware if it is new or not.  I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem.  The good news is that we have fixed our problem but I wanted to explain it a little more in case anyone else has a similar problem.
    The specific network stream error -314340 should seemingly occur if you are attempting to connect to a network stream endpoint that is already connected to another endpoint or in which the URL points to a different endpoint than the one trying to connect. 
    We ran into this issue on attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW real-time from an HMI machine, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between these machines.  The class can distinguish between messaging streams that handle clusters of control and status information and also data streams that contain a cluster with a timestamp and 24 I16s.  It was on the data network streams that we ran into the issue. 
    The symptom of the problem was that if we attempted to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return with an error of -314340, indicating the writer endpoint was pointing to a third location. Leaving the URL on the writer endpoint blank and running in real-time interactive mode or as a startup VI made no difference. However, the writer endpoint would return without error and eventually catch a remote endpoint destroyed. To make things more interesting, if you specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimenting with it, we found that the buffer size of the create writer endpoint for the data stream was causing the problem, and that we had fat-fingered the constants for this buffer size. Also, pre-allocating or allocating the buffer on the fly made no difference. We imagine that it may be due to the fact that we are using a complex data type (a cluster with an array inside of it) and it can be difficult to allocate a buffer for this data type. We guess that the issue may be that when the reader endpoint establishes the connection to a writer with a large buffer size specified, the writer endpoint ultimately times out somewhere in the handshaking routine that is hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments

  • Where can I change the buffer size for LKM File to Oracle (EXTERNAL TABLE)?

    Hi all,
    I had a problem with the buffer size in the "LKM File to Oracle (EXTERNAL TABLE)", as follows:
    2801 : 72000 : java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-29400: data cartridge error
    KUP-04020: found record longer than buffer size supported, 524288, in D:\OraHome_1\oracledi\demo\file\PARTIAL_SHPT_FIXED_NHF.dat
    Do you know where can I change the buffer size?
    Remarks: The size of the file is ~2Mb.
    Tao

    Hi,
    The behavior is explained in Bug 4304609 .
    You will encounter ORA-29400 & KUP-04020 errors if the RECORDSIZE clause in the access parameters for the ORACLE_LOADER access driver is larger than 10MB and you are loading records larger than 10MB. This means there is another limitation on the read size of a record, which is termed the granule size. If the default granule size is less than RECORDSIZE, it limits the size of the read buffer to the granule size.
    Use the _px_xtgranule_size parameter to change the size of the granule to a number larger than the size specified for the read buffer. You can use the query below to determine the current size of the granule.
    SELECT KSPFTCTXPN PARAMETER_NUMBER,
    KSPPINM PARAMETER_NAME,
    KSPPITY PARAMETER_TYPE,
    KSPFTCTXVL PARAMETER_VALUE,
    KSPFTCTXDF IS_DEFAULT,
    KSPPIFLG MODIFICATION_FLAG,
    KSPFTCTXVF VALUE_FLAG
    FROM X$KSPPI X, X$KSPPCV2 Y
    WHERE (X.INDX+1) = KSPFTCTXPN AND
    KSPPINM LIKE '%_px_xtgranule_size%';
    There is no 'ideal' or recommended value for the _px_xtgranule_size parameter; it is safe to increase it to work around this particular problem. You can set this parameter using an ALTER SESSION/SYSTEM command.
    SQL> alter system set "_px_xtgranule_size"=10000;
    Thanks,
    Sutirtha

  • How do you determine ip and op buffer size on a 3550-12G

    I have a Cisco 3550-12G switch and I want to check whether the input buffers and the output buffers for port gi0/12 are the same size. Is there a simple way to do this? I tried using the show buffers command but I couldn't seem to find what I was looking for. Help!

    Hi,
    "The 3550 switch uses central buffering. This means that there are no fixed buffer sizes per port. However, there is a fixed number of packets on a Gigabit port that can be queued. This fixed number is 4096. By default, each queue in a Gigabit port can have up to 1024 packets, regardless of the packet size."
    http://www.cisco.com/warp/public/473/187.html#topic7
    HTH,
    Bobby
    *Please rate helpful posts.

  • DBMS_LOB.WRITEAPPEND Max buffer size exceeded

    Hello,
    I'm following this guide to create an index using Oracle Text:
    http://download.oracle.com/docs/cd/B19306_01/text.102/b14218/cdatadic.htm#i1006810
    So I wrote something like this:
    CREATE OR REPLACE PROCEDURE CREATE_INDEX(rid IN ROWID, tlob IN OUT NOCOPY CLOB)
    IS
    BEGIN
         DBMS_LOB.CREATETEMPORARY(tlob, TRUE);
         FOR c1 IN (SELECT ID_DOCUMENT FROM DOCUMENT WHERE rowid = rid)
         LOOP
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT_TITLE>'), '<DOCUMENT_TITLE>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c1.TITLE, ' ')), NVL(c1.TITLE, ' '));
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT_TITLE>'), '</DOCUMENT_TITLE>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
              FOR c2 IN (SELECT TITRE,TEXTE FROM PAGE WHERE ID_DOCUMENT = c1.ID_DOCUMENT)
              LOOP
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE>'), '<PAGE>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c2.TEXTE, ' ')), NVL(c2.TEXTE, ' '));
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE>'), '</PAGE>');
              END LOOP;
         END LOOP;
    END;
    The issue is that some page texts are bigger than 32767 bytes! So I get an INVALID_ARGVAL...
    I can't figure out how I can increase this buffer size, or how else to manage this issue.
    Can you please help me :)
    Thank you,
    Ben

    Hi Ben,
    I'm afraid that doesn't help much, since you have obviously rewritten your procedure based on the advice given here.
    Could you please post your new procedure, as formatted SQL*Plus output, embedded in {noformat}{noformat} tags, like this:
    SQL> CREATE OR REPLACE PROCEDURE create_index(rid IN ROWID,
    2 IS
    3 BEGIN
    4 dbms_lob.createtemporary(tlob, TRUE);
    5
    6 FOR c1 IN (SELECT id_document
    7 FROM document
    8 WHERE ROWID = rid)
    9 LOOP
    10 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
    11 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT_TITLE>')
    12 ,'<DOCUMENT_TITLE>');
    13 dbms_lob.writeappend(tlob, LENGTH(nvl(c1.title, ' '))
    14 ,nvl(c1.title, ' '));
    15 dbms_lob.writeappend(tlob
    16 ,LENGTH('</DOCUMENT_TITLE>')
    17 ,'</DOCUMENT_TITLE>');
    18 dbms_lob.writeappend(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
    19
    20 FOR c2 IN (SELECT titre, texte
    21 FROM page
    22 WHERE id_document = c1.id_document)
    23 LOOP
    24 dbms_lob.writeappend(tlob, LENGTH('<PAGE>'), '<PAGE>');
    25 dbms_lob.writeappend(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
    26 dbms_lob.writeappend(tlob
    27 ,LENGTH(nvl(c2.texte, ' '))
    28 ,nvl(c2.texte, ' '));
    29 dbms_lob.writeappend(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>')
    30 dbms_lob.writeappend(tlob, LENGTH('</PAGE>'), '</PAGE>');
    31 END LOOP;
    32 END LOOP;
    33 END;
    34 /
    Warning: Procedure created with compilation errors.
    SQL>
    SQL> DECLARE
    2 rid ROWID;
    3 tlob CLOB;
    4 BEGIN
    5 rid := 'AAAy1wAAbAAANwsABZ';
    6 tlob := NULL;
    7 create_index(rid => rid, tlob => tlob);
    8 dbms_output.put_line('TLOB = ' || tlob); -- Not sure, you can do this?
    9 END;
    10 /
    create_index(rid => rid, tlob => tlob);
    ERROR at line 7:
    ORA-06550: line 7, column 4:
    PLS-00905: object BRUGER.CREATE_INDEX is invalid
    ORA-06550: line 7, column 4:
    PL/SQL: Statement ignored
    SQL>

  • How to set optimum buffer size when burning CDs?

    There is a drop-down menu in the CD burning dialogue box with a choice of six buffer sizes.
    How does one determine the optimum size? Is this size related to the burning speed?
    Please help.
    Thanks,
    cobaltgreen

    Having trouble with DVD players not liking burned DVDs.
    Will track down Tom Woskly and see if he has ideas...

  • How to set local file copy buffer size?

    Is there any sysctl parameter or any other mechanism to set or change file copy buffer sizes? I'm backing up a huge number of files to a local hard drive connected by firewire, and I'd like to play with file copy buffer sizes for the best performance. The machine used is a new macbook pro running OS-X 10.6.7. Any ideas?

    Here is a bash script that uses the dd command to copy a file. Use the -b option to set the block size.
    example:
    /Users/mac/config/forumcopy.command -vb  4096  /Applications\ \(Mac\ OS\ 9\)/Civilization\ II/Civ\ II\ Map\ Editor  /Applications\ \(Mac\ OS\ 9\)/Civilization\ II/Civ\ II\ Map\ Editorv8
    I haven't tested this a lot. 
    Of course, I haven't figured out how best to post code.  Trying HTML mode. Using the <pre> tag.
    #!/bin/bash
    # macteracopy [ -b blocksize ] [-v ] [-V ] input_file output_file
    # Purpose of this script:
    # Copy a file with optional block size. Default size of 4096.
    # Notes:
    # chmod u+x macteracopy
    # You may have to restart the finder to notice a customized file icon.
    #   Copyright 2010 rccharles
    #   GNU General Public License
    #   This program is free software: you can redistribute it and/or modify
    #   it under the terms of the GNU General Public License as published by
    #   the Free Software Foundation,  version 3
    #   This program is distributed in the hope that it will be useful,
    #   but WITHOUT ANY WARRANTY; without even the implied warranty of
    #   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    #   GNU General Public License for more details.
    #   For a copy of the GNU General Public License see
    #   <http://www.gnu.org/licenses/>.
    # debug info
    export PS4='+(${BASH_SOURCE}:${LINENO}):'
    ## not in the tiger version of bash ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
    declare -x   verbose="No"   \
             veryVerbose="No"
    blockSize=4096
    # Check for command-line options used when calling the script
    if [ $# -gt 0 ] ; then
       while getopts "b:vV" Option ; do
          case "${Option}" in
      b  ) blockSize=$OPTARG
           ;;
      v  ) verbose="Yes"
           ;;
      V  ) veryVerbose="Yes"
           ;;
      \? ) echo 'usage macteracopy -b blocksize -V input_file output_file'
           ;;
      *  ) echo "Unknown argument among arguments $* on command line."
           exit 6
           ;;
      esac
       done
    fi
    # We're done with switches / options from the command line
    shift $(($OPTIND - 1))
    [ "${veryVerbose}" = "Yes" ] \
       && set -x  
    inputFile="${1}"
    outputFile="${2}"
    [ "${verbose}" = "Yes" ] \
       && echo \
       && echo "$0 script revised $(GetFileInfo -m $0)" \
       && echo
    [ ! -f "${inputFile}" ] && echo "File not found.  ${inputFile}" && exit 4
    [ -d "${inputFile}" ] \
       && echo "Directories are not supported, yet.  ${inputFile}" \
       && exit 4
    [ "${veryVerbose}" = "Yes" ] \
       && ulimit -a \
       && df \
       && echo
    if [ "${verbose}" = "Yes" ] ; then
       dd bs=${blockSize} if="${inputFile}" of="${outputFile}"
         dd bs=${blockSize} if="${inputFile}/rsrc" of="${outputFile}/rsrc" 
    else
          dd bs=${blockSize} if="${inputFile}" of="${outputFile}"
          dd bs=${blockSize} if="${inputFile}/rsrc" of="${outputFile}/rsrc"
        } 2>/dev/null
    fi
    [ "${verbose}" = "Yes" ] \
       && echo \
       && ls -l "${inputFile}" \
       && ls -l "${inputFile}"/rsrc \
       && GetFileInfo  "${inputFile}" \
       && ls -l "${outputFile}" \
       && ls -l "${outputFile}"/rsrc \
       && GetFileInfo  "${outputFile}" \
       && echo
    SetFile  -a $(GetFileInfo  -a "${inputFile}"  ) \
           -c $(GetFileInfo  -c "${inputFile}" | sed 's/"//g' ) \
           -t $(GetFileInfo  -t "${inputFile}" | sed 's/"//g' ) \
             "${outputFile}"
    [ "${verbose}" = "Yes" ] \
       && echo "after SetFile" \
       && GetFileInfo  "${outputFile}"
