JTextArea displaying large files

Simple, much talked-about question.
I searched for an answer, but was not satisfied.
I am trying to display large files in a JTextArea, say 20-30MB each.
What is the fastest way to read them, and then to write them into the JTextArea?
Currently I am using a BufferedReader and readLine(), appending about 3000 lines to a StringBuffer, and then appending to the text area every 300 lines until EOF.
It takes about 60-70 seconds to load such a file on a 1GHz, 512MB Windows machine.
That is a long time, especially because requirements have us on some 8-year-old Sun boxes.
So the question is: what is the fastest way to read and display?
I thought about overriding the text area's document for random access, but would hope for a simpler answer since this requirement is not high on the users' list.
I'm hoping someone can just post an example, having solved this problem a long time ago.

I have done it!
The only problem with using a Reader is that you load everything into memory... so if you have a file larger than your -Xmx you are screwed.
I used a combination of a JTextArea, a JScrollBar and a RandomAccessFile. First I disabled the vertical scrollbar of the JTextArea. I then placed my own JScrollBar next to it so it looks like it belongs to the JTextArea. My JScrollBar's min and max values are 0 and RandomAccessFile.length(), thus making the scrollbar byte-based. I then set the scroll increment to the average number of bytes per line, and finally added a listener to the scrollbar that listens for changes.

Essentially I only read a small portion of the file, and when you scroll, it reads the next part, rounding to the nearest end-of-line. When you scroll backwards, it reads backwards, unlike a Reader. There are many things that need to be accounted for, since it is byte-based rather than line-based, or if the file size changes. For my needs it does exactly what I need: it reads large files quickly. Oh, I also disabled word wrap. I'm sure you could think of some way to incorporate this idea into your needs. A rough sketch of the idea follows.
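Here is a minimal sketch of that idea (the class name LargeFileViewer, the 16 KB window, and the 80-bytes-per-line guess are mine, not from the post above; a real version would also need to handle files longer than Integer.MAX_VALUE bytes, character encodings, and window resizing):

import java.awt.BorderLayout;
import java.io.IOException;
import java.io.RandomAccessFile;
import javax.swing.*;

// Sketch only: a byte-based viewer that maps an external JScrollBar to offsets
// in a RandomAccessFile and only ever reads a small window of the file.
public class LargeFileViewer extends JPanel {
    private static final int WINDOW_BYTES = 16 * 1024;   // how much text to show at once
    private final RandomAccessFile raf;
    private final JTextArea area = new JTextArea();
    private final JScrollBar bar = new JScrollBar(JScrollBar.VERTICAL);

    public LargeFileViewer(String path) throws IOException {
        super(new BorderLayout());
        raf = new RandomAccessFile(path, "r");
        area.setEditable(false);
        area.setLineWrap(false);                          // word wrap disabled, as described above
        JScrollPane pane = new JScrollPane(area,
                JScrollPane.VERTICAL_SCROLLBAR_NEVER,     // hide the text area's own scrollbar
                JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED);
        bar.setMinimum(0);
        bar.setMaximum((int) Math.min(Integer.MAX_VALUE, raf.length())); // scrollbar values are ints
        bar.setUnitIncrement(80);                         // rough average bytes per line (a guess)
        bar.setBlockIncrement(WINDOW_BYTES);
        bar.addAdjustmentListener(e -> showWindow(e.getValue()));
        add(pane, BorderLayout.CENTER);
        add(bar, BorderLayout.EAST);
        showWindow(0);
    }

    // Seek to 'offset', skip the partial line so we start on a line boundary,
    // then read and display one window of bytes.
    private void showWindow(long offset) {
        try {
            raf.seek(offset);
            if (offset > 0) raf.readLine();
            byte[] buf = new byte[WINDOW_BYTES];
            int n = raf.read(buf);
            area.setText(n > 0 ? new String(buf, 0, n) : "");
            area.setCaretPosition(0);
        } catch (IOException ex) {
            area.setText("Read error: " + ex.getMessage());
        }
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            try {
                JFrame f = new JFrame("Large file viewer");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.add(new LargeFileViewer(args[0]));
                f.setSize(800, 600);
                f.setVisible(true);
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        });
    }
}

The key point is the one made above: the scrollbar's range is the file length in bytes, and each adjustment triggers a small seek-and-read instead of loading the whole file.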

Similar Messages

  • Need your suggestions - how to display large file in ASCII and HEX

    Hello,
    I want to create an application which can read in a large file and switch between displaying ASCII and HEX (formatted a particular way). There are two problems here that I'm not quite sure how to solve.
    1. How to switch dynamically between ASCII and HEX. Should the HEX formatter be in the document of the JTextArea (or equivalent), or somewhere in the view? If it's in the view then where? I'd rather not read in the file more than once.
    2. How to do some kind of paging scheme for huge files. I'd like to read in part of the file and display it, then when the user scrolls to another area, read in that part of the file.
    Thanks!
    Jeff

    > Hello,
    > I want to create an application which can read in a large file and switch between displaying ASCII and HEX (formatted a particular way). There are two problems here that I'm not quite sure how to solve.
    > 1. How to switch dynamically between ASCII and HEX. Should the HEX formatter be in the document of the JTextArea (or equivalent), or somewhere in the view? If it's in the view then where? I'd rather not read in the file more than once.
    You can iterate over all the characters in the String using String.charAt, cast the chars to ints, and call Integer.toHexString(...); see the sketch after this reply.
    > 2. How to do some kind of paging scheme for huge files. I'd like to read in part of the file and display it, then when the user scrolls to another area, read in that part of the file.
    > Thanks!
    > Jeff
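    A throwaway helper along those lines (the name hexDump and the 16-values-per-line layout are just illustrative, not from this thread):

    // Illustrative only: render a chunk of text as a simple hex view using
    // Integer.toHexString, as suggested above. Assumes single-byte characters.
    static String hexDump(String chunk) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < chunk.length(); i++) {
            int value = chunk.charAt(i) & 0xFF;        // cast the char to an int, keep 0-255
            String hex = Integer.toHexString(value);
            if (hex.length() < 2) sb.append('0');      // pad to two digits
            sb.append(hex).append(' ');
            if ((i + 1) % 16 == 0) sb.append('\n');    // 16 values per line
        }
        return sb.toString();
    }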

  • How to display a large file in JTextArea

    I am displaying a large (multi-MB) file in a JTextArea and it is throwing java.lang.OutOfMemoryError.
    Are there any solutions for this problem, or is there a better way than a JTextArea?

    Thanks for replying, but that is all about tables.
    I am new to Java.
    I asked how to display a large text file in a JTextArea. Thanks.
    --santosh

  • Can't display PDF file larger than 2Mb

    Would like to ask if anyone knows of a setting that would allow a PDF file larger than 2MB to be displayed within the IE 7 web browser. For some strange reason, the PDF download stops at around 800-some KB, and I waited almost 15 minutes with no further action.
    Background info: MS Vista Home Premium SP1, 2GB RAM, over 60GB available HDD space, IE temp file size set to 100MB.
    Any help appreciated
    -David

    Mike, I have a similar problem with downloading PDF files larger than 3MB. I have three computers with Win XP, IE7 and a 56K connection. I have tried to download three different PDF files from the same source: 1.3MB, 2.3MB and 5.3MB. The computer with Reader 7.0 downloads and opens all files. The computers with Reader 8.1.2 and 9.0 download and open the smaller files but stop at 3.2MB on the larger file with the error message "The file is damaged and could not be repaired".
    As fixes I have tried turning off any accelerators, installing and using Mozilla Firefox (then removing it since I had the same problem), increasing the cache size from 250MB to 1GB, and saving the file to the desktop rather than opening it. I have not found useful help in the Adobe Reader knowledge base.
    Any suggestions?
    Carl

  • Large files not displaying clearly

    I have just stitched some photos and the resulting files do not display clearly outside of CS4. They were large files, up to 800MB. I have since resized them and put them on the web, and found that the files resized from the larger originals do not give good resolution on screen. I have tried saving them at up to 150 ppi instead of 75 but still get the same result. The original large file has incredible resolution when zooming in with CS4, but looks terrible when I resize it and put it on the web. For an example, check out the gallery: http://gallery.me.com/suethomo#100207&bgcolor=white
    The stitched ones are Thomo_1, Thomo_2, Thomo_3 and Thomo_4. The sharpest ones are taken with a point and shoot! The stitched pics are from a Canon 5D Mk II and processed from RAW. I have sharpened them in CS4 and they look good until I view them in Bridge or elsewhere. Not sure where I am going wrong.
    Any suggestions welcome

    A couple of things come to mind...
    I noticed that your web gallery scales the images according to the viewer's browser window size.  Shrink the browser window and the image shrinks.  Enlarge the browser window and the image enlarges.  I would suspect that scaling the image beyond the actual image size would cause pixelation and possible aliasing.
    I would also check that you are sizing your images properly for online viewing.  The resolution should likely be 72 ppi at a suitable viewing size.  I would also ensure that you are using 'Bicubic Sharper' when scaling images down in PS.

  • Testing an iMac in store I checked iPhoto editing. A photo said to be 42MB showed immediate pixellation when zooming, so it clearly was not displaying a 42MB file. When I zoom in PS on my PC, a large file retains detail through a lot of zooming. What's going on?

    While testing a MacBook Pro in the Apple Store, I checked iPhoto. Looking at what was said to be a 42MB file, I noticed that zooming caused immediate pixellation, so clearly a 42MB RAW file was not what I was looking at. In PS Elements on my PC, when I zoom the photo remains unpixellated until quite a lot of zoom is used (marked as 100%); this depends on the file size of the photo, of course. Maybe the displayed photo in iPhoto is a low-res JPEG. This is not good for me: I wanted to see how the base model MacBook would handle photos, especially large files, and I need to zoom in. Will the base model cope with this in PSE for Mac?

    iPhoto will happily zoom to 300%. The quality at that magnification will depend on the quality of the image.  But if it's a sharp image - of any size - it will zoom to 300% with no problems.
    PSE for Mac has nothing to do with iPhoto. I have used it with no problems, but for best information why not ask over at the Adobe forums?

  • Need help using JTextArea to read large file

    Hi here is the deal.
    I've got a large file (about 12MB) of raw data (all of the numbers are basically doubles or ints) which was written with ObjectOutputStream.writeInt/writeDouble (I say this to make clear the file has no ASCII whatsoever).
    Now I do the file reading on a SwingWorker thread, where I read the info from the file in the same order I originally wrote it.
    I need to convert it to strings and display it in a JTextArea. It starts working; however, at one point (56%, to be exact, since I know exactly how many values I need to read) it stops. The program doesn't freeze (probably because it is only the worker thread that froze), and I get no exceptions (even though I'm catching them) and no errors.
    Does anyone have any idea of what the problem could be?
    Thank you very much in advance.
    PS: I don't know if it matters, but I'm using ObjectInputStream with the readInt/readDouble functions to get the values and then turning them into strings and adding them to the JTextArea.
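    A rough sketch of the pattern being described, assuming purely for illustration that the file is a run of int/double pairs (the class name and the 8 KB batching are mine); publishing in batches and appending only in process() keeps the EDT responsive:

    import java.io.*;
    import java.util.List;
    import javax.swing.JTextArea;
    import javax.swing.SwingWorker;

    // Illustration only: assumes the file is a sequence of int/double pairs written
    // with ObjectOutputStream.writeInt/writeDouble.
    class FileDumpWorker extends SwingWorker<Void, String> {
        private final File file;
        private final JTextArea area;

        FileDumpWorker(File file, JTextArea area) {
            this.file = file;
            this.area = area;
        }

        @Override
        protected Void doInBackground() throws Exception {
            StringBuilder batch = new StringBuilder();
            try (ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream(file)))) {
                while (true) {
                    int id = in.readInt();                 // assumed layout: int then double
                    double value = in.readDouble();
                    batch.append(id).append('\t').append(value).append('\n');
                    if (batch.length() > 8192) {           // publish in chunks, not per value
                        publish(batch.toString());
                        batch.setLength(0);
                    }
                }
            } catch (EOFException endOfStream) {
                // normal end of the data
            }
            if (batch.length() > 0) publish(batch.toString());  // flush the last partial batch
            return null;
        }

        @Override
        protected void process(List<String> chunks) {
            for (String s : chunks) area.append(s);        // runs on the EDT
        }
    }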

    I can put up the code.
    I don't have it with me right now but I'll do it later.
    Thank you.
    Second, I need to debug a function approximation that uses a method that has to manage that many numbers. If I don't put them into a text file and read it, there is no way I will know where the problems are, if any. And yes, I can look at the text file and figure out the problems; it's not that hard.
    What I'll try to do is write directly to a regular text file instead of to the JTextArea.
    Thank you for your help, and I'll post back with the code and results.
    PS: I don't know what profiling is; would you mind telling me?

  • Again: display large XML file?

    Dear all,
    Any ideas on displaying a large XML file in a JTree? This has been asked before, but still does not have a solution yet.
    I am looking for your kind help. Please show me some code, if you can.
    Many thanks!

    Any (close) examples for it? i've not got any - sorry
    btw, how large is large? does this mean you can't afford to store the whole XML in memory at the same time, or simply that the gui is unresponsive/resource hungry?
    asjf
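    If the document does fit in memory, one possible starting point (only a sketch; the class name and element-to-node mapping are mine, not from this thread) is to parse it with the standard DOM API and mirror the elements into DefaultMutableTreeNodes; a truly huge file would instead need a lazy, SAX/StAX-backed TreeModel:

    import java.io.File;
    import javax.swing.JTree;
    import javax.swing.tree.DefaultMutableTreeNode;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    // Sketch: build a JTree that mirrors the element structure of an XML file.
    public class XmlTreeBuilder {
        public static JTree buildTree(File xmlFile) throws Exception {
            Element root = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(xmlFile)
                    .getDocumentElement();
            return new JTree(toTreeNode(root));
        }

        private static DefaultMutableTreeNode toTreeNode(Element element) {
            DefaultMutableTreeNode node = new DefaultMutableTreeNode(element.getTagName());
            NodeList children = element.getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                Node child = children.item(i);
                if (child.getNodeType() == Node.ELEMENT_NODE) {
                    node.add(toTreeNode((Element) child));       // recurse into child elements
                } else if (child.getNodeType() == Node.TEXT_NODE
                        && child.getNodeValue().trim().length() > 0) {
                    node.add(new DefaultMutableTreeNode(child.getNodeValue().trim()));
                }
            }
            return node;
        }
    }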

  • Windows Explorer misreads large-file .zip archives

       I just spent about 90 minutes trying to report this problem through
    the normal support channels with no useful result, so, in desperation,
    I'm trying here, in the hope that someone can direct this report to some
    useful place.
       There appears to be a bug in the .zip archive reader used by Windows
    Explorer in Windows 7 (and up, most likely).
       An Info-ZIP Zip user recently reported a problem with an archive
    created using our Zip program.  The archive was valid, but it contained
    a file which was larger than 4GiB.  The complaint was that Windows
    Explorer displayed (and, apparently believed) an absurdly large size
    value for this large-file archive member.  We have since reproduced the
    problem.
       The original .zip archive format includes uncompressed and compressed
    sizes for archive members (files), and these sizes were stored in 32-bit
    fields.  This caused problems for files which are larger than 4GiB (or,
    on some system types, where signed size values were used, 2GiB).  The
    solution to this fundamental limitation was to extend the .zip archive
    format to allow storage of 64-bit member sizes, when necessary.  (PKWARE
    identifies this format extension as "Zip64".)
       The .zip archive format includes a mechanism, the "Extra Field", for
    storing various kinds of metadata which had no place in the normal
    archive file headers.  Examples include OS-specific file-attribute data,
    such as Finder info and extended attributes for Apple Macintosh; record
    format, record size, and record type data for VMS/OpenVMS; universal
    file times and/or UID/GID for UNIX(-like) systems; and so on.  The Extra
    Field is where the 64-bit member sizes are stored, when the fixed 32-bit
    size fields are too small.
       An Extra Field has a structure which allows multiple types of extra
    data to be included.  It comprises one or more "Extra Blocks", each of
    which has the following structure:
           Size (bytes) | Description
          --------------+------------
                2       | Type code
                2       | Number of data bytes to follow
            (variable)  | Extra block data
       The problem with the .zip archive reader used by Windows Explorer is
    that it appears to expect the Extra Block which includes the 64-bit
    member sizes (type code = 0x0001) to be the first (or only) Extra Block
    in the Extra Field.  If some other Extra Block appears at the start of
    the Extra Field, then its (non-size) data are being incorrectly
    interpreted as the 64-bit sizes, while the actual 64-bit size data,
    further along in the Extra Field, are ignored.
       Perhaps the .zip archive _writer_ used by Windows Explorer always
    places the Extra Block with the 64-bit sizes in this special location,
    but the .zip specification does not demand any particular order or
    placement of Extra Blocks in the Extra Field, and other programs
    (Info-ZIP Zip, for example) should not be expected to abide by this
    artificial restriction.  For details, see section "4.5 Extensible data
    fields" in the PKWARE APPNOTE:
          http://www.pkware.com/documents/casestudies/APPNOTE.TXT
       A .zip archive reader is expected to consider the Extra Block type
    codes, and interpret accordingly the data which follow.  In particular,
    it's not sufficient to trust that any particular Extra Block will be the
    first one in the Extra Field.  It's generally safe to ignore any Extra
    Block whose type code is not recognized, but it's crucial to scan the
    Extra Field, identify each Extra Block, and handle it according to its
    type.
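       To illustrate that scanning rule (a sketch only, not Explorer's or Info-ZIP's actual code; the class and method names are mine, and a real reader would also check which 32-bit size fields were 0xFFFFFFFF to know which Zip64 values are actually present), walking the Extra Field block by block and honoring only the type 0x0001 block looks roughly like this:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Sketch: scan a .zip Extra Field and pull the 64-bit sizes from the Zip64
    // Extra Block (type 0x0001), skipping any other blocks instead of assuming
    // the Zip64 block comes first.
    public class ExtraFieldScanner {
        public static long[] findZip64Sizes(byte[] extraField) {
            ByteBuffer buf = ByteBuffer.wrap(extraField).order(ByteOrder.LITTLE_ENDIAN);
            while (buf.remaining() >= 4) {
                int type = buf.getShort() & 0xFFFF;     // 2-byte type code
                int len  = buf.getShort() & 0xFFFF;     // 2-byte count of data bytes to follow
                if (len > buf.remaining()) break;       // malformed field; give up
                if (type == 0x0001 && len >= 16) {      // simplified: assumes both sizes present
                    long uncompressed = buf.getLong();  // order per APPNOTE section 4.5.3
                    long compressed   = buf.getLong();
                    return new long[] { uncompressed, compressed };
                }
                buf.position(buf.position() + len);     // skip a block we don't recognize
            }
            return null;                                // no Zip64 block in this Extra Field
        }
    }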
       Here are some relatively small (about 14MiB each) test archives which
    illustrate the problem:
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_V.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_W.zip
       Correct info, from UnZip 6.00 ("unzip -lv"):
    Archive:  test_4g.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_V.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_W.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    (In these reports, "Length" is the uncompressed size; "Size" is the
    compressed size.)
       Incorrect info, from (Windows 7) Windows Explorer:
    Archive        Name          Compressed size   Size
    test_4g.zip    test_4g.txt         14,454 KB   562,951,376,907,238 KB
    test_4g_V.zip  test_4g.txt         14,454 KB   8,796,110,221,518 KB
    test_4g_W.zip  test_4g.txt         14,454 KB   1,464,940,363,777 KB
       Faced with these unrealistic sizes, Windows Explorer refuses to
    extract the member file, for lack of (petabytes of) free disk space.
       The archive test_4g.zip has the following Extra Blocks: universal
    time (type = 0x5455) and 64-bit sizes (type = 0x0001).  test_4g_V.zip
    has: PKWARE VMS (type = 0x000c) and 64-bit sizes (type = 0x0001).
    test_4g_W.zip has: NT security descriptor (type = 0x4453), universal
    time (type = 0x5455), and 64-bit sizes (type = 0x0001).  Obviously,
    Info-ZIP UnZip has no trouble correctly finding the 64-bit size info in
    these archives, but Windows Explorer is clearly confused.  (Note that
    "1,464,940,363,777 KB" translates to 0x0005545500000400 (bytes), and
    "0x00055455" looks exactly like the size, "0x0005" and the type code
    "0x5455" for a "UT" universal time Extra Block, which was present in
    that archive.  This is consistent with the hypothesis that the wrong
    data in the Extra Field are being interpreted as the 64-bit size data.)
       Without being able to see the source code involved here, it's hard to
    know exactly what it's doing wrong, but it does appear that the .zip
    reader used by Windows Explorer is using a very (too) simple-minded
    method to extract 64-bit size data from the Extra Field, causing it to
    get bad data from a properly formed archive.
       I suspect that the engineer involved will have little trouble finding
    and fixing the code which parses an Extra Field to extract the 64-bit
    sizes correctly, but if anyone has any questions, we'd be happy to help.
       For the Info-ZIP (http://info-zip.org/) team,
       Steven Schweda

    > We can't get the source (info-zip) program for test.
       I don't know why you would need to, but yes, you can:
          http://www.info-zip.org/
          ftp://ftp.info-zip.org/pub/infozip/src/
    You can also get pre-built executables for Windows:
          ftp://ftp.info-zip.org/pub/infozip/win32/unz600xn.exe
          ftp://ftp.info-zip.org/pub/infozip/win32/zip300xn.zip
    > In addition, since other zip application runs correctly. Since it should
    > be your software itself issue.
       You seem to misunderstand the situation.  The facts are these:
       1.  For your convenience, I've provided three test archives, each of
    which includes a file larger than 4GiB.  These archives are valid.
       2.  Info-ZIP UnZip (version 6.00 or newer) can process these archives
    correctly.  This is consistent with the fact that these archives are
    valid.
       3.  Programs from other vendors can process these archives correctly.
    I've supplied a screenshot showing one of them (7-Zip) doing so, as you
    requested.  This is consistent with the fact that these archives are
    valid.
       4.  Windows Explorer (on Windows 7) cannot process these archives
    correctly, apparently because it misreads the (Zip64) file size data.
    I've supplied a screenshot of Windows Explorer showing the bad file size
    it gets, and the failure that occurs when one tries to use it to extract
    the file from one of these archives, as you requested.  This is
    consistent with the fact that there's a bug in the .zip reader used by
    Windows Explorer.
       Yes, "other zip application runs correctly."  Info-ZIP UnZip runs
    correctly.  Only Windows Explorer does _not_ run correctly.

  • Copying large file sets to external drives hangs copy process

    Hi all,
    Goal: to move large media file libraries for iTunes, iPhoto, and iMovie to external drives, which will then serve as the media drive for a new 2013 iMac. I am attempting to consolidate many old drives accumulated over the years onto newer and larger drives.
    Hardware: moving from a 2010 Mac Pro to a variety of USB and other drives for use with a 2013 iMac. The example below is from the boot drive of the Mac Pro. Today the target drive was a 3TB Seagate GoFlex(?) USB 3 drive formatted as HFS+ Journaled; all drives are this format. I was using the Seagate drive on both the Mac Pro (USB 2) and the iMac (USB 3). I also use a NitroAV FireWire and USB hub to connect 3-4 USB and FW drives to the Mac Pro.
    OS: Mac OS X 10.9.1 on Mac Pro 2010
    Problem: Today--trying to copy large file sets such as iTunes, iPhoto libs, iMovie events from internal Mac drives to external drive(s) will hang the copy process (forever). This seems to mostly happen with very large batches of files: for example, an entire folder of iMovie events, the iTunes library; the iPhoto library. Symptom is that the process starts and then hangs at a variety of different points, never completing the copy. Requires a force quit of Finder and then a hard power reboot of the Mac. Recent examples today were (a) a hang at 3 Gb for a 72 Gb iTunes file; (b) hang at 13 Gb for same 72 Gb iTunes file; (c) hang at 61 Gb for a 290 Gb iPhoto file. In the past, I have had similar drive-copying issues from a variety of USB 2, USB 3 and FW drives (old and new) mostly on the Mac Pro 2010. The libraries and programs seem to run fine with no errors. Small folder copying is rarely an issue. Drives are not making weird noises. Drives were checked for permissions and repairs. Early trip to Genius Bar did not find any hardware issues on the internal drives.
    I seem to get these "dropoff" of hard drives unmounting themselves and other drive-copy hangs more often than I should. These drives seem to be ok much of the time but they do drop off here and there.
    Attempted solutions today: (1) Turned off all networking on the Mac -- Ethernet and WiFi. This appeared to work and allowed the 72 GB iTunes file to fully copy without an issue. However, on the next several attempts to copy the iPhoto library the hangs returned (at 16 and then 61 GB) with no additional workarounds. (2) Restarting changes the amount copied per attempt, but it still hangs. (3) The last line of a crash report said "Thunderbolt", but the Mac Pro has no Thunderbolt or Mini DisplayPort. I did format the Seagate drive on the new iMac, which has Thunderbolt. ???
    Related threads were slightly different. Any thoughts or solutions would be appreciated. Better copy software than Apple's Finder? I want the new Mac to be clean and thus did not do data migration. Should I do that only for the iPhoto library? I'm stumped.
    It seems like more and more people will need to move large media file sets to external drives as they load up more and more iPhone movies (my thing) and buy new Macs with smaller flash storage. Why can't the copy process just "skip" the parts it can't copy and continue the process? Put an X on the photos/movies that didn't make it?
    Thanks -- John

    I'm having a similar problem.  I'm using a MacBook Pro 2012 with a 500GB SSD as the main drive, 1TB internal drive (removed the optical drive), and also tried running from a Sandisk Ultra 64GB Micro SDXC card with the beta version of Mavericks.
    I have a HUGE 1TB Final Cut Pro library that I need to get off my LaCie Thunderbolt drive and moved to a 3TB WD USB 3.0 drive.  Every time I've tried to copy it, the process hangs at some point, roughly 20% of the way through, and then my MacBook eventually restarts on its own.  No luck getting the file copied.  Now I'm trying to create a disk image using Disk Utility to get the file from the Thunderbolt drive and saved to the 3TB WD drive. It's been running for half an hour so far, and it appears it could take as long as 5 hours to complete.
    Doing the copy via disk image was a shot in the dark and I'm not sure how well it will work if I need to actually use the files again. I'll post my results after I see what's happened.

  • Copying hangs when copying large files from DVD

    I'm having trouble copying large files (>2GB) from DVDs to my built-in hard drive. When I drag the file icon from the DVD window to the desktop, the progress bar pops up almost immediately, and says "0 KB of 2.3 GB copied" and "Estimated time left: 1 minute" -- and then it just sits there, doing nothing. In some cases, after five minutes or so of inactivity, the copy finally begins... but most of the time it never does. Force-Quit doesn't help; the only way out is to do an emergency restart and try again.
    This problem began when I upgraded to Tiger, and several members of my office group have been experiencing the same problem on their own computers as well.
    I've poked around the forum looking for answers, but the only thing I could find that looked related was a conflict between iSight and some third-party external hard drives... but none of us have iSight, and none of us are using third-party external drives, so clearly that's not the issue we're having.
    Any help out there?
    iBook G4   Mac OS X (10.4.6)  

    Michael, I occasionally notice anomalous behaviour when files get dragged between volumes. Mostly the copying window shows an accurate and immediate indication of what's going on, but other times it just looks like nothing's happening, the display never changes, yet it is in fact copying the files over. Now, if you're copying 2GB files from DVD to HD, it's gonna take some time, more than 5 minutes, so I'm wondering how long you've left it before forcing it to quit?

  • How can I get the Organizer in Elements 12 to display the file name and date of the thumbnail images?

    The View menu option to display the File Name is grayed out and thus I cannot display the file name.

    Hi,
    You need to go to View menu and check Details as well.
    You may even have to make the thumbnails larger by using the zoom slider at the bottom
    Good luck
    Brian

  • Flash media server taking forever to load large files

    We purchased FMIS and we are encoding large 15+ hour MP4 recordings using Flash Media Encoder. When opening these large files for playback, if they have not been opened recently, the player displays the loading indicator for up to 4 minutes! Once the file has apparently been cached on the server it opens immediately from any browser, even after clearing the local browser cache. So a few questions for the experts:
    1. Why is it taking so long to load the file? Is it because the MP4 metadata is in the wrong format and the file is so huge? I read somewhere that Media Encoder records with incorrect MP4 metadata; is that still the case?
    2. Once it's cached on the server, exactly how much of it is cached? Some of these files are larger than 500MB.
    3. What FMS settings do you suggest I change? FMIS is running on Windows Server R2 64-bit, but FMIS itself is 32-bit; we have not upgraded to the 64-bit version. We have 8GB of RAM. Is it OK to set the FMS cache to 3GB? And would that only have enough room for 3-4 large files, because we have hundreds of them?
    best,
    Tuviah
    Lead programmer, solid state logic inc

    Hi Tuviah,
    You may want to email me offline about more questions here as it can get a little specific but I'll hit the general problems here.
    MP4 is a fine format, and I won't speak ill of it, but it does have weaknesses.  In the FMS implementation those weaknesses tend to manifest around the combination of recording and very large files, so some of these things are a known issue.
    The problem is that MP4 recording is achieved through what's called MP4 fragmentation.  It's a part of the MP4 spec that not every vendor supports, but it has a very particular purpose, namely the ability to continually grow an MP4-style file efficiently.  Without fragments one has the problem that a large file must be constantly rewritten as a whole to update the MOOV box (the file's index); fragments allow simple appending.  In other words, it's tricky to make MP4 recording scalable (as for a server) and still keep the basic MP4 format, hence fragments.
    There's a tradeoff to this, however, in that the index of the file is broken up over the whole file.  Also, these large files are likely tucked away on a NAS or something similar for you - normal, as you likely can't store all of them locally.  However, that is the bad combination of needing to index the file (touching parts of the whole thing) and doing network reads to do it.  This is likely the cause of the long delay you're facing - here are some things you can do to help.
    1. Post-process the F4V/MP4 files into a non-fragmented format - this may help significantly with load time; though it could still be considered slow, it should increase in speed.  Cheap to try out on a few files. (F4V and MP4 are the same thing for this purpose, so don't worry about the tool naming.)
    http://www.adobe.com/products/flashmediaserver/tool_downloads/
    2. Alternatively, this is why we created the raw: format.  For long recordings MP4 is just not ideal, and the raw format solves many of the problems involved in doing this kind of recording.  Check it out:
    http://help.adobe.com/en_US/flashmediaserver/devguide/WSecdb3a64785bec8751534fae12a16ad0277-8000.html
    3. You may also want to check out FMS HTTP Dynamic Streaming - it also solves this problem, along with others like content protection and DVR, and it's our most recent offering, so it has a lot of strengths the other options don't.
    http://www.adobe.com/products/httpdynamicstreaming/
    Hope that helps,
    Asa

  • Getting large file sizes in AppleScript...

    For starters I am new to AppleScript. Please excuse my lack of knowledge.
    I am trying to get file sizes for large files and kicking that out to a text file. Problem is that all these files are a gigabyte and up. When I use:
    set fileSIZE to size of (info for chosenFile) as string
    --result is 1.709683486E+9
    I also tried using:
    tell application "Finder"
    set theSize to physical size of chosenFile
    end tell
    --result is 1.724383232E+9
    So, my question is: is there another way that shows me the bytes in this format, "1,709,683,486" bytes?
    The "size of (info for chosenFile)" shows the right numbers "1.709683486". I don't really need the commas. I had a larger file of 44GB and the result was 4.4380597717E+10.
    How do I remove the ".", the "E+9" or "E+10"?

    looks like hubionmac has found a winner.
    perhaps not:
    page 89 of the AppleScript Language Guide:
    the largest integer value is 536,870,911. Larger integers are converted to real numbers.
    Notice that the different requests for size report different values.
    Notice that ls -l returns the data fork size in Tiger.
    Notice the clever way of working with the resource fork in Unix ( /rsrc ). I found this in juliejuliejulie's code in another post.
    set theFile to (choose file)
    set fileSIZE to size of (info for theFile) as miles as string
    log "fileSIZE = " & fileSIZE
    tell application "Finder"
       set pSize to physical size of theFile
    end tell
    log "pSize = " & pSize
    set stringSize to pSize as miles as string
    log "stringSize = " & stringSize
    set theItem to quoted form of POSIX path of (theFile)
    log "theItem = " & theItem
    -- unix ls -l command will give size. 
    -- Looks like ls -l gives the data fork size.
    set theDataSize to (do shell script "ls -l " & theItem & " | awk '{print $5}'")
    log "theDataSize = " & theDataSize
    set theRsrc to (POSIX path of theFile) & "/rsrc"
    log "theRsrc = " & theRsrc
    set theRsrc to quoted form of theRsrc
    log "theRsrc = " & theRsrc
    set theRsrcSize to (do shell script "ls -l " & theRsrc & " | awk '{print $5}'")
    log "theRsrcSize = " & theRsrcSize
    set output to do shell script "echo \"" & theDataSize & "+" & theRsrcSize & "\" | bc "
    log "combined data and resource size = " & output
    Here is what I get when I run the above script.
    tell current application
       choose file
          alias "Macintosh-HD:System Folder:Finder"
       info for alias "Macintosh-HD:System Folder:Finder"
          {name:"Finder", creation date:date "Tuesday, May 29, 2001 3:00:00 PM", modification date:date "Tuesday, May 29, 2001 3:00:00 PM", icon position:{1, 128}, size:2.439365E+6, folder:false, alias:false, package folder:false, visible:true, extension hidden:false, name extension:missing value, displayed name:"Finder", default application:alias "Macintosh-HD:Applications (Mac OS 9):Utilities:Assistants:Setup Assistant:Setup Assistant", kind:"Finder", file type:"FNDR", file creator:"MACS", type identifier:"dyn.agk8yqxwenk", locked:false, busy status:false, short version:"9.2", long version:"9.2, Copyright Apple Computer, Inc. 1983-2001"}
       (*fileSIZE = 2439365*)
    end tell
    tell application "Finder"
       get physical size of alias "Macintosh-HD:System Folder:Finder"
          2.445312E+6
       (*pSize = 2.445312E+6*)
       (*stringSize = 2445312*)
       (*theItem = '/System Folder/Finder'*)
    end tell
    tell current application
       do shell script "ls -l '/System Folder/Finder' | awk '{print $5}'"
          "1914636"
       (*theDataSize = 1914636*)
       (*theRsrc = /System Folder/Finder/rsrc*)
       (*theRsrc = '/System Folder/Finder/rsrc'*)
       do shell script "ls -l '/System Folder/Finder/rsrc' | awk '{print $5}'"
          "524729"
       (*theRsrcSize = 524729*)
       do shell script "echo \"1914636+524729\" | bc "
          "2439365"
       (*combined data and resource size = 2439365*)
    end tell
    Message was edited by: rccharles

  • Large file doesn't work in a message-splitting scenario

    Hello,
    I'm trying to measure XI performance in a message-splitting scenario by having the file adapter pull the XML file below and send it to XI to perform a simple "message split w/o BPM" and generate an XML file for each record in another folder.  I tried 100, 500, and 1000 records and they all turned out fine.  Once I tried the 2000-record case, the status of the message was WAITING in RWB; I checked sxmb_moni, and the message is in "recorded for outbound messaging", but I couldn't find any error.
    Is there some kind of threshold that can be adjusted for large files?  Thank you.
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:MultiXML xmlns:ns0="urn:himax:b2bi:poc:multiMapTest">
       <Record>
          <ID>1</ID>
          <FirstName>1</FirstName>
          <LastName>1</LastName>
       </Record>
    </ns0:MultiXML>

    Hello,
    The Queue ID is "XBTO1__0000".  I double-clicked it; it took me to the "qRFC Monitor (Inbound Queue)" screen, which showed a XBT01_0000 queue with 3 entries.
    I double-clicked on that, and it took me to another screen that showed a queue with the same name and "Running" as the status.
    I double-clicked on that, and I saw three entries.  For the latest entry, its StatusText says "Transaction recorded".
    I double-clicked on that, and it took me to Function Builder: Display SXMS_ASYNC_EXEC.
    What do I do from here?
