Storing large file on card

I need to store a small JPG on the card so that it can be accessed using the ISO 7816 file system commands. Since the APDU buffer is 256 bytes, the only thing I could think of was to break it up into 256-byte records under a file. Is this the only way to do it, given that it has to be accessible using the READ RECORD command?

I've answered this a few times before.
You will have to "loop" your command to store the complete image. In the applet you'll need a placeholder to know where in the byte array to store the next set of data.
This is similar to how the GlobalPlatform LOAD and STORE DATA commands work.
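
A minimal applet sketch of that approach (the class name, INS value, and array size are hypothetical, not from the original post): the applet keeps a persistent byte array plus an offset "placeholder", and each incoming APDU appends its data at that offset.

    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;
    import javacard.framework.Util;

    public class ImageStore extends Applet {

        private static final byte INS_STORE_CHUNK = (byte) 0xD0; // assumed INS value
        private static final short MAX_IMAGE = (short) 8000;     // size to fit the JPG

        private byte[] image;  // persistent (EEPROM) storage for the image
        private short offset;  // placeholder: where the next chunk is written

        private ImageStore() {
            image = new byte[MAX_IMAGE];
            offset = 0;
        }

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new ImageStore().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return;
            }
            byte[] buf = apdu.getBuffer();
            if (buf[ISO7816.OFFSET_INS] != INS_STORE_CHUNK) {
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
            }
            // For chunks of up to ~255 bytes this receives the whole data field.
            short len = apdu.setIncomingAndReceive();
            if ((short) (offset + len) > MAX_IMAGE) {
                ISOException.throwIt(ISO7816.SW_FILE_FULL);
            }
            // Append this chunk and advance the placeholder for the next APDU.
            Util.arrayCopyNonAtomic(buf, ISO7816.OFFSET_CDATA, image, offset, len);
            offset = (short) (offset + len);
        }
    }

The host side then just sends the JPG as a sequence of these chunk APDUs until the whole file has been written, much as GlobalPlatform LOAD and STORE DATA do for applet code and personalization data. If the image must also be readable with the ISO 7816 READ RECORD command, the applet (or a card file system) would additionally need to expose the stored chunks as records in a linear file; the storage loop itself stays the same.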

Similar Messages

  • Storing image files on card (VVV IMP)

    How do I store an image file on a Java Card?
    I want to store an image file on a Java Card.
    How should I proceed, and what steps need to be followed?
    Please help.
    Thanks in advance.

    See my answer to another post here ...

  • Storing large file in Oracle

    Hi,
    We have a requirement where we need to persist files (up to 100 MB or more) into the database.
    We are using Hibernate as the ORM.
    The approaches that we are considering are:
    1. use a BLOB field to upload the file.
    What I am doing here is
    SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
    Session session = sessionFactory.openSession();
    Transaction txn = session.beginTransaction();
    Bean bean = new Bean();
    bean.setFilename("testdata1.doc");
    byte[] dummy = {0};
    bean.setFilecontent(Hibernate.createBlob(dummy));
    session.save(bean);
    session.flush();
    session.refresh(bean);
    OutputStream out = ((oracle.sql.BLOB) bean.getFilecontent()).setBinaryStream(1);
    FileInputStream fis = new FileInputStream(new File("testdata1.doc"));
    int size = 1024;
    byte[] bytes = new byte[size];
    int res = fis.read(bytes);
    while (res != -1) {
        out.write(bytes);
        System.err.println("+++" + size + "/" + fis.available());
        bytes = new byte[size];
        res = fis.read(bytes);
    }
    fis.close();
    session.save(bean);
    session.flush();
    txn.commit();
    session.close();
    File content in bean is a java.sql.Blob
    Then I read this out of the DB and write to a file as shown here
    SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
    Session session = sessionFactory.openSession();
    Transaction txn = session.beginTransaction();
    Criteria c = session.createCriteria(Bean.class);
    c.add(Restrictions.like("filename", "testdata1.doc"));
    Bean bean = (Bean) c.uniqueResult();
    InputStream in = ((oracle.sql.BLOB) bean.getFilecontent()).getBinaryStream();
    FileOutputStream fos = new FileOutputStream("retrieved_testdata.doc");
    int size = 1024;
    byte[] bytes = new byte[size];
    int res = in.read(bytes);
    while (res != -1) {
        fos.write(bytes);
        System.err.println("---" + size + "/" + in.available());
        bytes = new byte[size];
        res = in.read(bytes);
    }
    fos.close();
    session.delete(bean);
    session.flush();
    txn.commit();
    session.close();
    I see that the retrieved file is 6 KB smaller than the original, and it won't open in Word :(
    I tried Base64-encoding the bytes before writing them to the BLOB; I can still see a discrepancy in the file size and it won't open.
    The other approach we tried was to read the file, Base64 the bytes, and put them into a CLOB/NCLOB field. In that case, when I retrieve the file out of the database and write it out, the file is smaller by a few hundred KB and it won't open.
    My Oracle database is Oracle 10.2.0.2 64 bit on RHEL
    I am using Oracle 10.2.0.4 drivers to be compatible with the application from where we retrieve some other data.
    We are using Java 1.5
    Am I doing something wrong?
    What is the correct way to do this?
    UPDATE:
    I am able to make it work using
    bean.setFilecontent(Hibernate.createBlob(fis));
    But there might be a case where we have to write in sets of byte arrays. If it is not possible to make that work, then we could look at enforcing the stream-based approach instead.
    Thanks
    Hari
    Edited by: Wilddev007 on Feb 10, 2009 11:47 AM

    This seems like a seriously bad idea. What is the benefit of 'saving a 100 MB file' in the DBMS, as opposed to copying it via FTP to some remote machine or other disk?
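
    For what it's worth, two things stand out in the copy loops quoted above: the full 1024-byte buffer is written on every pass regardless of how many bytes were actually read, and the BLOB output stream is never closed, so buffered data may never be flushed before the commit. A minimal sketch of a chunked copy helper with those two fixes (the class and method names are hypothetical; the stream argument would be the one returned by setBinaryStream(), and the session save/flush/commit would follow the call):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public final class BlobCopy {

        // Streams a file into an already-opened BLOB output stream in 1 KB chunks.
        public static void copyFileToBlob(File source, OutputStream blobOut) throws IOException {
            FileInputStream fis = new FileInputStream(source);
            try {
                byte[] buffer = new byte[1024];
                int res = fis.read(buffer);
                while (res != -1) {
                    blobOut.write(buffer, 0, res); // write only the bytes actually read
                    res = fis.read(buffer);
                }
            } finally {
                fis.close();
            }
            blobOut.close(); // flush the remaining buffered data before save/commit
        }
    }

    Whether storing 100 MB files in the database is a good idea at all is a separate question, as the reply above notes.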

  • Some music files do not show up in Google Play Music app library

    Some music files do not show up in the Google Play Music app library. I did clear the cache/data and restarted the phone. The music is stored on the SD card. Most of the music in the library is in the same folder on the SD card. I can play the songs from the file manager, but they still are not in the music library in Play Music.

    Cyndi6858, help is here! We'd be happy to help figure this out. Just to be sure though, the Droid Maxx should not have an SD card. Is this the Droid Razr Maxx? How did you add the music to the device? Are you able to see the files and folders located on the SD card or device when plugged in?
    Thanks,
    MichelleH_VZW
    Follow us on Twitter @VZWSupport

  • USB key ejecting itself when transferring large files

    I have an LG 16 GB USB key formatted as Mac OS Extended (Journaled), and I've been encountering a particular issue when I take files from another computer and copy them from the key to my MacBook Pro (OS X 10.6.2). After the transfer starts, the key ejects itself with the "the disk was not ejected properly" message, then reappears right away in the Finder...
    The problem doesn't present itself when I copy a file from my MacBook Pro to the key and then transfer it back to my computer. I've tried this with both small files (under 200 MB) and large files (over 4 GB).
    I've also noticed the transfer rate is very slow when this happens.
    So far the problem only presents itself when I transfer files from one of the Mac Pros in my university's studios, but since I work a lot on those machines, this is becoming quite a problem. I don't have the complete specs of those machines; they all run Leopard, probably 10.5.something, not Snow Leopard... perhaps that's the problem?
    I've seen similar posts on the forum concerning this issue with Time Machine backup drives, suggesting it had something to do with the sleep preferences; however, I've tried all the potential solutions from those posts and the problem persists.
    So far the only workaround I've found has been to transfer the files to my girlfriend's MacBook (running Tiger), and then transfer them in target disk mode from her computer to mine... quite inconvenient...
    I have a feeling this might be due to the formatting of my key. I have a 2 GB USB key formatted as MS-DOS (FAT32) and have had no problems with it so far. The reason I formatted in Mac OS Extended is simple: I work in video, and with HD I often find myself moving around single files larger than 4 GB, which I've come to understand isn't possible on FAT32.
    I'd like to know how to resolve this, and especially whether it is indeed a format issue, since I'm soon going to acquire a new external hard drive for the sole purpose of storing my increasingly large media files and would like to know how to format it.

    Hi,
    I have an external USB card reader (indeed, two different readers) which has the same problem with self-ejecting disks. Every transfer of data from my camera's SD card is a pain now. There are many different threads related to the "the disk was not ejected properly" situation, but no working solution so far.

  • How do I find where my larger files are all at Once

    Like many others, I am getting close to having a full hard drive. Is there a way to tell where all my large files are, all at once, without having to go into each folder, file, and application?
    Thx Much

    Thanks much. I am aware of all that; I'm just wondering if there is any software, freeware, or widget that could tell me where all my very large files are at the same time without having to go through all the folders/files/libraries.
    It is amazing that many large files are stored in the strangest places.
    Jack

  • Windows Explorer misreads large-file .zip archives

       I just spent about 90 minutes trying to report this problem through
    the normal support channels with no useful result, so, in desperation,
    I'm trying here, in the hope that someone can direct this report to some
    useful place.
       There appears to be a bug in the .zip archive reader used by Windows
    Explorer in Windows 7 (and up, most likely).
       An Info-ZIP Zip user recently reported a problem with an archive
    created using our Zip program.  The archive was valid, but it contained
    a file which was larger than 4GiB.  The complaint was that Windows
    Explorer displayed (and, apparently believed) an absurdly large size
    value for this large-file archive member.  We have since reproduced the
    problem.
       The original .zip archive format includes uncompressed and compressed
    sizes for archive members (files), and these sizes were stored in 32-bit
    fields.  This caused problems for files which are larger than 4GiB (or,
    on some system types, where signed size values were used, 2GiB).  The
    solution to this fundamental limitation was to extend the .zip archive
    format to allow storage of 64-bit member sizes, when necessary.  (PKWARE
    identifies this format extension as "Zip64".)
       The .zip archive format includes a mechanism, the "Extra Field", for
    storing various kinds of metadata which had no place in the normal
    archive file headers.  Examples include OS-specific file-attribute data,
    such as Finder info and extended attributes for Apple Macintosh; record
    format, record size, and record type data for VMS/OpenVMS; universal
    file times and/or UID/GID for UNIX(-like) systems; and so on.  The Extra
    Field is where the 64-bit member sizes are stored, when the fixed 32-bit
    size fields are too small.
       An Extra Field has a structure which allows multiple types of extra
    data to be included.  It comprises one or more "Extra Blocks", each of
    which has the following structure:
           Size (bytes) | Description
          --------------+------------
                2       | Type code
                2       | Number of data bytes to follow
            (variable)  | Extra block data
       The problem with the .zip archive reader used by Windows Explorer is
    that it appears to expect the Extra Block which includes the 64-bit
    member sizes (type code = 0x0001) to be the first (or only) Extra Block
    in the Extra Field.  If some other Extra Block appears at the start of
    the Extra Field, then its (non-size) data are being incorrectly
    interpreted as the 64-bit sizes, while the actual 64-bit size data,
    further along in the Extra Field, are ignored.
       Perhaps the .zip archive _writer_ used by Windows Explorer always
    places the Extra Block with the 64-bit sizes in this special location,
    but the .zip specification does not demand any particular order or
    placement of Extra Blocks in the Extra Field, and other programs
    (Info-ZIP Zip, for example) should not be expected to abide by this
    artificial restriction.  For details, see section "4.5 Extensible data
    fields" in the PKWARE APPNOTE:
          http://www.pkware.com/documents/casestudies/APPNOTE.TXT
       A .zip archive reader is expected to consider the Extra Block type
    codes, and interpret accordingly the data which follow.  In particular,
    it's not sufficient to trust that any particular Extra Block will be the
    first one in the Extra Field.  It's generally safe to ignore any Extra
    Block whose type code is not recognized, but it's crucial to scan the
    Extra Field, identify each Extra Block, and handle it according to its
    type.
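       To make the expected behavior concrete, here is a minimal sketch (hypothetical
    names, in Java; it is not taken from any particular unzip implementation) of
    scanning an Extra Field by type code rather than by position.  For brevity it
    assumes the 8-byte uncompressed size is the first value present in the Zip64
    block, which is the case when the fixed 32-bit uncompressed-size field holds
    0xFFFFFFFF.
        public final class Zip64ExtraScan {
            // Walks the Extra Field block by block; all fields are little-endian.
            // Returns the uncompressed size from the Zip64 block (type 0x0001),
            // or -1 if no such block is present.
            public static long findZip64UncompressedSize(byte[] extraField) {
                int pos = 0;
                while (pos + 4 <= extraField.length) {
                    int type = readU16(extraField, pos);
                    int dataLen = readU16(extraField, pos + 2);
                    if (pos + 4 + dataLen > extraField.length) {
                        break;                  // malformed block: stop scanning
                    }
                    if (type == 0x0001 && dataLen >= 8) {
                        return readU64(extraField, pos + 4);
                    }
                    pos += 4 + dataLen;         // skip 0x5455, 0x000c, 0x4453, ...
                }
                return -1L;
            }
            private static int readU16(byte[] b, int off) {
                return (b[off] & 0xFF) | ((b[off + 1] & 0xFF) << 8);
            }
            private static long readU64(byte[] b, int off) {
                long value = 0;
                for (int i = 7; i >= 0; i--) {
                    value = (value << 8) | (b[off + i] & 0xFF);
                }
                return value;
            }
        }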
       Here are some relatively small (about 14MiB each) test archives which
    illustrate the problem:
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_V.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_W.zip
       Correct info, from UnZip 6.00 ("unzip -lv"):
    Archive:  test_4g.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_V.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_W.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    (In these reports, "Length" is the uncompressed size; "Size" is the
    compressed size.)
       Incorrect info, from (Windows 7) Windows Explorer:
    Archive        Name          Compressed size   Size
    test_4g.zip    test_4g.txt         14,454 KB   562,951,376,907,238 KB
    test_4g_V.zip  test_4g.txt         14,454 KB   8,796,110,221,518 KB
    test_4g_W.zip  test_4g.txt         14,454 KB   1,464,940,363,777 KB
       Faced with these unrealistic sizes, Windows Explorer refuses to
    extract the member file, for lack of (petabytes of) free disk space.
       The archive test_4g.zip has the following Extra Blocks: universal
    time (type = 0x5455) and 64-bit sizes (type = 0x0001).  test_4g_V.zip
    has: PKWARE VMS (type = 0x000c) and 64-bit sizes (type = 0x0001).
    test_4g_W.zip has: NT security descriptor (type = 0x4453), universal
    time (type = 0x5455), and 64-bit sizes (type = 0x0001).  Obviously,
    Info-ZIP UnZip has no trouble correctly finding the 64-bit size info in
    these archives, but Windows Explorer is clearly confused.  (Note that
    "1,464,940,363,777 KB" translates to 0x0005545500000400 (bytes), and
    "0x00055455" looks exactly like the size, "0x0005" and the type code
    "0x5455" for a "UT" universal time Extra Block, which was present in
    that archive.  This is consistent with the hypothesis that the wrong
    data in the Extra Field are being interpreted as the 64-bit size data.)
       Without being able to see the source code involved here, it's hard to
    know exactly what it's doing wrong, but it does appear that the .zip
    reader used by Windows Explorer is using a very (too) simple-minded
    method to extract 64-bit size data from the Extra Field, causing it to
    get bad data from a properly formed archive.
       I suspect that the engineer involved will have little trouble finding
    and fixing the code which parses an Extra Field to extract the 64-bit
    sizes correctly, but if anyone has any questions, we'd be happy to help.
       For the Info-ZIP (http://info-zip.org/) team,
       Steven Schweda

    > We can't get the source (info-zip) program for test.
       I don't know why you would need to, but yes, you can:
          http://www.info-zip.org/
          ftp://ftp.info-zip.org/pub/infozip/src/
    You can also get pre-built executables for Windows:
          ftp://ftp.info-zip.org/pub/infozip/win32/unz600xn.exe
          ftp://ftp.info-zip.org/pub/infozip/win32/zip300xn.zip
    > In addition, since other zip application runs correctly. Since it should
    > be your software itself issue.
       You seem to misunderstand the situation.  The facts are these:
       1.  For your convenience, I've provided three test archives, each of
    which includes a file larger than 4GiB.  These archives are valid.
       2.  Info-ZIP UnZip (version 6.00 or newer) can process these archives
    correctly.  This is consistent with the fact that these archives are
    valid.
       3.  Programs from other vendors can process these archives correctly.
    I've supplied a screenshot showing one of them (7-Zip) doing so, as you
    requested.  This is consistent with the fact that these archives are
    valid.
       4.  Windows Explorer (on Windows 7) cannot process these archives
    correctly, apparently because it misreads the (Zip64) file size data.
    I've supplied a screenshot of Windows Explorer showing the bad file size
    it gets, and the failure that occurs when one tries to use it to extract
    the file from one of these archives, as you requested.  This is
    consistent with the fact that there's a bug in the .zip reader used by
    Windows Explorer.
       Yes, "other zip application runs correctly."  Info-ZIP UnZip runs
    correctly.  Only Windows Explorer does _not_ run correctly.

  • Copying large file sets to external drives hangs copy process

    Hi all,
    Goal: to move large media file libraries for iTunes, iPhoto, and iMovie to external drives, which will serve as media drives for a new 2013 iMac. I am attempting to consolidate many old drives accumulated over the years onto newer and larger drives.
    Hardware: moving from a 2010 Mac Pro to a variety of USB and other drives for use with a 2013 iMac. The example below is from the boot drive of the Mac Pro. Today, the target drive was a 3 TB Seagate GoFlex USB 3 drive formatted as HFS+ Journaled. All drives use this format. I was using the Seagate drive on both the Mac Pro (USB 2) and the iMac (USB 3). I also use a NitroAV FireWire and USB hub to connect 3-4 USB and FW drives to the Mac Pro.
    OS: Mac OS X 10.9.1 on Mac Pro 2010
    Problem: Today, trying to copy large file sets such as the iTunes, iPhoto, and iMovie libraries from internal Mac drives to external drive(s) hangs the copy process (forever). This seems to mostly happen with very large batches of files: for example, an entire folder of iMovie events, the iTunes library, or the iPhoto library. The symptom is that the process starts and then hangs at a variety of different points, never completing the copy. It requires a force quit of the Finder and then a hard power reboot of the Mac. Recent examples today were (a) a hang at 3 GB for a 72 GB iTunes file; (b) a hang at 13 GB for the same 72 GB iTunes file; (c) a hang at 61 GB for a 290 GB iPhoto file. In the past, I have had similar drive-copying issues from a variety of USB 2, USB 3 and FW drives (old and new), mostly on the Mac Pro 2010. The libraries and programs seem to run fine with no errors. Small folder copying is rarely an issue. The drives are not making weird noises. The drives were checked for permissions and repaired. An early trip to the Genius Bar did not find any hardware issues on the internal drives.
    I seem to get these "dropoffs" of hard drives unmounting themselves, and other drive-copy hangs, more often than I should. These drives seem to be OK much of the time, but they do drop off here and there.
    Attempted solutions today: (1) Turned off all networking on the Mac, both Ethernet and Wi-Fi. This appeared to work and allowed the 72 GB iTunes file to fully copy without an issue. However, on the next several attempts to copy the iPhoto library the hangs returned (at 16 and then 61 GB) with no additional workarounds. (2) A restart changes the amount copied per attempt, but it still hangs. (3) The last line of a crash report said "Thunderbolt", but the Mac Pro has no Thunderbolt or Mini DisplayPort. I did format the Seagate drive on the new iMac, which has Thunderbolt. ???
    Related threads were slightly different. Any thoughts or solutions would be appreciated. Is there better copy software than Apple's Finder? I want the new Mac to be clean and thus did not do data migration. Should I do that only for the iPhoto library? I'm stumped.
    It seems like more and more people will need to move large media file sets to external drives as they load up more and more iPhone movies (my thing) and buy new Macs with smaller flash storage. Why can't the copy process just "skip" the parts it can't copy and continue? Put an X on the photos/movies that didn't make it?
    Thanks -- John

    I'm having a similar problem.  I'm using a MacBook Pro 2012 with a 500GB SSD as the main drive, 1TB internal drive (removed the optical drive), and also tried running from a Sandisk Ultra 64GB Micro SDXC card with the beta version of Mavericks.
    I have a HUGE 1TB Final Cut Pro library that I need to get off my LaCie Thunderbolt drive and moved onto a 3TB WD USB 3.0 drive. Every time I've tried to copy it, the process would hang at some point, roughly 20% of the way through, then my MacBook would eventually restart on its own. No luck getting the file copied. Now I'm trying to create a disk image using Disk Utility to get the file from the Thunderbolt drive and saved to the 3TB WD drive. It's been running for half an hour so far, and it appears it could take as long as 5 hours to complete.
    Doing the copy via disk image was a shot in the dark and I'm not sure how well it will work if I need to actually use the files again. I'll post my results after I see what's happened.

  • Error code -36 when copying large file

    Aperture (Apple's pro photo software) stores all of its data (pictures, metadata, etc.) in what appears to be a large file called the Library. In fact, it's a large folder with all of the contents hidden so you don't mess around with it. I back this file up periodically to both an internal and an external drive. After a hard drive failure and recreating my whole machine, everything seemed fine and Aperture is working normally. I have tried to drag-copy my Aperture Library (rebuilt from my Aperture vault, a backup file created by Aperture) to my internal second drive, a process that takes about 2 hours (300 GB). The drive I'm copying to has over 451 GB available and was freshly formatted during this rebuild process. A little more than halfway through the drag copy I get the following message.
    "The Finder cannot complete the operation because some data in “Aperture Library” could not be read or written, (Error code -36)"
    Here's what I've done so far.
    1. Aperture has a function to repair permissions and rebuild the file, done both several times.
    2. I found (by looking at all 20,000-plus pictures) some corrupt picture files and deleted them, then repeated step one.
    3. Tried a duplicate file on the same drive, same error.
    Thoughts? Help? I can't find anything on a Mac OS error -36.
    By the way, I'm running Mac OS 10.5.8 and Aperture 2.1.4 on a Quad Core G5.

    This is what it means:
    Type -36 error (I/O error, a "bummer")
    The file is having difficulty while either being read from the drive or written to the drive. The file data may have been improperly written to the drive, or the hard drive or disk may be damaged.
    This is almost always indicative of a media error (a hard error on the disk). Sometimes (rarely) it is transient.
    Solutions: Try copying the file to another drive. Use disk recovery software, such as Disk First Aid, to examine the disk. You can try rebooting with all extensions off; once in a while this will allow you to read the data. The file in question should be restored from a backup that was stored on a different disk. Regular backups can reduce the time to recover from this error.

  • Copying large files to internal HD works for a bit, then freezes

    Hello fellow Mac users,
    I have a fairly heavily modified G4 Digital Audio. It was originally the 533 MHz model, then upgraded to a dual 1.2 GHz. All in all, it is a workhorse, but I have a very strange issue that seems to be creeping up when I copy files, particularly large ones (I originally thought greater than 4GB, but now it has happened with files that are less than 3GB). Files will start to copy just fine, then the progress bar will stop and the file transfer freezes. Sometimes I get the spinning pinwheel, sometimes not. When I press 'stop', it does nothing and I end up having to crash the machine and reboot.
    I recently installed a Sonnet ATA/133 card and a large 300GB drive. The whole point of having a large drive like this is to copy large files, right? Seems that whenever I try to copy files to the new drive, the process begins and things are very quick, until it hits a particular file and then just stops.
    There doesn't seem to be anything wrong with either the Sonnet ATA/133 or the drive, necessarily. I've run DiskWarrior, repaired permissions, and zapped the PRAM and NVRAM. I've tried eliminating devices and nothing seems to really work. The new drive is a Maxtor 300GB. The jumper setting was set to 'CS Enabled'. I'm not really sure what that means. I then changed it to Master and made sure that it was the master on the ATA bus, but the problem persists either way.
    I'm wondering if anyone else has had trouble copying large files (I'm sure many of you have installed large drives on a G4), if this is an anomaly, or if any of you have any advice.
    Cheers,
    Shane

    Update as to how I got it to work:
    I did get it to work... although it's still a little perplexing as to exactly why:
    I mentioned previously that I'd switched the jumpers on the HD's from 'CS enabled' to 'Master'. This actually made things worse. Files started freezing after only copying 100MB or so. So I switched them back. Once I switched the jumpers back to 'CS enabled', things started working. I even copied over an 86GB MP3 folder in under an hour! This solution is odd, as I'm sure that this is how I had it set previously. However, maybe something wasn't connecting right or the computer was finally in the mood to do it... I don't know.. but I am grateful that it's working...
    Shane

  • Trying to link a large file to an email. Keep getting asked for the master password to the security device. I have changed the master password in Firefox and TB but still get the same response.

    I have tried over and over to send this large file. I've never synced before, so I got that all set up. Then it asked for the master password to the security device. I don't think I had ever set up an account, so I did that and created a password. When I try to send the email, I'm asked to link the file; it goes to 'box' and I authorize Thunderbird to use it. Then it asks for the master password, and when I type it in it doesn't work. I have reset the master password for Firefox and Thunderbird and still can't get it to work. I really don't know what I am doing here, so any help is greatly appreciated. Thanks!

    In my previous response, I gave a link to this website page:
    http://kb.mozillazine.org/Master_password
    It contained some information about what to do if TB keeps asking for a 'Master Password' which was never set. This was an old bug that was resolved years ago, so it is surprising you are getting it assuming the Thunderbird Master Password was never set.
    However, your comments are confusing because they are conflicting; in your question you say:
    '' I have changed master pw on firefox and TB ''
    This implies you set a master password in Thunderbird and changed it, but in last comment you say you never defined it:
    '' I have used the Firefox direction to eliminate the master password, which I never defined in the first place,''
    I'm not sure what you mean by 'I have used the Firefox direction' - perhaps you mean the link I gave. As of course we are not talking about firefox :)
    So, at this point, it is hard to know what you did because you are posting conflicting information.
    Using info at the link, I presume you tried this:
    quote
    ''Thunderbird: Choose Tools -> Error Console, paste the expression: openDialog("chrome://pippki/content/resetpassword.xul") and press the Evaluate button. That will open a dialog asking you if you want to reset your password. ''
    quote
    ''If that doesn't work exit Thunderbird and delete the '''key3.db''' file in your profile.''
    Did you do this? You do not say if you did this.
    At no point does it say uninstall Mozilla Thunderbird program, so I do not know why you would do it. Uninstalling the Program has nothing to do with your Profile for very obvious reasons.
    '''Clearer instructions on how to delete the '''key3.db''' file:'''
    Make hidden files and folders visible:
    * http://kb.mozillazine.org/Show_hidden_files_and_folders
    The AppData folder is a hidden folder; to show hidden folders, open a Windows Explorer window and choose "Organize → Folder and Search Options → Folder Options → View (tab) → Show hidden files and folders".
    Assuming your Profile was stored in the default place, it can be located here:
    * C:\Users\Windows user name\AppData\Roaming\Thunderbird\Profiles\Profile name\
    Thunderbird must be closed before doing anything in the Profile folders.
    If you have since reinstalled Thunderbird, you can also use it to get quick access to your Profile folders.
    In Thunderbird
    * Help > Troubleshooting Information
    * click on 'show folder' button
    * '''Close/Exit Thunderbird - this is important'''
    * Look for and delete this file : '''key3.db''' file.
    * Restart Thunderbird.

  • Transfer very large file Mac to Mac

    Hi,
    I created an iMovie (about 17GB in size) on my ibook G4. Now I want to transfer it to my Mac Mini where I am storing media files. Thus far, my iPod has been large enough for the job, but it is a 15GB version and won't do it.
    I have heard firewire could work, but feel like I read something about "master" and "slave", and I am afraid to try something without more info. I don't want to lose data on either computer until I have finished the transfer.
    Should I use firewire? Should I use an ethernet cable? Is there some obvious way to create a home network (the Mini doesn't have the wireless capability, although my Verizon modem/router does)?
    Any help would be appreciated, my iBook is just about full and I need to get this iMovie outta there!
    Thanks,
    Bryan

    You can do it via Ethernet or Firewire. Firewire will be faster.
    By Ethernet:
    -Connect the two machines with an Ethernet Cable
    -Turn on File Sharing on the Mini.
    -Log onto the Mini from the iBook.
    -Drag and drop the file iBook->Mini.
    By Firewire:
    -Connect the two machines with a Firewire cable.
    -Restart the Mini while holding down the "T" key.
    -The Mini's hard drive will mount on the iBook.
    -Drag and drop the file iBook->Mini.

  • Is it best to upload HD movies from a camcorder to iMovie or iPhoto?  iMovie gives the option for very large file sizes - presumably it is best to use this file size because the HD movie is then at its best quality?

    Is it best to upload an HD movie to iPhoto or iMovie? iMovie seems to store the movie in much larger files; presumably this is preferable, as a larger file means better quality?

    Generally it is, if you're comparing identical compressors and resolutions, but there's something else happening here. If you're worried about quality degrading, check the original file details on the camera (or card), either with Get Info or by opening the file in QuickTime and showing its info. You should find the iPhoto version (reveal it in the Finder) is a straight copy. You can't really increase the image quality of a movie (barring a few tricks) by increasing file size, but Apple's editing products create a more "scrub-able" intermediate file which is quite large.
    Good luck and happy editing.

  • Ok to store large files on desktop?

    Ok to store large files on desktop?
    Are there consequences (running slow) when storing a large files or numerous files on the desktop?
    Sometimes I use the desktop to store photos until I complete a project. There are other files on the desktop that I could remove if they cause the computer to slow down. If not, I'll keep them there because it's easy and convenient.
    I have 4 GB of memory.

    Hi Junk,
    I can't think of any consequences to storing your large files on your desktop. After all, from your system's point of view, your desktop is just another folder. However, there is some system consequence to storing lots of files there, as it will decrease performance. But with 4GB of memory, I'm not sure how much of a difference you might see. And there's a big gap between "I'm storing 15 photos on my desktop" and "I can no longer see my hard disk icon in this mess." If the idea of even potential system slowdowns bothers you, create a work folder and leave it on your desktop to drop all of those photos in. If you're not getting too insane with the amount, though, I'm sure you'll be fine.
    Hope that helps!
    —Hazy

  • Large File Copy fails in KDEmod 4.2.2

    I'm using KDEmod 4.2.2 and I often have to transfer large files on my computer. Files are usually 1-2GB and I have to transfer about 150GB of them. The files are stored on one external 500GB HD and are being transferred to another identical 500GB HD over FireWire. I also have another external FireWire drive from which I transfer about 20-30GB of the same type of 1-2GB files to the internal HD of my laptop over FireWire. If I try to drag and drop in Dolphin, it gets through a few hundred MB of the transfer and then fails. If I use cp in the terminal, the transfer is fine. When I was still distro hopping and using Fedora 10 with KDE 4.2.0, I had this same problem. When I use GNOME this problem is nonexistent. I do this often for work, so it is a very important function to me. All drives are FAT32 and there is no option to change them, as they are used on several different machines/OSs before all is said and done, and the only file system all of the machines will read is FAT32 (thanks to one machine, of course). In many cases time is very important for the transfer, and that is why I prefer to do it in a desktop environment, so I can see progress and ETA. This is a huge deal breaker for KDE and I would like to fix it. Any help is greatly appreciated, and please don't reply "just use Gnome".

    You can use any other file manager under KDE that works, you know? Just disable the option that lets Nautilus take command of your desktop and you should be fine.
    AFAIR the display of the remaining time for a transfer comes at the cost of even more transfer time. And wouldn't some file synchronisation tool work for this task too? (Someone with more knowledge, please tell me if this would be a bad idea.)
