Freeze when writing large amounts of data to iPod through USB

I used to take backups of my PowerBook to my 60 GB iPod video. Backups are taken with tar in Terminal, directly to the mounted iPod volume.
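For context, a backup of this sort is typically a single tar invocation against the mounted volume; the volume name and source path below are placeholders, not the poster's actual setup:

    # Archive the home folder straight onto the mounted iPod (names are illustrative)
    tar -cvf /Volumes/iPod/powerbook-backup.tar ~/Documents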
Now, every time I write a large amount of data to the iPod (from my MacBook Pro), the whole system freezes: the mouse cursor still moves, but nothing else can be done. When the USB cable is pulled out, the system recovers and behaves as it should.
The same iPod works perfectly for backups on the PowerBook, and small amounts of data can easily be written to it from the MacBook Pro without problems.
Does anyone else have the same problem? Any ideas why this happens and how to resolve it?
MacBook Pro, 2.0 GHz, 100 GB 7200 RPM, 1 GB RAM   Mac OS X (10.4.5)   iPod video 60 GB connected through USB

Ex-PC user... never had a problem.
Got a MacBook Pro last week... having the same issues, and this is now with an exchanged machine!
I've read elsewhere that it has something to do with the USB bus timing out, and that if you attach the iPod through a separately powered USB port (a powered hub, for example), it should work. Kind of a bummer, but the folks who tried it say it works.
Me, I can load the iPod piecemeal, manually... but even then, it sometimes freezes.
The good news is that once the iPod is loaded, the problem shouldn't happen; it's the large transfers of data that trigger it.
Apple should DEFINITELY fix this though. Unbelievable.
MacBook Pro 2.0   Mac OS X (10.4.6)  

Similar Messages

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items at a rate of 50 ms, another about 40 items at a 100 ms publishing rate.
    I send a command to a subprogram (125 ms) that reads and publishes the answer on a DSS URL (approx. 125 ms), so that is one item on the DSS for about 250 ms. But this data is not seen in my main GUI window that reads the DSS URL.
    My questions are
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can DSS become unstable if loaded too much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: ”memory.ccp”, line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Professional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated with VISA serial comm. It looks so neat, and it's
    > fantastic what it's supposed to do for a developer, but sometimes one
    > runs into very deep trouble.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Azure Cloud service fails when sent large amount of data

    This is the error;
    Exception in AZURE Call: An error occurred while receiving the HTTP response to http://xxxx.cloudapp.net/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being
    aborted by the server (possibly due to the service shutting down). See server logs for more details.
    Calls with smaller amounts of data work fine. Large amounts of data cause this error.
    How can I fix this??

    Go to the web.config file, look for the <binding> that is being used for your service, and adjust the various parameters that limit the maximum length of the messages, such as
    maxReceivedMessageSize.
    http://msdn.microsoft.com/en-us/library/system.servicemodel.basichttpbinding.maxreceivedmessagesize(v=vs.100).aspx
    Make sure that you specify a size that is large enough to accommodate the amount of data that you are sending (the default is 64 KB).
    Note that even if you set a very large value here, you won't be able to go beyond the maximum request length configured for ASP.NET/IIS (maxRequestLength, whose default is 4 MB).
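    For illustration, the kind of web.config adjustment the reply describes might look like the fragments below (both live under <configuration>); the binding name "LargeDataBinding" and the 10 MB limits are placeholder values, not settings from the original service:

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <!-- raise the WCF message-size cap (bytes) -->
              <binding name="LargeDataBinding"
                       maxReceivedMessageSize="10485760"
                       maxBufferSize="10485760" />
            </basicHttpBinding>
          </bindings>
        </system.serviceModel>
        <!-- raise the ASP.NET request limit; the value is in kilobytes -->
        <system.web>
          <httpRuntime maxRequestLength="10240" />
        </system.web>

    The binding is then attached to the service endpoint via its bindingConfiguration attribute.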

  • Error when exporting large amount of data to Excel from Apex4

    Hi,
    I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet, this is not using a csv file.
    It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    Has anyone else had this problem? We were wondering if it was a parameter in Apex4 that needs to be set.
    We are using Application Express 4.1.1.00.23 on Oracle 11g.
    Any help would be appreciated.
    Thanks
    Sue

    Hi,
    >
    I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet, this is not using a csv file.
    >
    How? Application Builder > Data Workshop? Apex Page Process? (Packaged) procedure?
    >
    It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    >
    Have you changed your webserver in the process? Say moved from OHS to ApexListener?
    >
    Has anyone else had this problem? We were wondering if it was a parameter in Apex4 that needs to be set.
    We are using Application Express 4.1.1.00.23 on Oracle 11g.
    Any help would be appreciated.

  • Power BI performance issue when load large amount of data from database

    I need to load a data set from my database, which has a large amount of data; it takes a long time to initialize the data before I can build a report. Is there a good way to process large amounts of data with Power BI? Since many people analyze data based on Power BI, are there any suggestions for loading large amounts of data from a database?
    Thanks a lot for help

    Hi Ruixue,
    We have made significant performance improvements to Data Load in the February update for the Power BI Designer:
    http://blogs.msdn.com/b/powerbi/archive/2015/02/19/6-new-updates-for-the-power-bi-preview-february-2015.aspx
    Would you be able to try again and let us know if it's still slow? With the latest improvements, it should take between half and one third of the time that it used to.
    Thanks,
    M.

  • Airport Extreme Intermittent Network Interruption when Downloading Large Amounts of Data.

    I've had an AirPort Extreme Base Station for about 2.5 years and have had no problems until the last 6 months.  I have my iMac and a PC directly connected through Ethernet and another PC connected wirelessly.  I occasionally need to download very large data files that max out my download connection speed at about 2.5 Mbps.  During these downloads, my entire network loses its connection to the internet intermittently, for between 2 and 8 seconds at a time, with around 20-30 seconds between connection losses.  This includes the hard-wired machines.  I've tested a download with a direct connection to my cable modem without incident.  The base station is causing the problem.  I've reset the Base Station, with good results at first, but then the problem simply returns after a while.  I've updated the firmware to the latest version with no change.
    Can anyone help me with the cause of the connection loss and a method of preventing it?  THIS IS NOT A WIRELESS PROBLEM.  I believe it has to do with the massive amount of data being handled.  Any help would be appreciated.

    Ok, did some more sniffing around and found this thread.
    https://discussions.apple.com/thread/2508959?start=0&tstart=0
    It seems that the AEBS has had a serious flaw for the last 6 years that Apple has been unable to address adequately.  Here is a portion of the log file.  It simply repeats the same log entries over and over.
    Mar 07 21:25:17  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:25:17  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:26:17  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:26:17  Severity:5  Rotated CCMP group key.
    Mar 07 21:30:43  Severity:5  Rotated CCMP group key.
    Mar 07 21:36:41  Severity:5  Clock synchronized to network time server time.apple.com (adjusted +0 seconds).
    Mar 07 21:55:08  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:08  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:55:32  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:33  Severity:5  Rotated CCMP group key.
    Mar 07 21:59:47  Severity:5  Rotated CCMP group key.
    Mar 07 22:24:53  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:24:53  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 22:25:18  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 22:25:18  Severity:5  Rotated CCMP group key.
    Mar 07 22:30:43  Severity:5  Rotated CCMP group key.
    Mar 07 22:36:42  Severity:5  Clock synchronized to network time server time.apple.com (adjusted -1 seconds).
    Mar 07 22:54:37  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:54:37  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Anyone have any ideas why this is happening?

  • Writing large amount of data to db

    I have set the max cache size of Berkeley DB to 1 GB, but the data that I want to cache might go over 1 GB. I am just wondering how Berkeley DB handles this case. Does it automatically write it to secondary storage, or just throw an exception? I am looking for a configuration that writes the data to secondary storage instead of throwing an error and being unable to write.
    does anybody know?
    Thanks.

    Hi,
    Berkeley DB Java Edition (JE), like most databases, performs best when database objects are found in its cache. However, when database objects can no longer fit within cache, they're evicted to disk; JE is not a strictly in-memory database. The cache eviction algorithm is the way in which JE decides to remove objects from the cache and can be a useful policy to tune. Please see the FAQ item here: http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#35 for how to tune JE's cache management policy for your application's access pattern.
    Let me know if you have further questions.
    Regards,
    Chao
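    For illustration, capping the JE cache in code might look like the minimal sketch below, assuming JE's standard Environment API; the 1 GB figure and the environment path are placeholders:

        import java.io.File;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;

        public class CacheTuning {
            public static void main(String[] args) throws Exception {
                EnvironmentConfig config = new EnvironmentConfig();
                config.setAllowCreate(true);
                // Cap the in-memory cache at 1 GB; when data outgrows it,
                // JE evicts objects to disk rather than failing, as the
                // reply above describes.
                config.setCacheSize(1024L * 1024L * 1024L);
                Environment env = new Environment(new File("/path/to/env"), config);
                // ... open databases and read/write as usual ...
                env.close();
            }
        }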

  • NMH305 dies when copying large amounts of data to it

    I have an NMH305 still set up with the single original 500GB drive.
    I have an old 10/100 3Com rackmount switch (the old white one) uplinked to my Netgear WGR614v7 wireless router.  I had the NAS plugged into the 3Com switch and everything worked flawlessly.  The only problem was that it was only running at 100 Mbps.
    I recently purchased a TRENDnet TEG-S80g 10/100/1000 'green' switch and basically replaced the 3Com with it.  To test the gigabit speeds, I tried a simple drag & drop of about 4 GB worth of pics to the NAS on a mapped drive.  After about 2-3 seconds, the NAS dropped and Explorer said it was no longer accessible.  I could ping it, but the Flash UI was stalled.
    If I waited several minutes, I could access it again.  I logged into the Flash UI and upgraded to the latest firmware, but had the same problem.
    I plugged the NAS directly into the Netgear router and transferred files across the wireless without issue.  I plugged it back into the green switch and it dropped after about 6-10 pics transferred.
    I totally bypassed the switch and plugged the NAS directly into my computer.  I verified I could ping it and log in to the Flash UI, then tried to copy files and it died again.
    It seems to only happen when running at gigabit link speeds.  The max transfer rate I was able to get was about 10 Mbps, but I'm assuming that's limited by the drive write speeds and controllers.
    Anyone ran into this before?
    TIA!

    Hi cougar694u,
    You may check this review "click here". This is a thorough review about the Media Hub's Write and Read throughput vs. File Size - 1000 Mbps LAN.
    Cheers

  • Osx server crashes when copying large amount of data

    OK. I have set up a Mac OS X Server on a G4 Dual 867, configured as a standalone server. The only services running are VPN, AFP, and DNS (I am pretty sure the DNS is set up correctly). I have about 3 FireWire drives and 2 USB 2.0 drives hooked up to it.
    When I try to copy roughly 230 GB from one drive to another, it either just stops in the middle or CRASHES the server! I can't see anything out of the ordinary in the logs, though I am a newbie.
    I am stumped. Could this be hardware related? I just did a complete fresh install of OS X Server!

    This could be most anything: a disk error, a non-compliant device, a FireWire error (I've had FireWire drivers tip over Mac OS X with a kernel panic; if the cable falls out at an inopportune moment when recording in GarageBand, toes up it all goes), or a memory error. This could also be a software error, or a FireWire device (or devices) simply drawing too much power.
    Try different combinations of drives, and replace one or more of these drives with another; start a sequence of elimination targeting the drives.
    Here's what Apple lists about kernel panics as an intro; it's details from the panic log that'll most probably be interesting...
    http://docs.info.apple.com/article.html?artnum=106228
    With some idea of which code is failing, it might be feasible to find a related discussion.
    A recent study out of CERN found three hard disk errors per terabyte of storage, so a clean install is becoming more a game of moving the errors around than actually fixing anything. FWIW.

  • Finder issues when copying large amount of files to external drive

    When copying large amounts of data over FireWire 800, the Finder gives me an error that a file is in use and locks the drive up. I have to force-eject. When I reopen the drive, there are a bunch of 0 KB files sitting in the directory that did not get copied over. This happens on multiple drives. I've attached a screen shot of what things look like when I reopen the drive after forcing an eject. Sometimes I have to relaunch the Finder to get back up and running correctly. I've repaired permissions, for what it's worth.
    10.6.8, by the way, 2.93 GHz 12-core, 48 GB of RAM, fully up to date. This has been happening for a long time; I'm just now trying to find a solution.

    Scott Oliphant wrote:
    Iomega, LaCie, 500 GB, 1 TB, etc.; it seems to be drive-independent. I've formatted and started over with several of the drives and get the same thing. If I copy the files over in smaller chunks (say, 70 GB) as opposed to 600 GB, the problem does not happen. It's like the Finder is holding on to some of the info when it puts its "ghost" on the destination drive before the file is copied over, and keeping the file locked when it tries to write over it.
    This may be a stretch since I have no experience with iomega and no recent experience with LaCie drives, but the different results if transfers are large or small may be a tip-off.
    I ran into something similar with Seagate GoFlex drives and the problem was heat. Virtually none of these drives are ventilated properly (i.e., no fans and not much, if any, air flow) and with extended use, they get really hot and start to generate errors. Seagate's solution is to shut the drive down when not actually in use, which doesn't always play nice with Macs. Your drives may use a different technique for temperature control, or maybe none at all. Relatively small data transfers will allow the drives to recover; very large transfers won't, and to make things worse, as the drive heats up, the transfer rate will often slow down because of the errors. That can be seen if you leave Activity Monitor open and watch the transfer rate over time (a method which Seagate tech support said was worthless because Activity Monitor was unreliable and GoFlex drives had no heat problem).
    If that's what's wrong, there really isn't any solution except using the smaller chunks of data which you've found works.

  • Looking for ideas for transferring large amounts of data between systems

    Hello,
    I am looking for ideas based on best practices for transferring large amounts of data in and out of a NetWeaver-based application.
    We have a new system we are developing in Netweaver that will utilize both the Java and ABAP stack, and will require integration with other SAP and 3rd Party Systems. It is a standalone product that doesn't share any form of data store with other systems.
    We need to be able to support 10s of millions of records of tabular data coming in and out of our system.
    Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface in and out of the system. As it turns out, RFC is not good at dealing with such a large amount of data being pushed through a single call.
    We have considered a number of possible ideas, however we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.

    Primoz wrote: Do you use KDE (Dolphin) 4.6 RC or 4.5?
    Also, I've noticed that if I move/copy things with Dolphin they're substantially slower than if I use cp/mv. But cp/mv works fine for me...
    Also, run Dolphin from a terminal to try and see what the problem is.
    Hope that helps at least a bit.
    Could you explain why Dolphin should be slower? I'm not attacking you, I'm just asking.
    Because I thought that Dolphin is just a "little" wrapper around the cp/mv/cd/ls applications/commands.

  • ERROR MESSAGE WHEN RETRIEVING AND DISPLAYING LARGE AMOUNT OF DATA

    Hello,
    I am querying my database (MySQL) and displaying the data in a
    DataGrid (note that I am using Flex 2.0).
    It works fine when the amount of data populating the grid is
    not much, but when I have a large amount of data I get the following
    error messages and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail: 'Was expecting
    mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail: 'Channel disconnected before and acknowledge was
    received'
    Note that my DataGrid is populated when I run the query on my
    server, but it does not work on my client PCs.
    Your help would be greatly appreciated here.
    Awaiting a reply.
    Regards

    Hello,
    I am using remote object services,
    using a component (ColdFusion as the destination).

  • My phone is using large amounts of data; when I go to System Services, it's my mapping services causing it. What are mapping services and how do I switch them off? I really need help.

    My phone is using large amounts of data. When I then go to System Services, it shows that my mapping services are causing it. What are mapping services, and how do I switch them off? I really need help.

    I have the same problem. I switched off Location Services, maps in data, and whatever else maps could be involved in, and then just last night it chewed 100 MB... I'm also on Vodacom, so I'm seeing a pattern here somehow. Siri was switched on, however, so I've switched it off now and will see what happens. But I'm going to go into both Apple and Vodacom this afternoon, because this must be sorted out; it's a serious issue we have on our hands, and some uproar needs to be made against those responsible!

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use a CachedRowSet on the page's bean for getting the data. This works very well in my development environment at the moment, but I am only testing on a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering: apart from that instance, when viewing in paged mode, does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has to do with the logic of the paging. How do you specify which set of 20 records to extract in SQL?
    Thanks for your help!!
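    For what it's worth, one common pattern on Oracle for fetching just one page of rows (here, records 21-40; the table and column names are illustrative, not from the original application) is a nested ROWNUM query:

        SELECT *
          FROM (SELECT t.*, ROWNUM rn
                  FROM (SELECT id, name FROM customers ORDER BY id) t
                 WHERE ROWNUM <= 40)
         WHERE rn > 20;

    The inner query fixes the ordering, the middle one caps the scan at the last row of the page, and the outer filter drops the rows before the page starts.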
