Numbers '09 is slow dealing with relatively large files

As some of you might know, Excel 2008 is painfully slow when opening relatively large files (about 12,000 rows and 28 columns) that have simple 2D x-y charts. FYI, Excel 2004 doesn't have the same problem (and Excel 2003 in the XP world is even better). I purchased iWork '09 hoping that Numbers '09 would help, but unfortunately I have the same problem. iWork '09 takes more than 5 minutes to open the file - something the older versions of Excel could do in seconds. Once the file opens, it is impossible to manipulate. I have a MacBook with a 2.4 GHz Intel Core 2 Duo and 4 GB of RAM running OS X (version 10.5.6).
Has anybody else experienced the same problem? If so, is there a bug in iWork '09, or is it just not meant to deal with large files?
I appreciate your response.

Numbers '08 was very slow.
Numbers '09 is not as slow, but it still doesn't run like old-fashioned spreadsheets.
I continue to think that it's a side effect of the use of XML to describe the document.
We may hope that the developers will discover programming tricks to speed up the beast.
Once again, I really don't understand why users buy an application before testing it with the FREE 30-day demo available from Apple's Web page.
Yvan KOENIG (from FRANCE dimanche 8 mars 2009 13:13:18)

Similar Messages

  • How to deal with multiple source files having the same filename...?

    Ahoi again.
    I'm currently trying to make a package for the recent version of subversive for Eclipse Ganymede and I'm almost finished.
    Some time ago the svn.connector components have been split from the official subversive distribution and have to be packed/packaged extra. And here is where my problem arises.
    The svn.connector consists (among other things) of two files which are named the same:
    http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/features/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
    http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/plugins/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
    At the moment makepkg downloads the first one, looks at its cache, and thinks that it already has the second file, too, because it has the same name. As a result, I can neither fetch both files nor use both of them in the build()-function...
    Are there currently any mechanisms in makepkg to deal with multiple source files having the same name?
    The only solution I see at the moment would be to only include the first file in the source array, install it in the build()-function and then manually download the second one via wget and install it after that (AKA Quick & Dirty).
    But of course I would prefer a nicer solution to this problem if possible. ^^
    TIA!
    G_Syme

    Allan wrote: I think you should file a bug report asking for a way to deal with this (but I'm not sure how to fix this at the moment...)
    OK, I've filed a bug report and have also included a suggestion how to solve this problem.

  • Can SQL*Plus deal with 'flat ASCII files' (input) in UNIX? And how?

    Can SQL*Plus deal with 'flat ASCII files' (input) in UNIX? And how?

    No, but PL/SQL can. Look at utl_file.
    John Alexander www.summitsoftwaredesign.com

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea being that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copy column, eliminate rows, perform some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amounts of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would need to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
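On the memory point: if every operation can be expressed row by row, the file never has to fit in memory at all. Here is a minimal Java sketch of that streaming idea (the class and method names are made up for illustration, not from any library) that rewrites a CSV while applying a per-row operation, so only one row is ever held at a time:

```java
import java.io.*;
import java.util.function.Consumer;

// Streams a large CSV through a per-row transformation instead of
// loading the whole file into an array.
public class CsvStreamer {
    public static void transform(File in, File out, Consumer<String[]> rowOp) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader(in));
             PrintWriter w = new PrintWriter(new BufferedWriter(new FileWriter(out)))) {
            String line;
            while ((line = r.readLine()) != null) {
                String[] cells = line.split(",", -1); // -1 keeps empty trailing cells
                rowOp.accept(cells);                  // mutate the row in place
                w.println(String.join(",", cells));
            }
        }
    }
}
```

For example, `CsvStreamer.transform(in, out, row -> row[0] = String.valueOf(Integer.parseInt(row[0]) * 2))` doubles the first column. Whole-column aggregates or sorts would still need a second pass or, as suggested above, a relational database.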

  • Rtorrent: issue with DL large files (>4GB) to NTFS

    Using latest rtorrent/rutorrent:  every time I DL a large >4GB file with rtorrent to the NTFS drive it shows it downloading the whole file MB by MB, but when I go to hash check (via rutorrent), there's only a partial percentage DLded.  Say if I DL a 4.36 GB .mkv file, I hash check and only 10% is done ~400MB or about 6 minutes of the video.
    Oddly:
    If I do ls -l --block-size=MB, the file shows normal 4GB+ size.
    If I do ls -s, file appears to be only a few hundred MB.
    If I DL to my root ext4 drive, there's no issue unless I change the save path of the torrent in rutorrent and elect for the files to be moved to the NTFS drive.
    I've transferred large files with 'cp' from another NTFS to this NTFS with no issue.
    I thought the problem was rutorrent plugin autotools, but I removed it from my plugins folder and the problem persists.
    Permissions:
    I have all the relevant directories in /etc/php.ini open_basedir:  the user/session, the mounted drive, and /srv/http/rutorrent
    I did #chown -R http:http /srv/http/rutorrent
    http is a member of the group with NTFS drive access
    the rutorrent/tmp directory is changed to be within /srv/http/rutorrent
    This is a pesky issue that I didn't have with my last arch install using the same general set up.
    I DL to an NTFS formatted drive and mount it the same way I did before: ntfs-3g defaults,auto,uid=XXXX,gid=XXXX,dmask=027,fmask=037
    My rtorrent user is the uid (owner) and is in the group that has access to the drive (along with my audio server user and http)
    I run rtorrent in screen as the rtorrent user
    I imagine this is an issue with rutorrent?
    Any tips before I reformat the whole 4TB to ext4?
    EDIT:  the issue is definitely isolated to rtorrent.  I manually added large size torrent using rtorrent, it completed.  I then hash checked (in rtorrent) and again only ~10% was shown as complete.
    EDIT2:  It is most definitely not a permissions issue.  Tried this again without mount permissions options and the same thing happens.
    Last edited by beerhoof (2015-01-30 22:05:57)

    I'm afraid I don't understand the question.
    7.2 now correctly parses the Canon XF .CIF sidecar files to determine whether the media is supposed to be spanned or not.  This has been a feature request that has been finally addressed to work correctly.
    (It also was there in 7.1 & previous, but had limitations: the performance wasn't as good, there had been issues in the past with audio pops at cut points, and it required that the Canon XF folder structure remain intact, i.e. if you copied the media to a flattened folder structure, it would fail to do the spanning correctly.)
    If you are looking for a means to disable the automatic spanning, simply removing the .CIF files will achieve that, although I'm not sure I understand why you're looking to do that. Most people *want* spanning to happen automatically; otherwise you're forced to manually sync spanned media segments by hand.

  • Could java deal with shortcut/link files ?

    Hi all,
    We often use shortcut (.lnk) or link files on Windows or Unix.
    Can a Java class deal with these files directly? That is, can it get
    the target file of a shortcut/link file, so that we can use the target file exactly the same as any other normal file?
    Any comments and help are welcome. Thanks.
    -GeorgeZ.

    Be aware that MS .lnk files are extremely different from what you use in Unix. In Unix, the OS resolves links automatically, and the application never even knows they are there. In Windows, this is absolutely not the case, and it takes a bit of work to get the Windows shell to tell you where the link is pointing.
    If you absolutely have to dereference .lnk files in Java you'll need to use JNI. The JNI interface will be the absolute easiest part of this, though. Getting a resolved .lnk in C++ is a major pain in the neck (about 50 lines of code).
    - K
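A side note on the Unix half: since Java 7, the java.nio.file API (which did not exist when this thread was written) can inspect and resolve symbolic links without JNI; Windows .lnk shortcuts still look like ordinary files to it. A minimal sketch, with illustrative class and method names:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkInspector {
    // Follows any chain of symlinks and returns the canonical target path.
    public static Path resolve(Path p) throws IOException {
        return p.toRealPath();
    }

    // Reads the raw target stored inside the symlink itself
    // (throws NotLinkException if the path is not a symlink).
    public static Path rawTarget(Path link) throws IOException {
        return Files.readSymbolicLink(link);
    }
}
```

On Windows, both methods treat a .lnk file as a plain file; dereferencing it still requires native shell access as the reply above describes.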

  • Help with download large file (~50mb) from http protocol

    I'm just using the default HttpURLConnection to download a large file from a server, I can never finish downloading before I get this exception:
    java.net.SocketException: Connection reset
         at java.net.SocketInputStream.read(SocketInputStream.java:168)
         at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
         at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
         at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
         at sun.net.www.MeteredStream.read(MeteredStream.java:116)
         at java.io.FilterInputStream.read(FilterInputStream.java:116)
         at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2446)
         at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2441)
         ...
    The file is on an IIS web server; it's a static file, not a server script, web service, or anything special. I can usually download 10-30 MB before this exception occurs. I'm using Java 1.6.0_11 on Windows XP. Does anyone have suggestions or experience downloading large files with Java?

    Thank you, everyone, for your suggestions. I tried wget and it worked fine, but a couple of other clients I tried were failing too. I assume wget succeeded because it has resume capabilities, so I debugged the server and found the issue: it was a connection problem on the server end.
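For anyone hitting the same reset, the wget-style workaround can be reproduced on top of the same HttpURLConnection the original poster used: retry with an HTTP Range header so the download resumes from the bytes already on disk. A minimal, hedged sketch (the class name is illustrative, and the server must honor Range requests):

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;

// Downloads a file, resuming from the current file length after a
// connection reset, up to maxAttempts tries.
public class ResumingDownloader {
    public static void download(URL url, File dest, int maxAttempts) throws IOException {
        for (int attempt = 1; ; attempt++) {
            long have = dest.exists() ? dest.length() : 0;
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            if (have > 0) conn.setRequestProperty("Range", "bytes=" + have + "-");
            try {
                // If the server ignored the Range header (no 206), restart from byte 0.
                boolean resuming = have > 0
                        && conn.getResponseCode() == HttpURLConnection.HTTP_PARTIAL;
                try (InputStream in = new BufferedInputStream(conn.getInputStream());
                     OutputStream out = new BufferedOutputStream(new FileOutputStream(dest, resuming))) {
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
                }
                return; // completed without a reset
            } catch (IOException e) {
                if (attempt >= maxAttempts) throw e; // give up after maxAttempts tries
            } finally {
                conn.disconnect();
            }
        }
    }
}
```

This is essentially what download managers do; it would not have fixed the server-side fault found here, but it makes the client robust to transient resets.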

  • Deal with missing / renamed files via COM interface

    Hello,
    I'm trying to find some way to programmatically identify missing files in iTunes, including where iTunes is looking for the file, and if possible to point iTunes to the correct path. It's very straightforward to detect missing files, the original path could probably be obtained by analysing the iTunes xml file, but I can find no programmatic way of telling iTunes where to look. Simply deleting the missing file and then adding a new file from the new location is not acceptable.
    The reason for wanting to do this is very simple. I've a custom written backend music library management system that presents the library to iTunes as a set of .m4a files on a read only network share. From time to time, new files will appear, existing files will be updated or renamed, and files may be deleted. What I'd like to do is to write some simple vbscript that will keep iTunes in sync with this set of files.
    Most of the functionality is easy. The difficulty comes with dealing with renamed files. I can identify missing files by looking for those with a null location. If I can then find out what the file was previously called, I can quite readily determine what it will have been renamed to. At this point, however, I can find no way to update iTunes. As previously mentioned, just removing and re-adding the file isn't acceptable, as it will then disappear from any devices synced with iTunes.
    Does anyone have any clever ideas how to achieve this? The best idea I've come up with is to brute-force the .itl file and deal directly with that. I'd really rather not, though; it'll be a lot of effort and may well spontaneously stop working. I'd also like to achieve the same effect on the Mac platform, probably with AppleScript, but am having basically the same issues.
    Regards,
    Chris

    Christopher,
    I, too, wish this could be done. Unfortunately it looks like Apple does not currently have this capability (i.e. the Location property is read-only). I've submitted an enhancement request for this feature (as I'm sure many others have), but this appears to be by design. It appears you have already figured out the only workaround I'm aware of -- adding the file again (and optionally copying the properties from the old file to the new one).
    But by all means, please submit a bug report at http://bugreport.apple.com. Maybe if enough people gripe it will get implemented. You'll need an ADC account to submit a report via that link. If you don't have an ADC account already, you can get a free "ADC Online" account (or, of course, buy a higher-level account if you want).
    If you don't want to get an ADC account, you can use the Apple feedback page (http://www.apple.com/feedback) -- but I think you'll get a better response as a developer via the bugreport site.
    BTW, if you find good documentation on the iTunes Library file, I'd like to see it! I can't find any anywhere. Of course, if you figure it out and you don't want to share that's up to you...

  • How to Deal with a Large Form

    Hello all. I have created a very large form using LiveCycle, which utilizes quite a bit of scripting. Unfortunately, as more and more data is added, the form becomes slower and slower, to the point where a user spends more time waiting than actually filling out the form. This is clearly unacceptable, so I'm looking to remedy it. One thought I had was to split the form up into several sections (the format of the form allows this without trouble), but I have a few concerns:
    First, for the people processing the form on the receiving end, processing several forms instead of one is several times as much work and several times as much of a hassle.
    Second, it is clearly less convenient to distribute several forms than one, so if I were to do this I would be looking for a way to bundle them together (perhaps a PDF Portfolio?).
    Third, for ease of use I would want some way for a user to simply click a link (or button or whatever) to move on to the second form after finishing the first.
    If there were some way to combine the separate parts back together after they had been filled out and submit them as one entity, that would seem to solve the first problem, but I have no idea how one might go about doing that. As I mentioned, perhaps a PDF Portfolio is the solution to my second concern, but I've never worked with those and don't know if they would be suitable. Of course, if there were a way to speed up this form directly, that would solve all of these problems in one fell swoop. If anyone is willing to take a look at this form I'd be glad to e-mail it to them (I don't feel comfortable posting it in a public location at the moment).
    Thank you all very much.
    ===========================================
    Update: I just read a different post below about the "remerge" command causing a form to slow down. I used xfa.form.recalculate(1) all over the place in my form. Is it possible this is causing the slowdown?

    Send the form to LiveCycle8@gmail.com and I will have a look when I get a chance.
    Paul

  • Dealing with KeyStore larger than available memory.

    Hi,
    I'm manipulating a JCEKS/SunJCE KeyStore that has grown larger than available memory on the machine. I need to be able to quickly lookup/create/and sometimes delete keys. Splitting into multiple KeyStores and loading/unloading based on which one the particular request needs isn't ideal.
    Can anyone recommend a file backed KeyStore that doesn't depend on loading the entire file into memory to work with the KeyStore? Or perhaps a different way of using the existing framework?
    Thanks,
    Niall

    You might check the different providers (and ask their developers) to see if you can find one; they would have to use BER encoding rather than DER encoding for the ASN.1 structures. In that case the provider is able to read entries and parse through to the target entry on demand, but you will have a "pile" version, which will make your performance pay for it. If somebody offers that, there should be some caching and enhancements in the KeyStore implementation so it doesn't choke on random searches.
    Start your tests with the Bouncy Castle provider, but I remember that, in 2001, certificates generated by the security provider of jcsi (later Wedgetail and a part of Quest [Vintela]) were BER encoded. That does not necessarily mean they use BER for all constructs now, and it also does not mean that partial load is supported by their KeyStore implementation.
    Finally, if none matches your needs, you can write a security provider yourself: read the current keystore once (you hopefully have the passwords for all entries), write the entries to a new keystore file in BER format, then write the logic (probably with caching) to offer transparent partial load in your KeyStore implementation. Drop me a line if you need more details or commercial consulting services on this.
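For reference, the splitting workaround the original poster hoped to avoid takes only a few lines with the standard KeyStore API: spread entries across N smaller stores by alias hash, so a lookup only ever loads one shard. A hedged sketch (class name is illustrative; it assumes one password protects the store and every key entry, which is not true of all keystores):

```java
import java.security.KeyStore;
import java.util.Enumeration;

public class KeyStoreSharder {
    public static KeyStore[] shard(KeyStore src, int n, char[] password) throws Exception {
        KeyStore[] shards = new KeyStore[n];
        for (int i = 0; i < n; i++) {
            shards[i] = KeyStore.getInstance("JCEKS");
            shards[i].load(null, password); // initialize an empty in-memory store
        }
        KeyStore.PasswordProtection prot = new KeyStore.PasswordProtection(password);
        Enumeration<String> aliases = src.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            // Trusted-certificate entries take no protection parameter.
            KeyStore.ProtectionParameter p = src.isCertificateEntry(alias) ? null : prot;
            int bucket = Math.floorMod(alias.hashCode(), n);
            shards[bucket].setEntry(alias, src.getEntry(alias, p), p);
        }
        return shards;
    }
}
```

Each shard can then be persisted with `store(...)`, and a lookup for `alias` only has to load shard `Math.floorMod(alias.hashCode(), n)`. It is exactly the load/unload scheme dismissed above as "not ideal", but it works with the existing framework.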

  • How to deal with a large project (perhaps using the Daisy Trail Approach)?

    Hi,
    My initial problem was that I needed the screen to pause so that the user could interact with the spry menus and image maps. In other threads, I noticed that you added click boxes to prevent the screen from moving on. However, whenever I clicked on the spry menus or image maps, the screen moved on without giving me the chance to interact more on the screen. Furthermore, the screen faded away to white. I decided to use the timeline and make each slide last for at least 60 seconds. However, whenever I tried to do this for the entire project, it crashed repeatedly. The project has become too big - it is 164 slides.
    I read in another thread that you can break the project up using the 'Daisy Trail' approach. When one chunk of the project finishes, you can execute another swf file to open, and so on until they all open in sequence. I am wondering, how does this approach work exactly, and will I still be able to use spry menus, i.e. will I still be able to insert links to the other parts of the project? Or does this approach mean that I must create a web page in order to add the links to these separate swf files, and will this also make my spry menus and design defunct?
    Any help on this would be greatly appreciated.
    Many thanks.

    Hi,
    If I can prevent the screen from fading away whenever it pauses, I think that would be a good start (as you suggested). I tried this before, having read other threads but am having the same problem - it still fades away. I have done the following:
    In Preferences, Defaults, I have commanded the objects to display for rest of slide, and for the effect to have no transition.
    In Preferences, Start and End, I have deselected the fade on first and end slides.
    When I right-clicked on the actual slide, and tried to select Transition, No Transition, nothing happened. It wasn't even possible to select this option or the others from this menu. No tick was shown to show that it was selected.
    Is there anything else I can do, or something I'm doing wrong? Thanks for any help!

  • AVCHD: how to deal with the .MTS files in FCP 5 or iMovie 08

    I just bought a Canon HF10 AVCHD camcorder and stumbled into the difficult editing process with FCP 5. Here are my steps:
    - Conversion of the camcorder .MTS files with iMovie '08; they are then transformed into a .MOV sequence. A 14-second sequence recorded in the highest quality (FXP, 17 Mbps) is captured as a 28 MB .MTS file by the camcorder, rendered as a 231 MB .MOV file by iMovie '08, and it took 53 seconds to process!
    - I export the sequence as: Share > Export Final Cut XML…
    - In FCP 5: import XML… then I tried different modes: AIC 1080i60 (image is overstretched), AIC 720p30 (proportions are OK, but I captured in 1080…), AIC DV NTSC 48kHz (not HD…), BUT none of the resulting sequences is rendered; I still have the red bar above the timeline.
    For financial reasons, I’d like to avoid going to FCP 6, which seems to work natively with the AVCHD format.
    My questions are:
    - is there a way to reduce the size of the iMovie-converted files (900 MB per minute)?
    - is there a way not to have to render the imported files in FCP?
    Can anyone help me?

    I have the same camera and the reality is you're going to go through that translation from AVCHD to some sort of Quicktime file format no matter which tool you use. I do it with FCP6 and it needs to do the same translation from the .mts files to (they recommend) ProRes files. AVCHD is a highly compressed format and when you move it to an editable form, the file(s) is going to get much larger. It is HiDef content after all which can be quite large.
    In my case, once I do the transfer to ProRes format nothing needs to be rendered in the FCP timeline unless I add some sort of filter or effect to it.
    The rendering problem you are describing is often caused by the Sequence setting not matching the content settings. This is automated in FCP6 where it detects the content format of the video you're trying to add and changes the sequence settings to match the content. Take a look at the sequence settings and see if they match the structure of the content and that should resolve the 'everything needs to be rendered' situation.

  • What's the deal with backing up files to a CD-RW?

    I must be having a dummy problem; the help files seem to be sending me in circles. To back up individual files to a CD-RW in Windows, all I did was drag the file to the CD-RW, and I could later easily overwrite the file just by dragging the newer version in. It seems like a big project in OS X, ending with the CD being closed and the file not allowed to be overwritten. Can anyone point me to a link on how to do this? sheesh...
    thanks,
    --h

    Thank you. About four hours after I wrote that I remembered in XP I need a third party app to do that and was wondering if that might be the deal.
    --h

  • Slowness interacting with CS5.5 file the deeper into the file you go

    I have a rather large CS5.5 document that doesn't initially start slow; but as I scroll deeper into the file, the cpu usage will spike and cause ID to become unresponsive. The technical specs of the iMac that ID runs on are well above what is required by ID and every other document opened in ID runs just fine. One factor that may play into this situation was that the original file was created using CS3 and opened directly into CS5.5. Unfortunately the original CS3 file no longer exists.
    I've tried breaking the file into smaller 80-page chunks, but the problem persists. All links are in working order. Any ideas?

    ashmuehlba wrote:
    One factor that may play into this situation was that the original file was created using CS3 and opened directly into CS5.5. Unfortunately the original CS3 file no longer exists.
    That's potentially problematic, especially if this file behaves differently from other similar files created in CS5.5. I'd try an export to .idml and see if it helps: Remove minor corruption by exporting

  • How to deal with multiple mail files?

    Here's the story; it's sad but true: After a failed iSync spirited away all of my emails, the Apple Care techie I spoke to a month ago wasn't able to fully resolve the problem. So, currently in my Home Library folder I have 2 Mail files: #1 is labeled "Mail" and #2 is "Mail Copy."
    "Mail" has one sub-folder called "Mailboxes" and four other folders for each of my four POP email accounts (there are also a number of other files that I'm ignoring for the moment, like Envelope Index, Signatures.plist, etc.). All mail from the past 4 weeks is in the folder labeled "Mail." But...
    The other folder in my home library--"Mail Copy"--contains 3 POP email accounts and each one has sub-folders that contain mailboxes with my emails that go back many months. I want to keep the email in both folders, "Mail" and "Mail Copy."
    Problem: How to get the older messages out of "Mail Copy" and back into the current "Mail" folder? Clearly, it would be a mistake to simply drag them from one location to the other, since the "Contents" files and ".plist" files wouldn't match.
    I would be extremely grateful for any guidance the Mail authorities who contribute to this forum might be able to give.
    Also, for the life of me, I cannot figure out where the emails actually reside. The "Mail" folder contains a subfolder entitled "Mailboxes" and it contains subfolders for each mailbox like "Action.mbox" and "Hold.mbox." Then, inside each .mbox folder are other files and usually one folder. Most of these files have names like "content_index," "Info.plist," "mbox," and "tableofcontents." And there is a folder labeled "Messages" which contains numerous ".emlx" files and one "tableofcontents" file. My confusion begins when I open a particular POP account folder and find .mbox folders also located there, and those folders also contain .emlx files. I've searched the entire knowledge base, and I find no explanation anywhere of which files/folders are located where and why.
    Many thanks for your time and trouble, Jay

    The easiest way to combine the old folders with the new is to use the "Import" option in Mail's File menu. Consult the Mail Help topic "Importing email into Mail" for comprehensive instructions for doing that. In your case, the "archived" messages will be the ones in "Mail Copy."
    The actual messages are stored in individual *.emlx files, inside each mailbox folder's "Messages" folder. (For instance, you would find a message in a locally stored, root level mailbox "Action" at path ~/Library/Mail/Mailboxes/Action.mbox/Messages/xxx.emlx, where "xxx" is a number.)
    Some of the files you mention are leftovers from Mail 1.x & are not needed with Mail 2.x. They won't do any harm if left in place, but you can remove them if you wish. (I suggest making a backup of the entire ~/Library/Mail/ folder beforehand, in case something you need accidentally gets deleted.)
    These files are not needed by Mail 2.x: "content_index," all "tableofcontents" files, & files named "mbox." All folders with names ending in ".mbox" are needed, since they contain the .emlx files. You may also find folders in ~/Library/Mail/Mailboxes/ with no ".mbox" extension paired with folders of the same name with the ".mbox" extension (like "Hold" & "Hold.mbox") -- do not delete any of these, as they contain "Messages" folders as well.
