DW MX 2004 Slow with large files?

I'm using DW MX 2004 on a static website with a few thousand files.
It's very slow when opening multiple files at the same time or when opening a large HTML file.
But DW4 is fine, nice and quick compared with DW MX 2004.
Is there any way to help DW MX 2004 work better with these files?
Many thanks, Craig.

Many thanks, but I'm already running the 7.0.1 update.
"Randy Edmunds" <[email protected]> wrote in
message
news:eggcl0$6ke$[email protected]..
> Be sure that you've installed the 7.0.1 updater. There
were a few
> performance fixes, especially on the Mac, that may help
your workflow.
>
> HTH,
> Randy
>
>
>> I'm using DW MX 2004 with a static website with a
few thousand files.
>> It's very slow when opening multiple files at the
same time or
>> opening a large html file.
>> But DW4 is fine, nice and quick compared with DW MX
2004.
>>
>> Is there any way to help DW MX 2004 work better with
these files?

Similar Messages

  • Slow with large file

    We are using Spry to build applications that show lots of old
    pictures and the stories about them for local historical societies
    here in Sweden. To secure the information for the future we save it
    in XML. We have one application with 2900 pictures, but it is slow
    when changing categories. Look at
    http://www.sockenbilder.se/forsa/
    Help please! Jan-Åke

    Hi,
    I've looked at your application.
    One thing I've noticed is that you have disabled the cache for the XMLDataSets. Is this really intended in your case? (I mean, is the XML file changing very often?) When using such large XML files, enabling the cache is recommended in order to improve the speed of the application.
    If setting { useCache: true } is really not an acceptable solution for you, maybe you should try breaking your XML file up into separate files for each photo category (just like in our demo).
    Hope this helps,
    Best regards,
    Dragos

  • Photoshop CS6 keeps freezing when I work with large files

    I've had problems with Photoshop CS6 freezing on me and giving me RAM and Scratch Disk alerts/warnings ever since I upgraded to Windows 8. This usually only happens when I work with large files; however, once I work with a large file, I can't seem to work with any file at all that day. Today, however, I received my first error in which Photoshop says that it has stopped working. I thought that if I posted this event info about the error, it might help someone trying to help me. The log info is as follows:
    General info
    Faulting application name: Photoshop.exe, version: 13.1.2.0, time stamp: 0x50e86403
    Faulting module name: KERNELBASE.dll, version: 6.2.9200.16451, time stamp: 0x50988950
    Exception code: 0xe06d7363
    Fault offset: 0x00014b32
    Faulting process id: 0x1834
    Faulting application start time: 0x01ce6664ee6acc59
    Faulting application path: C:\Program Files (x86)\Adobe\Adobe Photoshop CS6\Photoshop.exe
    Faulting module path: C:\Windows\SYSTEM32\KERNELBASE.dll
    Report Id: 2e5de768-d259-11e2-be86-742f68828cd0
    Faulting package full name:
    Faulting package-relative application ID:
    I really hope to hear from someone soon, my job requires me to work with Photoshop every day and I run into errors and bugs almost constantly and all of the help I've received so far from people in my office doesn't seem to make much difference at all.  I'll be checking in regularly, so if you need any further details or need me to elaborate on anything, I should be able to get back to you fairly quickly.
    Thank you.

    Here you go Conroy.  These are probably a mess after various attempts at getting help.

  • Wpg_docload fails with "large" files

    Hi people,
    I have an application that allows the user to query and download files stored in an external application server that exposes its functionality via webservices. There's a lot of overhead involved:
    1. The user queries the file from the application and gets a link that allows her to download the file. She clicks on it.
    2. Oracle submits a request to the webservice and gets a XML response back. One of the elements of the XML response is an embedded XML document itself, and one of its elements is the file, encoded in base64.
    3. The embedded XML document is extracted from the response, and the contents of the file are stored into a CLOB.
    4. The CLOB is converted into a BLOB.
    5. The BLOB is pushed to the client.
    Problem is, it only works with "small" files, less than 50 KB. With "large" files (more than 50 KB), the user clicks on the download link and about one second later gets a:
    The requested URL /apex/SCHEMA.GET_FILE was not found on this server
    When I run the webservice outside Oracle, it works fine. I suppose it has to do with PGA/SGA tuning.
    It looks a lot like the problem described at this Ask Tom question.
    Here's my slightly modified code (XMLRPC_API is based on Jason Straub's excellent [Flexible Web Service API|http://jastraub.blogspot.com/2008/06/flexible-web-service-api.html]):
    CREATE OR REPLACE PROCEDURE get_file ( p_file_id IN NUMBER )
    IS
        l_url                  VARCHAR2( 255 );
        l_envelope             CLOB;
        l_xml                  XMLTYPE;
        l_xml_cooked           XMLTYPE;
        l_val                  CLOB;
        l_length               NUMBER;
        l_filename             VARCHAR2( 2000 );
        l_filename_with_path   VARCHAR2( 2000 );
        l_file_blob            BLOB;
    BEGIN
        SELECT FILENAME, FILENAME_WITH_PATH
          INTO l_filename, l_filename_with_path
          FROM MY_FILES
         WHERE FILE_ID = p_file_id;
        l_envelope := q'!<?xml version="1.0"?>!';
        l_envelope := l_envelope || '<methodCall>';
        l_envelope := l_envelope || '<methodName>getfile</methodName>';
        l_envelope := l_envelope || '<params>';
        l_envelope := l_envelope || '<param>';
        l_envelope := l_envelope || '<value><string>' || l_filename_with_path || '</string></value>';
        l_envelope := l_envelope || '</param>';
        l_envelope := l_envelope || '</params>';
        l_envelope := l_envelope || '</methodCall>';
        l_url := 'http://127.0.0.1/ws/xmlrpc_server.php';
        -- Download XML response from webservice. The file content is in an embedded XML document encoded in base64
        l_xml := XMLRPC_API.make_request( p_url      => l_url,
                                          p_envelope => l_envelope );
        -- Extract the embedded XML document from the XML response into a CLOB
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/methodResponse/params/param/value/string/text()').getclobval(), 1 );
        -- Make a XML document out of the extracted CLOB
        l_xml := xmltype.createxml( l_val );
        -- Get the actual content of the file from the XML
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/downloadResult/contents/text()').getclobval(), 1 );
        -- Convert from CLOB to BLOB
        l_file_blob := XMLRPC_API.clobbase642blob( l_val );
        -- Figure out how big the file is
        l_length    := DBMS_LOB.getlength( l_file_blob );
        -- Push the file to the client
        owa_util.mime_header( 'application/octet', FALSE );
        htp.p( 'Content-length: ' || l_length );
        htp.p( 'Content-Disposition: attachment;filename="' || l_filename || '"' );
        owa_util.http_header_close;
        wpg_docload.download_file( l_file_blob );
    END get_file;
    /
    I'm running XE, PGA is 200 MB, SGA is 800 MB. Any ideas?
    Regards,
    Georger

    Script: http://www.indesignsecrets.com/downloads/MultiPageImporter2.5JJB.jsx.zip
    It works great for files up to ~400 pages; when I have more pages than that, I get the crash at around page 332.
    Thanks

  • Slow outgoing mail with large file

    I have a rather large file that's trying to go through the outbox.
    I can't seem to stop it!
    I'm running OS X Lion using iCloud / Safari.
    Looking for advice.

    Try clicking the Window menu item in Mail, then select Activity. You should see the progress of the file upload; click on the red stop sign.

  • Probs with large files in adapter engine

    Hi all,
    in a scenario I'm currently working on, I have to process files (from the file adapter). These files range from 1 KB up to 15 MB, so I set up the tuning parameters so that files larger than 2 MB are queued in the special queue for large messages. This works fine: small messages are processed directly and the larger messages are queued and processed in sequence by the integration engine.
    My problem is now at the point where the integration engine sends these large files to the adapter engine. There are always 4 messages sent in parallel to the adapter engine, and this slows down the whole system due to an additional self-developed module.
    So my question is: can I restrict the sending of messages from the integration engine to the adapter engine to only 1 at a time?
    The time of processing is not important. I think I can handle this with EOIO, but is there another way? Perhaps something depending on the queue?
    Thanks
    Olli

    Hi William,
    thanks for your reply.
    Yes, I think that is the easiest way. But the problem is that if a message fails, the others will not be processed until the failure is fixed or processing is cancelled.
    So I was hoping to find a solution where I can restrict the sending to the adapter engine to one message at a time without affecting the processing of the other messages, as EOIO does.
    Regards
    Olli

  • ITunes Extremely Slow with large library

    I recently inherited a huge amount of music (36,000 tracks / 211 GB) which I keep on a Time Capsule used as an external drive, connected to an AirPort Express which I use as a bridge to my home stereo. iTunes is pointed at the TC as the music source and there is no music content on my MacBook's HD, only the iTunes library itself and the cover art. I believe I have the standard RAM (black MacBook, purchased June 2007, which I think is 1 GB).
    iTunes is now extremely slow in Cover Flow, I get constant spinning balls and it takes quite a while for the program to close. Do I need more RAM (I think there is a 2 GB max for my MacBook)? Is 36,000 tracks more than iTunes was designed to handle? Any chance storing the content on an external drive via WiFi affects the iTunes library response?

    I haven't had experience with newer Intel macs with this issue, so I'm a bit hesitant to try to answer, but here goes. This may not at ALL apply to what you're dealing with.
    The problem of incredibly slow behavior with large libraries in iTunes has been an issue since at LEAST 7.0 if not much earlier. I have over 130,000 tracks in my library, stored on a WD 1TB studio drive connected to my "home media server - a G4 Cube" via firewire and shared over ethernet to 3 macs. Do a search for "slow large library" in these forums and you will see NUMEROUS threads all complaining about various issues people seem to have when the number of tracks gets into the tens of thousands, much less over 100k.
    In any case, there are some things I've found that seem to help:
    1. Keep the actual library (database) files on the boot drive. If you have the music files on a separate drive, it helps to keep the iTunes Library folder on the internal drive with the cover art, where it can be accessed faster. This is the database (xml) file that iTunes loads and saves when changes are made to the tags or tracks.
    2. Also, if you haven't got a lot of playlists to worry about saving (I don't use playlists much, just play CDs out of the browser mostly, or have some smart playlists), it helps to rebuild the iTunes library from scratch at some point. I have no reason to believe this, but I assume it creates a new, optimized database this way. (Maybe someone can correct me on this: after, say, months of retagging tracks, playing music, creating and deleting playlists, etc., does the database end up 'messier', or does it correctly rewrite everything and optimize constantly or when you quit iTunes?) What I DO know is that if you shorten the path to the music, the library will be a smaller file. So don't make it "My Firewire Drive/Music Files/iTunes Files/CDs and MP3s/By Artists/*"; this gets written as the path for EACH track in the library file that it then needs to load. Make it "Music/Artists/*" or something as short as you can. Then, what I did was to set iTunes NOT to copy music to the library folder and, starting with a blank iTunes, dragged the icon of my shared music folder containing all the MP3 files into the iTunes window. It took overnight and the better part of the next day to read in all 130k files, copy the embedded cover art to the local folder and, the longest part, "determine gapless playback", which I wish they'd give us a way to disable; it takes forever and I don't use it (if anyone knows how to stop gapless scanning on a Mac, let us know)... but fortunately you only have to do this once.
    3. More RAM. My main computer is an MDD Dual 1 GHz G4. Until about a month ago I had a gig of RAM, then I finally got around to upgrading to 2 gigs. It DEFINITELY made a big difference. Where before I'd see the rainbow ball for 30 seconds whenever I made playlists or changes to tags, now I often don't see it at all, or certainly much less.
    I read not long ago an article talking about WHY iTunes was so bad at scaling up for larger libraries. The author suggested it was because the library is saved out as an XML database, and that unless Apple changes the backend of iTunes, it will not be possible to squeeze much more performance out of it. This probably makes it useless for DJs, radio stations, classical music lovers or anyone with large collections of music with many tracks. I assumed that at some point we'd see Apple rewrite iTunes to take advantage of the SQL database at the OS level, but who knows. I really don't think I understand database stuff at this level.
    Finally, I HAVE noticed that recently it has seemed to be getting more responsive. On one of my other machines (an iMac 800 MHz G4!) running iTunes used to be a frustrating proposition, with constant spinning beachballs and hangs before it let you pick music or change tracks. Now it's actually usable. I don't know when this got better, but I think with either 7.6, 7.6.1 or 7.6.2 they have done something to make it more responsive. At least in my experience. So that's a good development.
    But seriously, do a search through the forums and you'll find a lot more about this problem with large libraries.
    Good luck.

  • CS6 very slow saving large files

    I have recently moved from PS CS5 to CS6 and have noticed what seems to be an increase in the amount of time it takes to save large files. At the moment I am working with a roughly 8GB .psb and to do a save takes about 20 minutes. For this reason I have had to disable the new autosave features, otherwise it just takes far too long.
    CS5 managed to save larger files more quickly and with less memory available to it. Looking at system resources while Photoshop is trying to save, it is not using its full allocation of RAM specified in the performance preferences and there is still space free on the primary scratch disc. The processor is barely being used and disc activity is minimal (Photoshop might be writing at 4 MB/s max, often not at all according to Windows).
    I am finding the new layer filtering system invaluable so would rather not go back to CS5. Is this a known issue, or is there something I can do to speed up the saving process?
    Thanks.

    Thanks for the quick replies.
    Noel: I did actually experiment with turning off 'maximize compatibility' and compression, and it had both good and bad effects. On the plus side it did reduce the save time somewhat, to somewhere a little over 10 minutes. However it also had the effect of gobbling up ever more RAM while saving, only leaving me with a few hundred MB during the save process. This is odd in itself, as it actually used up more RAM than I had allocated in the preferences. The resulting file was also huge, almost 35 GB. Although total HD space isn't a problem, for backing up externally and sharing with others this would make things a real headache.
    Curt: I have the latest video driver and keep it as up to date as possible.
    Trevor: I am not saving to the same drive as the scratch discs, although my primary scratch disc does hold the OS as well (it's my only SSD). The secondary scratch disc is a normal mechanical drive, entirely separate from where the actual file is held. If during the save process my primary scratch disc were entirely filled I would be concerned that that was an issue, but it's not.
    Noel: I have 48 GB, with Photoshop allowed access to about 44 of that. FYI my CPUs are dual Xeon X5660s and during the save process Photoshop isn't even using 1 full core, i.e. less than 4% CPU time.

  • Working with Large files in Photoshop 10

    I am taking pictures with a 4x5 large format film camera and scanning them at 3,000 DPI, which creates extremely large files. My goal is to take them into Photoshop Elements 10 to clean up, edit, merge photos together and so on. The cleanup tools don't seem to work that well on large files. My end result is to be able to send these pictures out to be printed at large sizes, up to 40x60 inches. How can I work in this environment and get the best print results?

    You will need to work with 8bit files to get the benefit of all the editing tools in Elements.
    I would suggest resizing at a resolution of 300 ppi, although you can use much lower resolutions for really large prints that will be viewed from a distance, e.g. hung on a gallery wall.
    That should give you an image size of 12,000 x 18,000 pixels if the original aspect ratio is 2:3
    Use the top menu:
    Image >> Resize >> Image Size
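    For reference, assuming the 40x60 size is in inches: at 300 ppi that is 40 x 300 = 12,000 pixels by 60 x 300 = 18,000 pixels, which is where the 12,000 x 18,000 figure above comes from.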

  • Bug report - Finder crashes in Cover Flow mode with large files

    I just came back from the Apple store and was told that I have discovered another bug in Leopard. When in Cover Flow view in Finder and trying to browse directories with large (multiple GB) files, Finder continually crashes and relaunches, oftentimes with 2 new Finder windows.
    I created a new user on my MBP to remove the effect of any preferences, and the problem repeated itself.
    Come on Apple... get on top of these bugs and issue some patches...
    Not the kind of new OS I expected from a top notch company like Apple.

    Ah... that'll be it then, they are 512 x 512. Well, I guess that bug's been hanging around for quite some time now. Anyone got any idea whether anyone's trying to fix it? I guess making 128 x 128 icons wouldn't be the end of the world, but it does seem like a step backwards.
    One thing that still confuses me: an icon file contains icons of all sizes, so why does Cover Flow not select the right size icon to display?
    Thanks for that info V.K., much obliged.
    Regards, Pablo.

  • Dealing with large files, again

    Ok, so I've looked into using BufferedReaders and can't get my head round them; or more specifically, I can't work out how to apply them to my code.
    I have inserted a section of my code below, and want to change it so that I can read in large files (of over 5 million lines of text). I am reading the data into different arrays and then processing them. Obviously, when reading in such large files, my arrays are filling up and failing.
    Can anyone suggest how to read the file into a buffer, deal with a set amount of data, process it, empty the arrays, then read in the next lot?
    Any ideas?
    void readV2(){
        String line;
        int i = 0, lineNo = 0;
        try {
            // Create input stream
            FileReader fr = new FileReader(inputFile);
            BufferedReader buff = new BufferedReader(fr);
            while ((line = buff.readLine()) != null) {
                if (line.substring(0, 2).equals("V2")) {
                    lineNo = lineNo + 1;
                    IL[i] = Integer.parseInt(line.substring(8, 15).trim());
                    // Other processing here
                    NoOfPairs = NoOfPairs + 1;
                } // end if
                else {
                    break;
                } // end else
            } // end while
            buff.close();
            fr.close();
        } // end try
        catch (IOException e) {
            log.append("IOException error in readESSOV2XY" + e + newline);
            proceed = false;
        } // end catch IOException
        catch (ArrayIndexOutOfBoundsException e) {
            arrayIndexOutOfBoundsError(lineNo);
        } // end catch ArrayIndexOutOfBoundsException
        catch (StringIndexOutOfBoundsException e) {
            stringIndexOutOfBoundsError(e.getMessage(), lineNo);
        } // end catch StringIndexOutOfBoundsException
    } // end readV2
    Many thanks for any help!
    Tim

    Yeah, ok, so that seems simple enough.
    But once I have read part of the file into my program, I need to call another method to deal with the data I have read in and write it out to an output file.
    How do I get my file reader to "remember" where I am up to in the file I'm reading?
    An obvious way, but possibly not too good technically, would be to set a counter and, when I go back to the file reader, skip that number of lines in the input file.
    This just doesn't seem too efficient, which is critical when it comes to dealing with such large files (i.e. several million lines long).

    I think you might need to change the way you are thinking about streams. The objective of a stream is to read and process data at the same time.
    I would recommend that you re-think your algorithm : instead of reading the whole file, then doing your processing - think about how you could read a line and process a line, then read the next line, etc...
    By working on just the pieces of data that you have just read, you can process huge files with almost no memory requirements.
    As a rule of thumb, if you ever find yourself creating huge arrays to hold data from a file, chances are pretty good that there is a better way. Sometimes you need to buffer things, but very rarely do you need to buffer such huge pieces.
    - K
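
    As a rough sketch of that read-a-line, process-a-line idea (this is not the original poster's code: the file names and the process() helper are placeholders, and it uses Java 7+ try-with-resources):

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;

    public class LineByLine {
        public static void main(String[] args) throws IOException {
            // try-with-resources closes both files even if an exception is thrown
            try (BufferedReader in = new BufferedReader(new FileReader("input.txt"));
                 BufferedWriter out = new BufferedWriter(new FileWriter("output.txt"))) {
                String line;
                // Read one line, process it, write the result, then move on.
                // Only the current line is held in memory, so the file can be
                // millions of lines long without filling up any arrays.
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("V2")) {
                        out.write(process(line));
                        out.newLine();
                    }
                }
            }
        }

        // Placeholder for whatever you currently do with each array element.
        private static String process(String line) {
            return line.trim();
        }
    }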

  • Mounting CIFS on MAC with large file support

    Dear All,
    We are having issues copying large files (> 3.5 GB) from a Mac to a CIFS share (SMB-mounted on the Mac): the copy fails if the files are larger than 3.5 GB in size. I was wondering if there is any special way to mount CIFS shares (a special option in the mount_smbfs command, perhaps) to support large file transfers?
    Currently we mount the share using the command below
    mount_smbfs //user@server/<share> /destinationdir_onMAC

    If you haven't already, I would suggest trying an evaluation of DAVE from Thursby Software. The eval is free, fully functional, and supported.
    DAVE is able to handle large file transfers without interruption or data loss when connecting to Windows shared folders. If it turns out that it doesn't work as well as you'd like, you can easily remove it with the uninstaller.
    (And yes, I work for Thursby, and have supported DAVE since 1998)

  • Adobe Illustrator crashes with large files

    When I open Illustrator to view or edit a larger file I get an error dialog, and once I hit OK it crashes.
    This particular file is 588 MB and it is an AI file.
    The computer I'm running this on is specced out as follows:
    Win7 Ult 64-bit
    i7-3770K CPU
    32 GB RAM
    WD 500 GB VelociRaptor
    Samsung SSD 840 EVO 120 GB scratch drive
    Paging file set up as min 33068 max 98304
    I tried with and without the extra scratch disk, with the paging file set manually and with the paging file set to system managed.
    I have uninstalled AI and reinstalled; same response.

    A 588 MB AI file? From where? CS5 is 32-bit, and expanding such a large file may simply go way beyond the measly 3 GB you have in AI; plus, if it was generated in another program, there may be all sorts of other issues at play. This may be a hopeless cause and require you to at least install CS6 or CC 64-bit as a trial...
    Mylenium

  • CS3 Still Very Slow with psd files

    I'm still having terrible performance issues with CS3 and imported CMYK psd files containing spot colours. I have a 9 MB psd file with two spot colours imported into a layer in Illustrator, with some Illustrator line work on top. Just zooming in gives me a 5-second screen redraw, everything is very sluggish and occasionally I get a crash. I still get this weird alert box saying "Generating Pixels" now and again when importing the psd files.
    G5 Dual 2.0GHz, 2.5GB RAM, OSX 10.4.11, Illustrator 13.02, Photoshop 10.0.1.
    I'm not having these issues on a different mac running CS2.

    I'm thinking it might be RAM, I have Photoshop and Illustrator open at the same time. I have around 100GB of scratch disk space.
    But weirdly the slowness is not apparent using CS2. Things are just dead slow with a Photoshop file placed in Illustrator CS3. These files are only 30MB, but contain 3 spot channels + CMYK. Even with preview mode switched off, it's still very slow and I mean everything slows down.
    There is a lot of disk activity going on, so maybe my hard disks on this 3 year old mac are just getting too long in the tooth.
    What drives me nuts is I remember doing the same kind of work in Illustrator 8 on a G4 running System 9 and everything was MUCH faster.
    Oh, and I'd better not mention how fast Freehand was for this kind of work...

  • IdcApache2Auth.so Compiled With Large File Support

    Hi, I'm installing UCM 10g on a Solaris 64-bit platform with Apache 2.0.63. Everything went fine until I updated the configuration in the httpd.conf file. When I query the server status it seems to be ok:
    ./idcserver_query
    Success checking Content Server idc status. Status: Running
    but in the Apache error_log I found the following error description:
    Content Server Apache filter detected a bad request_rec structure. This is possibly a problem with LFS (large file support). Bad request_rec: uri=NULL;
    Sizing information:
    sizeof(*r): 392
    [int]sizeof(r->chunked): 4
    [apr_off_t]sizeof(r->clength): 4
    [unsigned]sizeof(r->expecting_100): 4
    If the above size for r->clength is equal to 4, then this module
    was compiled without LFS, which is the default on Apache 1.3 and 2.0.
    Most likely, Apache was compiled with LFS, this has been seen with some
    stock builds of Apache. Please contact Support to obtain an alternate
    build of this module.
    When I searched My Oracle Support for suggestions about how to solve my problem, I found a thread which basically says that the Oracle ECM support team could give me a copy of IdcApache2Auth.so compiled with LFS.
    What do you suggest?
    Should I ask the ECM support team for help? (If yes, please tell me how to do it.)
    Or should I update the Apache web server to version 2.2 and use IdcApache22Auth.so, which is compiled with LFS?
    Thanks in advance, I hope you can help me.

    Hi,
    The easiest approach would be to use Apache 2.2 and the corresponding IdcApache22Auth.so file.
    Thanks
    Srinath
