Movie very large (So Close yet so Far away)

OK, so I'm seconds away from finishing up my feature film. I have the menu done, and it's very large. The movie itself is 1 hour and 27 minutes, and there are five extra features that run about another 1 hour and 30 minutes.
I compressed everything, but the actual movie still takes up 3 GB of the disc, and with everything else it's about 5 GB.
How do I make all of this fit in the quickest time possible?

Have you tried using QuickTime directly? You can export from there as MPEG-2 as well... you just don't get the bells and whistles that you get with Compressor.
If you are likely to do a lot of this kind of work then I can really recommend that you invest in a different encoder as well - BitVice or MegaPEG.X are the two to use on a Mac. Both are a couple of hundred dollars or so... and well worth it!
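For a rough sense of the numbers: making roughly three hours fit is a matter of lowering the MPEG-2 bit rate until the bit budget works out. Here is the back-of-the-envelope arithmetic, assuming a single-layer 4.7 GB DVD-5 and a single stereo AC-3 audio track (both are assumptions; adjust for your disc and audio):

    public class BitBudget {
        public static void main(String[] args) {
            double discBits  = 4.7e9 * 8;       // single-layer DVD-5, vendor gigabytes
            double seconds   = (87 + 90) * 60;  // feature plus extras, in seconds
            double totalMbps = discBits / seconds / 1e6;
            double audioMbps = 0.192;           // assumed stereo AC-3 track
            System.out.printf("total %.2f Mbps, video budget %.2f Mbps%n",
                              totalMbps, totalMbps - audioMbps);
            // prints roughly: total 3.54 Mbps, video budget 3.35 Mbps
        }
    }

In other words, the video has to average somewhere near 3.3 Mbps, which is low for MPEG-2 and exactly where the better encoders mentioned above earn their keep.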
About Compressor not starting: if you look in the Compressor forum there are umpteen references to this problem. For the most part it can be resolved by restarting your machine, which in turn restarts the qmaster service. However, you can also do this from Terminal if you need to. This is from a post a long time ago... not from me, I should add:
Launch Terminal (Applications > Utilities > Terminal) and type this:
sudo /Library/StartupItems/Qmaster/Qmaster
Then hit Return and enter your administrator password. You should then get a message about Qmaster services starting up.
Re-launch Compressor, see if it works.
Or
In the Finder, choose Go > Go to Folder...
Type /etc/
View as icons.
Locate "hostconfig", press and hold COMMAND, then click the file and drag it to the desktop. You MUST press Command first and keep it held down while you click and drag; if you click the file first, the Command-click will deselect it. If you don't get it the first time, keep trying.
Once it's on the desktop, rename the file to "hostconfig.old", all lower case.
Command-drag the newly renamed file back into the etc folder. (The net effect is simply that /etc/hostconfig has been renamed to hostconfig.old.)
That's it! Restart your machine and start up Compressor.
If those two don't solve it, you'll need to hunt around for other solutions, I'm afraid.

Similar Messages

  • I received the error (in iCal on my iMac): "The server responded with an error". The message is very large and I can't close it

    I received the error (in iCal on my iMac): "The server responded with an error". The error message is very large, and if there is a way to acknowledge and close it I can't find it. Because this error message is open, I can't do anything in iCal. Any suggestions on how I could kill this error message? Thanks.
    iMac, Mac OS X (10.7.2)
    Basically I tried to enter too much information into my calendar and it has crashed. Now I can not get rid of the error message or use the calendar. Can anyone help, please?

    Did you find out how to get rid of it? I can't.

  • Can I move my very large iPhoto app and photos from main HD to a larger secondary HD via drag and drop?

    Can I move my very large iPhoto app and photos from main HD to a larger secondary HD via drag and drop?

    Welcome to the Apple Discussions.
    1. Quit iPhoto
    2. Copy the iPhoto Library from your Pictures Folder to the External Disk.
    3. Hold down the option (or alt) key while launching iPhoto. From the resulting menu select 'Choose Library' and navigate to the new location. From that point on this will be the default location of your library.
    4. Test the library and when you're sure all is well, trash the one on your internal HD to free up space.
    Regards
    TD

  • Upload very large file to SharePoint Online

    Hi,
    I tried uploading very large files to SharePoint Online via the REST API using Java. Uploading files of ~160 MB works well, but if the file size is between 400 MB and 1.6 GB I either receive a SocketException (connection reset by peer) or an error from SharePoint telling me "the security validation for the page is invalid".
    So first of all I want to ask you how to work around the security validation error. As far as I understand it, the token which I have added to the X-RequestDigest header of the HTTP POST request seems to have expired (by default it expires after 1800 seconds).
    Uploading such large files is time consuming, so I can't change the token in my header while uploading the file. How can I extend the expiration time of the token (if that is possible)? Could it help to continuously send POST requests with the same token while uploading the file, and thereby prevent the token from expiring? Is there any other strategy for uploading such large files, which need much more than 1800 seconds to upload?
    Additionally any thoughts on the socket exception? It happens quite sporadically. So sometimes it happens after uploading 100 mb, and the other time I have already uploaded 1 gb.
    Thanks in advance!

    Hi,
    Thanks for the reply. The reason I'm looking into this is so users can migrate their files to SharePoint Online/Office 365. The max file limit is 2 GB, so I thought I would try and cover this.
    I've looked into that link before, and when I try and use those end points ie StartUpload I get the following response
    {"error":{"code":"-1, System.NotImplementedException","message":{"lang":"en-US","value":"The
    method or operation is not implemented."}}}
    I can only presume that these endpoints have not been implemented yet, even though the documentation says they are only available for Office 365, which is what I am using.
    Also, the other strange thing is that I can actually upload a file of 512 MB to the server, as I can see it in the web UI. The problem I am experiencing is getting a response; it hangs on this line until the timeout is reached:
    WebResponse wresp = wreq.GetResponse();
    I've tried flushing and closing the stream, and SendChunked, but still no success.
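    (For what it's worth, one way around the 1800-second digest expiry is to request a fresh digest mid-upload: SharePoint hands a new FormDigestValue out from a POST to /_api/contextinfo. A rough Java sketch; the site URL is a placeholder and authentication is left out entirely:)

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Scanner;

    public class DigestRefresh {
        // Placeholder site URL; authentication (cookies/OAuth) is omitted.
        static final String SITE = "https://contoso.sharepoint.com/sites/mysite";

        /** Asks SharePoint for a fresh form digest; call this whenever the
            current one is close to its 1800-second expiry. */
        static String freshDigest() throws Exception {
            HttpURLConnection c = (HttpURLConnection)
                    new URL(SITE + "/_api/contextinfo").openConnection();
            c.setRequestMethod("POST");
            c.setRequestProperty("Accept", "application/json;odata=verbose");
            c.setDoOutput(true);
            c.getOutputStream().close();   // empty POST body
            String body = new Scanner(c.getInputStream()).useDelimiter("\\A").next();
            // Crude extraction of FormDigestValue from the JSON response;
            // a real client would use a JSON parser.
            int i = body.indexOf("FormDigestValue\":\"") + 18;
            return body.substring(i, body.indexOf('"', i));
        }
    }

    The upload loop can then re-set its X-RequestDigest header between chunks instead of reusing a single token for the whole transfer.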

  • Handling very large diagrams in Pages?

    I am writing a book that sometimes requires the use of large diagrams. These are vector-based diagrams (PDF). Originally I planned to use iBooks Author and widgets to let the user zoom/pan/scroll and use other nice interactive stuff, but after having tried everything I have decided to give up on iBooks Author and iBooks for now because of their dismal handling of images (pixels only, low resolution, limited size, etc.).
    I am planning to move my project over to Pages. Not having the 'interactive widget' approach means I need some way to handle large images. I have been thinking about placing very large images multiple times on different pages with different masks. Any other possible trick? Can I have documents with multiple page sizes? Do I need a trick like the one above, or can an ePub book be zoomed/panned/scrolled, maybe using something other than iBooks to read it?

    Peter, that was indeed what I expected. But it turns out iBooks Author can take PDF, but iBooks cannot, and iBooks Author renders PDFs to low-resolution images (probably PNG) when compiling the .ibook from the .iba.
    Even if you use PNG in the first place, the export function of iBooks Author (either to PDF or to iBook) creates low-resolution renders.
    The iBooks format is more of a web-based format. The problem lies not in what iBooks Author can handle, but in how it compiles it to the iBooks format. It uses the same export function for PDF, making the PDF export ugly and low-res as well.
    iBooks Author has more drawbacks; for instance, if you have a picture and you want to change the image inside it, you can't. You have to replace the entire picture, and that process breaks all the links to the picture.
    iBooks Author / iBooks is far from mature.

  • Very-large-scale searching in J2EE

    I'm looking to solve a very-large-scale searching problem. I am creating a site
    where users can search a table with five million records, filtering and sorting
    independently on ten different columns. For example, the table might be five million
    customers, and the user might choose "S*" for the last name, and sort ascending
    on street name.
    I have read up on a number of patterns to solve this problem, but anticipate some
    performance issues. I'll explain below:
    1) "Page-by-Page Iterator" or "Value List Handler"
    In this pattern, it appears that all records that match the search criteria are
    retrieved from the database and cached on the application server. The client (JSP)
    can then access small pieces of the cached results at a time. Issues with this
    include:
    - If the customer record is 1KB, then wide search criteria (i.e. last name =
    S*) will cause 1 GB transfer from the database server to app server, and then
    1GB being stored on the app server, cached, waiting for the user (each user!)
    to ask for the next 10 or 100 records. This is inefficient use of network and
    memory resources.
    - 99% of the data transferred from the database server will not be used ... most
    users flip through a couple of pages and then choose a record or start a new search
    2) Requery the database each time and ask for a subset
    I haven't seen this formalized into a pattern yet, but the basic idea is this:
    If a client asks for records 1-100 first (i.e. page 1), only fetch that many
    records from the db. If the user asks for the next page, requery the database
    and use the JDBC API's ResultSet.absolute(int row) to start at record 101. Issue:
    The query is re-performed, causing the Oracle server to do another costly "execute"
    (bad on 5M records with sorting).
    To solve this, I've been trying to enhance the second strategy above by caching
    the ResultSet object in a stateful session bean. Unfortunately, this causes a
    "ResultSet already closed" SQLException, although I ensure that the Connection,
    PreparedStatement, and ResultSet are all stored in the EJB and not closed. I've
    seen this on newsgroups ... it appears that WebLogic is forcing the Connection
    closed. If this is how J2EE and pooled connections work, then that's fine ...
    there's nothing I can really do about it.
    Another idea is to use "explicit cursors" in Oracle. I haven't fully explored
    it yet, but it wouldn't be a great solution as it would be using Oracle-specific
    functionality (we are trying to be db-agnostic).
    More information:
    - BEA WebLogic Server 8.1
    - JDBC: Oracle's thin driver provided with WLS 8.1
    - Platform: Sun Solaris 5.8
    - Oracle 9i
    Any other ideas on how I can solve this issue?

    Michael McNeil wrote:
    [... original post quoted in full; snipped ...]
    Hi. Fancy SQL to the rescue! If the table has a unique key, you can simply send a query per page, with iterative SQL that selects the next N rows beyond what was selected last time. E.g.:
    Let variable X be the highest key value you've seen so far. Initially, set it just below the lowest possible key.
    select * from mytable M
    where ...   -- application-specific qualifications
    and M.key > X
    and 100 > (select count(*) from mytable MM
               where MM.key > X and MM.key < M.key and ...)
    In English, this says: select all the qualifying rows higher than the last one I saw, but only those that have fewer than 100 qualifying rows between the last one I saw and them (i.e. the next 100).
    When processing this query, remember the highest key value you see, and use it for the
    next query.
    Joe
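    For what it's worth, the same keyset idea is easy to express in plain, db-agnostic JDBC. A sketch, assuming a numeric unique key column id on a hypothetical customers table; sorting here is on the key for simplicity, and sorting on another column would need that column plus the key in both the ORDER BY and the WHERE comparison:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class KeysetPager {
        /** Fetches the next page of at most 100 rows above lastSeenKey.
            The caller remembers the highest key returned and passes it
            back in for the following page. */
        static long fetchPage(Connection con, long lastSeenKey) throws Exception {
            PreparedStatement ps = con.prepareStatement(
                "SELECT id, last_name, street FROM customers " +
                "WHERE last_name LIKE ? AND id > ? ORDER BY id");
            ps.setString(1, "S%");
            ps.setLong(2, lastSeenKey);
            ps.setMaxRows(100);              // the driver stops after one page
            ResultSet rs = ps.executeQuery();
            long highest = lastSeenKey;
            while (rs.next()) {
                highest = rs.getLong("id");  // remember for the next page
                // ... render the row ...
            }
            rs.close();
            ps.close();
            return highest;
        }
    }

    Each request runs one cheap indexed query and ships only a single page across the network, so nothing has to be cached in a stateful session bean between requests.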

  • Read a very, very large Excel file into a 2D array (strings or variant)

    Hi all,
    Long time user, first time poster on these boards.
    Looking at the copious amount of great info related to reading Excel data from .xls files into LabVIEW, I've found that everything I've seen from various people uses the ActiveX method (WorkSheet.Range), to which two strings are passed, namely Excel's LetterNumber format to specify start and end.
    However, this function does not work when trying to query huge amounts of information. The error returned is "Type Mismatch, -2147352571". I have a very large Excel sheet I need to read data from and then close the Excel file (the original file remains unchanged). However, this file is gigantic (don't ask me, I didn't make it and I can't convince them to use something more appropriate), with over 165 columns at 1000 rows of data. I can read a large number of columns but only a handful of rows, or vice versa.
    Aside from creating a loop to open and close the Excel file over and over, reading pieces of it at a time, is there a better way to read more data using ActiveX? Attached is code uploaded by others (with very minor modifications) as an example.
    Thanks,
    Attachments:
    Excel Get Data Specified Field (1-46col).vi ‏23 KB

    Hi Maddox731,
    I've only had a very quick glance through your thread, and I must admit I haven't really thought it through properly yet. It sounds like you've come up with your own solution anyway. That said, I thought I'd take a bit of a scatter-gun approach and attach some stuff for you regardless. Please forgive my bluntness.
    You'll find my ActiveX Excel worksheet reader, which may or may not contain the problem you've come across. I've never tried it with the data size you are dealing with, so no promises. I've also attached my ADO/SQL approach to the problem. This was something I moved on to when I realised the limitations of ActiveX. One thing I have noticed is that ADO/SQL is much faster than ActiveX, so there may be some gains for you there with large data sets if you can implement it.
    I should add that I'm a novice at all this and my efforts are down to bits I've gleaned from MSDN and others' LabVIEW examples. I hope it's of some use, if only to spark discussion. Your criticism is more than welcome, good or bad.
    Regards.
    Attachments:
    Database Table Reading Stuff.zip ‏119 KB

  • Slow Performance or XDP File size very large

    There have been a few reports of people having slow performance in their forms (tyically for Dynamic forms) or file sizes of XDP files being very large.
    These are the symptoms of a problem with cut and paste in Designer, where a processing instruction (PI) used to control how Designer displays a specific palette is repeated many, many times. If you look in your XDP source and see this line repeated more than once, then you have the issue:
    <?templateDesigner StyleID aped3?>
    Until now the fix was to apply a style sheet to the XDP that removed the repeated instruction. A patch has been released that will fix the cut-and-paste issue as well as repair your templates when you open them in a Designer with the patch applied.
    Here is a blog entry that describes the patch as well as where to get it.
    http://blogs.adobe.com/livecycle/2009/03/post.html

    My XDP file grew to 145 MB before I decided to see what was actually happening.
    It appears that the LiveCycle Designer ES program sometimes writes a lot of redundant data: the same line millions of times, over and over again.
    I wrote this small Java program, which reduced the size to 111 KB!!! (Wow, what a bug that must have been!)
    Here's the sourcecode:
    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileNotFoundException;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    // Usage: java MakeSmaller myform.xdp
    // Writes the cleaned copy to myform.xdp.small; the original is untouched.
    public class MakeSmaller {
        private static final String DELETE_STRING = "                           <?templateDesigner StyleID aped3?>";
        public static void main(String... args) {
            BufferedReader br = null;
            BufferedWriter bw = null;
            try {
                br = new BufferedReader(new FileReader(args[0]));
                bw = new BufferedWriter(new FileWriter(args[0] + ".small"));
                String line = null;
                boolean firstOccurrence = true;
                while ((line = br.readLine()) != null) {
                    if (line.equals(DELETE_STRING)) {
                        // Keep the first copy of the repeated processing
                        // instruction, drop every later duplicate.
                        if (firstOccurrence) {
                            bw.write(line + "\n");
                            firstOccurrence = false;
                        }
                    } else {
                        bw.write(line + "\n");
                    }
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                if (br != null) {
                    try {
                        br.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                if (bw != null) {
                    try {
                        bw.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
    The file that gets generated is next to the original .xdp file (same location) but gets the extension .small. Just in case something goes wrong, the original file is NOT modified, as you can see in the source code. And yes, Designer REALLY wrote that line a gazillion times in the .xdp file (shame on the programmers!!).
    You can also see that I write the first occurrence to the small file, just in case it's needed...

  • When writing an email the letters are so small that I can hardly read them. If I increase the size then the outgoing email letters are very large.

    When I write an email the letters are so small that I can hardly read them, but if I increase the size then the outgoing email letters are very large. I have tried changing the font but it doesn't help. This is very annoying. How can I remedy this?

    I use these settings, so try them:
    Tools > Options > Display > Formatting tab
    or
    Menu icon > Options > Options > Display > Formatting tab
    * e.g. Default font: Arial
    * Size: 14
    Note: this sets the default font used both to display messages and to compose them.
    Under Plain Text Messages:
    * select: Style: Regular, Size: Regular, and colour: Black
    * click on the 'Advanced' button
    * make sure all sizes = 14
    * select: 'Allow messages to use other fonts'
    * select: 'Use fixed width fonts for plain text messages'
    * click on OK
    Note: this uses the default (e.g. Arial), but allows the display of messages to use other fonts if the sender specified them.
    then at top click on 'Composition' > General tab
    note: This sets the default composing text colour and background colour.
    Under HTML:
    * Font: Variable width
    * Size: Medium
    * select a text colour for writing emails, e.g. blue
    * select a background colour, e.g. white
    Click on OK to save and close Options.

  • The timeline and cursor move very jerkily (in a continual chattery motion) in Windows 8.1

    In my project the timeline and/or the cursor move very jerkily (in a continual chattery motion) in Windows 8.1.
    I have set the Windows software compatibility setting to its "Windows 7 emulator" and the condition still persists.
    Also, the "fullscreen playback" for the project used to be very fine resolution and is now very mushy.
    All this tells me there are real problems with my Premiere Elements 10.

    Littlefeather
    You are running Premiere Elements 10 on Windows 8.1 64-bit. The program works just fine on Windows 8.1 64-bit; I know this first hand. So do not go altering to another operating system.
    The immediate factor that needs ruling in or out is your video card/graphics card. Specifically: does your Windows 8.1 64-bit machine use an NVIDIA GeForce card? If so, that may be your answer. See the Announcement at the top of this forum, which describes the Premiere Elements 10 NVIDIA GeForce problem and gives the fix of how to roll back the card's driver version to get Premiere Elements 10 working for you.
    Once we are assured that you are not afflicted by the Premiere Elements 10 NVIDIA GeForce syndrome, and also know that your current video card/graphics card driver is up to date, we can move on to seek other factors that may be causing the problem.
    For your convenience, the following is a copy/paste of the Announcement about the issue as it appears at the top of this forum. All the links and how-tos that you will need are in this announcement.
    Premiere Elements 10 NVIDIA Video Card Driver Roll Back
    If you are a Premiere Elements 10 user whose Windows computer uses an NVIDIA GeForce video card and you are experiencing Premiere Elements 10 display and/or unexplained program behavior, then your first line of troubleshooting needs to be rolling back the video card driver version instead of assuring that it is up to date.
    From October 2013 to the present, there has been a growing number of reports about display and unexplained workflow glitches specific to Premiere Elements 10 users whose Windows computers have an NVIDIA GeForce video card. If this applies to you, then the "user to user" remedy is to roll back the NVIDIA GeForce video card driver as far as is necessary to get rid of the problems. The typical driver roll back has gone back as far as March to July 2013 in order to get a working Premiere Elements 10. Neither NVIDIA nor Adobe has taken any corrective action in this regard to date, and none is expected moving forward.
    Since October 2013, the following thread has tried to keep up with the Premiere Elements 10 NVIDIA reports:
    http://forums.adobe.com/thread/1317675
    Older NVIDIA GeForce drivers can be found at:
    http://www.nvidia.com/Download/Find.aspx?lang=en-us
    A February 2014 overview of the situation, as well as how to use the older NVIDIA GeForce drivers for the roll back, can be found at:
    http://atr935.blogspot.com/2014/02/pe10-nvidia-video-card-roll-back.html
    Please do not hesitate to ask if you need clarification on any of this information.
    We will be watching for your reply.
    Thank you.
    ATR

  • Help!! Flash CS3 and CS4 "Test Movie" very slow on OS X

    This is a problem that, having read many forums, affects a very large number of people, though Adobe doesn't seem to care at all. It only affects OS X users. I know that it has been addressed many times in different forums, but I never actually stumbled across anyone having found a solution.
    When I use Flash CS3 and I make any animation, even the simplest tween, and I preview it with Test Movie, the result I get is extremely slow playback. Something like half the FPS it should be.
    However, when I export the SWF and preview it in the external Flash Player or in a browser, it's just fine and fast.
    Another interesting thing is that in CS3, when I open the Help panel, the problem with Test Movie only happens about 20% of the time. In that case, it only gets solved if I restart or at least log out. I have no idea why having the Help panel open solves the problem; this only shows that it is probably a little graphical user interface bug, or something similar, that could be solved very, very easily.
    In Flash CS4 there is no Help panel, so there is no solution to the problem.
    It would be nice to be able to press Cmd + Enter to see the movie, and not have to do File > Export > bla bla bla, open the Finder, find the SWF, double-click it, wait for the browser to open... etc.
    I have a brand new 2.5 GHz MacBook Pro, and Test Movie runs faster on my 900 MHz Pentium III PC!! Funny...
    Here are some links I found about this problem:
    http://bugs.adobe.com/jira/browse/FP-878
    http://kb.adobe.com/selfservice/viewContent.do?externalId=kb407896
    http://www.kirupa.com/forum/archive/index.php/t-258991.html
    This is quite ridiculous; on Adobe's support page, the solution is "Do not use Test Movie."
    And the funny thing is that they didn't even bother to fix this in CS4...
    So basically, if something doesn't work, Adobe's solution is "Don't use it."
    I guess they're right!
    Please tell me if anyone has (or does not have) this problem or knows anything about it!
    Thanks,
    Mate


  • Capture Image Of A Very Large JPanel

    Below is some code used to save an image of a JPanel to a file...
        int w = panel.getSize().width;
        int h = panel.getSize().height;
        BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics graphics = image.getGraphics();
        // Make the component believe its visible and do its layout.
        panel.addNotify();
        panel.setVisible(true);
        panel.validate();
        // Draw the graphics.
        panel.print(graphics);
        // Write the image to a file.
        ImageFile imageFile = new ImageFile("test.png");
        imageFile.save(image);
        // Dispose of the graphics.
        graphics.dispose();
    This works fine, but my problem is that I am trying to save what may be a very large JPanel, perhaps as large as 10000x10000 pixels. It doesn't take long for the Java heap to be used up and an exception to be thrown.
    I know I can increase the heap size of the JVM but since I can't ever be sure how large the panel may be that's a far from ideal solution.
    So the question is how do I save an image of a very large JPanel to a file?

    1) Does the OoM happen while instantiating the buffered image (which probably tries to allocate one big contiguous native array of pixels)? Or the Graphics object (same reason, though the Graphics is probably just an empty shell over the big pixel array)?
    2) In which format do you need to save the image? Do you only need to be able to read it again in your own program?
    If yes to both questions, then a pulled-by-the-hair solution could be to instantiate your own Graphics subclass (no BufferedImage), whose operations would save their arguments directly to the image file, instead of into a big in-memory model of the panel image.
    If the output format is a standard one though (GIF, JPG, ...), then maybe your custom Graphics's operations could contain the logic to encode/compress as much as possible of the arguments into an in-memory byte array of the target format?
    I'm not very confident though; I don't know the GIF or JPEG encodings, but I suspect (especially for JPEG) that you need to know the "whole" image to encode it properly.
    But if the target format supports encoders that work on the fly out of streams of bytes (e.g. BMP), then you can use whatever compress/uncompress technique you see fit (e.g. RLE): you know the nature of the panels, so you may be aware of optimizations you can perform on pixel storage prior to encoding (e.g. big empty areas, a predictable chessboard pattern, a black-and-white palette, ...).
    Edited by: jduprez on Sep 19, 2009 7:33 PM
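    A further workaround, sketched under the assumption that a grid of separate tile files is an acceptable output: render the panel one tile at a time, so only a small buffer ever lives on the heap. PanelTiler, the tile size, and the file-naming scheme below are all made up for illustration.

    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;
    import javax.swing.JPanel;

    public class PanelTiler {
        static final int TILE = 1024;  // small enough to fit comfortably on the heap

        /** Renders `panel` in TILE x TILE chunks, writing each chunk as its
            own PNG instead of allocating one 10000x10000 image. */
        public static void saveTiles(JPanel panel, File dir) throws IOException {
            BufferedImage tile = new BufferedImage(TILE, TILE, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < panel.getHeight(); y += TILE) {
                for (int x = 0; x < panel.getWidth(); x += TILE) {
                    Graphics2D g = tile.createGraphics();
                    g.setColor(Color.WHITE);
                    g.fillRect(0, 0, TILE, TILE);  // erase the previous tile's pixels
                    g.translate(-x, -y);           // shift so tile (x, y) lands at the origin
                    panel.print(g);
                    g.dispose();
                    ImageIO.write(tile, "png", new File(dir, "tile_" + x + "_" + y + ".png"));
                }
            }
        }
    }

    The tiles can be stitched afterwards by any image tool; the point is just that the heap never holds more than one TILE x TILE buffer.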

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, may be as a tree structure. Visitors should be able to go to any level depth nodes and access the children elements or text element of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and is hence space-consuming. Nor does SAX work for me: every time there is a click on one of the nodes, my SAX parser parses the whole document to find the node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to be used for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.
    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives; the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
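    To make the numbers above concrete, here is a minimal JDBC sketch of the per-click lookup, assuming the XML has been loaded once into a hypothetical nodes(id, parent_id, name) table:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class NodeBrowser {
        /** Fetches just the children of one tree node per click, instead
            of re-parsing the whole XML document every time. */
        static void printChildren(Connection con, long parentId) throws Exception {
            PreparedStatement ps = con.prepareStatement(
                "SELECT id, name FROM nodes WHERE parent_id = ?");
            ps.setLong(1, parentId);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getLong("id") + ": " + rs.getString("name"));
            }
            rs.close();
            ps.close();
        }
    }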

  • Have a very large text file, and need to read lines in the middle.

    I have very large txt files (several hundred megabytes each), and I want to be able to skip around and read specific lines. More specifically, say the file looks like:
    scan 1
    scan 2
    scan 3
    ...
    scan 100,000
    I want to be able to move the file reader immediately to scan 50,000, rather than having to read through scans 1-49,999.
    Thanks for any help.

    If the lines are all different lengths (as in your example) then there is nothing you can do except to read and ignore the lines you want to skip over.
    If you are going to be doing this repeatedly, you should consider reformatting those text files into something that supports random access.
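    If the files will be read repeatedly, one simple random-access arrangement is a one-time pass that records the byte offset at which each line starts; after that, any line is one seek away. A minimal sketch (the file name and target line are placeholders):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.ArrayList;
    import java.util.List;

    public class LineIndex {
        public static void main(String[] args) throws IOException {
            RandomAccessFile raf = new RandomAccessFile("scans.txt", "r");
            // One-time pass: remember where every line begins. The index
            // could be saved to disk and reused on later runs.
            List<Long> offsets = new ArrayList<Long>();
            offsets.add(0L);
            while (raf.readLine() != null) {
                offsets.add(raf.getFilePointer());
            }
            // Jump straight to line 50,000 (index 49999, 0-based).
            raf.seek(offsets.get(49999));
            System.out.println(raf.readLine());
            raf.close();
        }
    }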

  • How do I control read start position in a very large file where start byte position may be larger than I32 (+/- 2^31)?

    Using LabVIEW, I am trying to read a very large file which may be on the order of 2^32 bytes. I need to be able to step into the file at a byte position which may be greater than the I32 limit set by Read File.vi. Are there any options for Read File.vi, or a method of circumventing this limitation?

    I'm not sure, but I think you can manage the "pos mode" input of the "seek" sub-VI.
    The "pos mode" lets you choose the initial position to which the number of bytes you want to move is added.
    I think you can move by an I32 amount from "start" using "pos mode", and later set "pos mode" to "current" to add another value. That way you can move more than 2^31 bytes from the initial position.
    I hope you understand my idea. I haven't tried it before, but I think it would work.
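    The same trick, written in Java for clarity since a LabVIEW diagram can't be pasted here: a position wider than a signed 32-bit integer can still be reached by chaining smaller relative moves, assuming each individual move is limited to an I32 the way it is with Read File.vi.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class ChunkedSeek {
        /** Reaches `target` (which may exceed 2^31 - 1) with a series of
            forward moves that each fit in a signed 32-bit offset,
            mimicking the "start" then "current" pos-mode trick above. */
        static void seekBeyondI32(RandomAccessFile f, long target) throws IOException {
            f.seek(0);                      // "start" mode: position 0
            long remaining = target;
            while (remaining > 0) {
                int step = (int) Math.min(Integer.MAX_VALUE, remaining);
                f.seek(f.getFilePointer() + step);  // "current" mode: advance by step
                remaining -= step;
            }
        }
    }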
