Printing memory and performance optimization

Hello,
I am using JVM 1.3 for a large Java application.
Print Preview consumes 1.5MB of the JVM's memory and performance is slow.
Any suggestions for reducing memory usage and improving performance
would be appreciated.
/* print method in ScrollablePanel extends JPanel */
     public int print(Graphics g, PageFormat pf, int pi) throws PrinterException {
          Graphics2D g2 = (Graphics2D) g;
          double pageWidth = pf.getImageableWidth();
          if (pi >= pagecount)
               return Printable.NO_SUCH_PAGE;
          g2.translate(pf.getImageableX(), pf.getImageableY());
          // <Print height manipulation>: clip to the slice of the panel that belongs to page pi
          g2.setClip(0, (int) startHeight[pi], (int) pageWidth, (int) (endHeight[pi] - startHeight[pi]));
          g2.scale(scaleX, scaleX);
          this.print(g2);    // paint the panel itself into the page graphics
          g2.dispose();
          System.gc();
          return PAGE_EXISTS;
     }
/* print preview */
private void pagePreview() throws PrinterException {
    BufferedImage img = new BufferedImage(m_wPage, m_hPage, BufferedImage.TYPE_INT_ARGB);
    Graphics g = img.getGraphics();
    g.setColor(Color.white);
    g.fillRect(0, 0, m_wPage, m_hPage);
    target.print(g, pageFormat, pageIndex);   // render the page into the image
    pp = new PagePreview(w, h, img);          // pp is a JPanel that shows the image
    g.dispose();
    img.flush();
    m_preview = new PreviewContainer();       // m_preview is a JPanel
    m_preview.add(pp);
    ps = new JScrollPane(m_preview);
    getContentPane().add(ps, BorderLayout.CENTER);
}
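For reference, here is a minimal sketch (not from the original post) of one way to shrink the preview image's footprint: render the page into an opaque TYPE_INT_RGB image at a reduced scale instead of a full-size TYPE_INT_ARGB one. The class name PreviewImageFactory and the scale parameter are illustrative; target and pageFormat correspond to the fields used above.
/* sketch only - an ARGB page image costs width * height * 4 bytes; an opaque RGB image at half scale needs roughly a quarter of that */
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.print.PageFormat;
import java.awt.print.Printable;
import java.awt.print.PrinterException;
public class PreviewImageFactory {
     public static BufferedImage renderPage(Printable target, PageFormat pageFormat,
                                            int pageIndex, double scale) throws PrinterException {
          int w = (int) Math.ceil(pageFormat.getWidth() * scale);
          int h = (int) Math.ceil(pageFormat.getHeight() * scale);
          BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
          Graphics2D g2 = img.createGraphics();
          try {
               g2.setColor(Color.white);
               g2.fillRect(0, 0, w, h);
               g2.scale(scale, scale);                  // shrink the whole page, not just the clip
               target.print(g2, pageFormat, pageIndex); // let the Printable draw itself
          } finally {
               g2.dispose();                            // dispose only the graphics we created ourselves
          }
          return img;
     }
}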
Best Regards,
Krish

Good day,
From what I have tried, there are two ways of doing the print preview.
To handle this problem, add only one page at a time.
To browse through the pages, use the Prev Page / Next Page buttons in the toolbar.
1) BufferedImage - caches each page as an image, which occupies memory.
class PagePreview extends JPanel {
    public void paint(Graphics g) {
        g.setColor(getBackground());
        g.fillRect(0, 0, getWidth(), getHeight());
        g.drawImage(m_img, 0, 0, this);   // m_img is the pre-rendered page image
        paintBorder(g);
    }
}
This gives better performance, but consumes memory.
2) getPageGraphics in the preview panel. This occupies less memory, but the page is repainted every time paint(Graphics g) is called.
class PagePreview extends JPanel {
    public void paint(Graphics g) {
        g.setColor(Color.white);
        RepaintManager currentManager = RepaintManager.currentManager(this);
        currentManager.setDoubleBufferingEnabled(false);   // avoid the off-screen buffer
        Graphics2D g2 = scrollPanel.getPageGraphics();     // user-defined method that re-renders the page
        currentManager.setDoubleBufferingEnabled(true);
        g2.dispose();
    }
}
This addresses the memory problem, but performance is worse.
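To make option 2 concrete, here is a minimal sketch of a preview panel that re-renders the page on every repaint instead of caching an image. This is my own illustration rather than the original ScrollablePanel code; the class name OnDemandPagePreview is hypothetical, and target can be any Printable (for example the panel whose print method is shown above).
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.print.PageFormat;
import java.awt.print.Printable;
import java.awt.print.PrinterException;
import javax.swing.JPanel;
/* sketch only - no cached image, so memory stays low while every repaint re-renders the page */
class OnDemandPagePreview extends JPanel {
    private final Printable target;
    private final PageFormat pageFormat;
    private final int pageIndex;
    OnDemandPagePreview(Printable target, PageFormat pageFormat, int pageIndex) {
        this.target = target;
        this.pageFormat = pageFormat;
        this.pageIndex = pageIndex;
    }
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.white);
        g.fillRect(0, 0, getWidth(), getHeight());
        Graphics2D g2 = (Graphics2D) g.create();   // work on a copy so clip and transform stay local
        try {
            target.print(g2, pageFormat, pageIndex);
        } catch (PrinterException e) {
            e.printStackTrace();
        } finally {
            g2.dispose();
        }
    }
}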
Does anyone have any additional info?
Good Luck,
Kind Regards,
Krish

Similar Messages

  • Calculating the memory and performance of an Oracle query

    Hi,
    I am developing an application in Java with Oracle as the back end. The application executes a lot of queries, and the system is getting slow because of them.
    So I planned to develop a stand-alone Java application that shows statistics such as memory and performance. For example, if I enter a SQL query in a text box, the application should display the processing time it takes to fetch the values and the memory used by that query.
    Can anybody give ideas or suggestions?
    Thanks in advance
    Regards,
    Rajkumar

    This is now an Oracle question, not a JDBC question. :)
    The following are samples for explain plan / autotrace / SQL*Trace.
    (You really need to read material like Oracle SQL tuning books...)
    SQL> create table a as select object_id, object_name from all_objects
    2 where rownum <= 100;
    Table created.
    SQL> create index a_idx on a(object_id);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'A');
    SQL> explain plan for select * from a where object_id = 1;
    Explained.
    SQL> select * from table(dbms_xplan.display());
    PLAN_TABLE_OUTPUT
    Plan hash value: 3632291705
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    PLAN_TABLE_OUTPUT
    | 0 | SELECT STATEMENT | | 1 | 11 | 2 (0)| 00:00:01 |
    | 1 | TABLE ACCESS BY INDEX ROWID| A | 1 | 11 | 2 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | A_IDX | 1 | | 1 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    2 - access("OBJECT_ID"=1)
    SQL> set autot on
    SQL> select * from a where object_id = 1;
    no rows selected
    Execution Plan
    Plan hash value: 3632291705
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 11 | 2 (0)| 00:00:01 |
    | 1 | TABLE ACCESS BY INDEX ROWID| A | 1 | 11 | 2 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | A_IDX | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("OBJECT_ID"=1)
    Statistics
    1 recursive calls
    0 db block gets
    1 consistent gets
    0 physical reads
    0 redo size
    395 bytes sent via SQL*Net to client
    481 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    SQL> exec dbms_monitor.session_trace_enable(null,null,true,true);
    -- SQL> alter session set events '10046 trace name context forever, level 12';
    -- SQL> alter session set sql_trace = true;
    PL/SQL procedure successfully completed.
    SQL> select * from a where object_id = 1;
    no rows selected
    SQL> exec dbms_monitor.session_trace_disable(null, null);
    -- SQL> alter session set events '10046 trace name context off';
    -- SQL> alter session set sql_trace = false;
    PL/SQL procedure successfully completed.
    SQL> show parameter user_dump_dest
    /home/oracle/admin/WASDB/udump
    SQL>host
    JOSS:oracle:/home/oracle:!> cd /home/oracle/admin/WASDB/udump
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> ls -lrt
    -rw-r----- 1 oracle dba 2481 Oct 11 16:38 wasdb_ora_21745.trc
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> tkprof wasdb_ora_21745.trc trc.out
    TKPROF: Release 10.2.0.3.0 - Production on Thu Oct 11 16:40:44 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> vi trc.out
    select *
    from
    a where object_id = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 1 0 0
    total 3 0.00 0.00 0 1 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 55
    Rows Row Source Operation
    0 TABLE ACCESS BY INDEX ROWID A (cr=1 pr=0 pw=0 time=45 us)
    0 INDEX RANGE SCAN A_IDX (cr=1 pr=0 pw=0 time=39 us)(object id 65441)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 25.01 25.01
    Hope this helps
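    For the Java side of the original question (measuring from the application how long a query takes), a minimal JDBC sketch is shown below. The connection URL, user, password and the query are placeholders, and the Oracle JDBC driver is assumed to be on the classpath; real memory figures would still have to come from the Oracle views shown above.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    public class QueryTimer {
        /* runs the query, fetches every row, and returns the elapsed wall-clock time in milliseconds */
        public static long timeQuery(Connection conn, String sql) throws SQLException {
            long start = System.currentTimeMillis();
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery(sql);
                int rows = 0;
                while (rs.next()) {
                    rows++;                            // fetch all rows so the timing includes fetch cost
                }
                rs.close();
                System.out.println(rows + " rows fetched");
            } finally {
                stmt.close();
            }
            return System.currentTimeMillis() - start;
        }
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");  // placeholder connection
            System.out.println("Elapsed: " + timeQuery(conn, "select * from a where object_id = 1") + " ms");
            conn.close();
        }
    }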

  • Memory and performance  when copying a sorted table to a standard table

    Hello,
    As you all probably know, it's not possible to use a sorted table as a tables parameter of a function module, but sometimes you want to use a sorted table in your function module for performance reasons, and at the end of the function module, you just copy it to a standard table to return to the calling program.
    The problem with this is that at that moment the contents of the table are in memory twice, which could result in the well-known STORAGE_PARAMETERS_WRONG_SET runtime exception.
    I've been looking for ways to do this without using an excessive amount of memory while still being performant. I tried four methods; all have their advantages and disadvantages, so I was hoping someone here could help me come up with the best way to do this. Both memory and performance are an issue.
    Requirements :
    - Memory usage must be as low as possible
    - Performance must be as high as possible
    - Method must work on all SAP versions from 4.6c and up
    So far I have tried 3 methods.
    I included a test report with this message; the output on my dev system is:
    Test report for memory usage of copying tables    
    table1[] = table2[]                                        
    Memory :    192,751  Kb                                    
    Runtime:    436,842            
    Loop using workarea (with delete from original table)      
    Memory :    196,797  Kb                                    
    Runtime:  1,312,839        
    Loop using field symbol (with delete from original table)  
    Memory :    196,766  Kb                                    
    Runtime:  1,295,009                                                                               
    The code of the program :
    I had some problems pasting the code here, so it can be found at http://pastebin.com/f5e2848b5
    Thanks in advance for the help.

    I've had another idea:
    Create an RFC function like this (replace SOLI_TAB with your table types):
    FUNCTION Z_COPY_TABLE .
    *"*"Local interface:
    *"  IMPORTING
    *"     VALUE(IT_IN) TYPE  SOLI_TAB
    *"  EXPORTING
    *"     VALUE(ET_OUT) TYPE  SOLI_TAB
    et_out[] = it_in[].
    ENDFUNCTION.
    and then try something like this in your program:
    DATA: gd_copy_done TYPE c LENGTH 1.
    DATA: gt_one TYPE soli_tab.
    DATA: gt_two TYPE soli_tab.
    PERFORM move_tables.
    FORM move_tables.
      CLEAR gd_copy_done.
      CALL FUNCTION 'Z_COPY_TABLE'
        STARTING NEW TASK 'ztest'
        PERFORMING copy_done ON END OF TASK
        EXPORTING
          it_in = gt_one[].
      CLEAR gt_one[].
      WAIT UNTIL gd_copy_done IS NOT INITIAL.
    ENDFORM.
    FORM copy_done USING ld_task TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_COPY_TABLE'
       IMPORTING
         et_out        = gt_two[].
      gd_copy_done = 'X'.
    ENDFORM.
    Maybe this is a little bit faster than the Memory-Export?

  • Profiling and performance optimization

    This subject really sucks in the CLDC/J2ME world. Profiling in the emulator is useless, and on the device you simply can't do it. System.currentTimeMillis() might give you some hints, but if the overhead of printing the result (in my case to a comm port) is too large for the tested function, you are out of options... What to do then?
    Well, with this topic I would like to ask for some help on the possibilities of profiling J2ME devices.
    I'm currently working on a compression algorithm to compress log data. I found one that doesn't use a lot of memory and managed to incorporate it in my CLDC programme (it uses arithmetic compression; I got it from http://www.mandala.co.uk/biac/index.html ).
    The problem is: although the source is very compact and fairly simple, it is still very slow on my device (they claim it can do about 700 lines of Java code/sec, but I think it is a bit more).
    Solution: optimize, but how, if you don't know where to start? Well, the algorithm is simple enough to have a look at and optimize by trial and error... luckily I remembered some things I learned in a course about image manipulation (in C).
    So, next is a list of simple things you can do to optimize your code in sections where every byte of code executed is crucial (a small sketch illustrating a few of these follows below):
    - Try to find out what code is executed often or takes a lot of time; look at loops and calculations.
    - Instead of x*2 use x<<1, instead of x/2 use x>>1, etc.
    - Try to manually unroll loops (you might first check how often the loop runs to decide how far to unroll it).
    - If the same calculation is used more than once, calculate the value once and reuse it in the other calculations instead of recalculating it over and over. If those values are static, declare them static.
    - If possible, manually inline short functions. You might even use a preprocessor (in combination with Ant or something) and use macros.
    - Simply try to do more with less code.
    - If you often divide by a fixed number, try multiplying by the inverse instead (this is not always possible in J2ME because only integer math is available). This is faster on most devices.
    - Use arrays if you have to access large amounts of data. Vectors and such are much slower.
    - Use a compiler and an obfuscator to optimize and shrink your code.
    Most of them are very obvious, but often you just forget about them.
    Well, all in all, my compression algorithm got a fair bit faster, but it is still too slow. I guess I'm going to tweak it a little more ;)
    Hope you guys can do something with it. If you have some more tips, just let me know!
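    To make a couple of the items above concrete, here is a small sketch (plain Java, my own illustration rather than the poster's code) of a shift instead of a multiply, a calculation hoisted out of a loop, and a manually unrolled loop:
    public class MicroOptDemo {
        /* sums an int array four elements at a time (manual unrolling) plus a scaled offset */
        static int sumScaled(int[] data, int offset) {
            int scaledOffset = offset << 1;                  // offset * 2, done once with a shift and hoisted out of the loop
            int sum = 0;
            int i = 0;
            int limit = data.length - (data.length & 3);     // largest multiple of 4 <= length
            for (; i < limit; i += 4) {                      // unrolled by 4: fewer loop-condition checks
                sum += data[i] + data[i + 1] + data[i + 2] + data[i + 3];
            }
            for (; i < data.length; i++) {                   // handle the remaining 0-3 elements
                sum += data[i];
            }
            return sum + scaledOffset;
        }
        public static void main(String[] args) {
            int[] data = {1, 2, 3, 4, 5, 6, 7};
            System.out.println(sumScaled(data, 10));         // (1+...+7) + 20 = 48
        }
    }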


  • Memory and performance crashing 8.2.1

    Hi there,
    On both macOS and Windows XP, the memory and performance profiler crashes the complete LV dev environment instantly
    once the start button is clicked. I updated to LV 8.2.1f4; same thing.
    Also, recently, after more than 5 years of developing my application and having it in use during this period, 24 hours per day, with uptimes of 1-2 months, it now crashes after a certain number of measurement iterations (Windows XP apology message).
    The WinXP event viewer states that my LV app crashed in a third-party ActiveX module.
    The ActiveX component has been used unchanged for more than 2 years, so I doubt it is the problem, unless some Windows/LV update has caused it to hang.
    I have carefully examined my code for unclosed refs, file pointers, and notification queues that never get emptied/read,
    and I believe there are none, but of course I can never rule that out; the likelihood is very small.
    If I watch the memory use of the application in Windows Task Manager, I do see the memory increase very slowly, but that could also just be more data accumulating, although it is not stored in arrays but written immediately to file.
    Any thoughts welcome.
    Michael Proctor
    Stanford University

    Michael,
    When LabVIEW crashes from running the performance and memory profiler, do you get the CPP error dialog the next time you start LabVIEW?  If so, I would suggest submitting an error report with it.  This will send us the error log file and help us determine what may have happened.
    As for your application, you say this has not been changed and was running for five years, correct?  Has anything else changed on the computer?  Did you recently add new software to the system? 
    How long does it take before the program crashes? Is the time it takes reproducible? How long have you been seeing this behavior? Did the crashes just suddenly start?
    In certain cases, shared resource conflicts or corrupt files could cause problems like this.  For XP, this can often be solved by doing a system restore to the time before the crashes started happening.
    Regards,
    Craig D
    Applications Engineer
    National Instruments

  • Storing signature in Printer memory and accessing it in the sapscript

    Hi Gurus,
    I have a requirement to upload signatures to the printer memory (it's an HP 4250 printer used for printing checks) and access them while printing the checks.
    Please let me know the steps to achieve the following:
    1) Uploading the signatures to the printer, and in which format - how should this be done?
    2) How to access the signatures in the printer from the SAPscript form.
    Thanks
    Rupendra

    The only way that I know of to store a signature directly on a printer is through the use of an add-on SIMM card. Please look at OSS Note 18045; it references a company that will create a signature board for your printer. One of the problems with this is that a new card has to be created if signatures change, so we no longer use this method; instead we use either RSTXLDMC to upload TIF images, or SE78.
    Thanks

  • Running out of cursors, memory and performance

    1. Using the XSQL servlet 0.9.9.1 command line with a query that involves cursor expressions, grouping and a join, I run into the following error:
    <ERROR>oracle.xml.sql.OracleXMLSQLException: ORA-00604: error occurred at recursive SQL level 1
    ORA-01000: maximum open cursors exceeded
    </ERROR>
    Using a quick program with OracleXMLQuery from XSU 1.2, I get the same error with the same query.
    The query I'm testing is supposed to return 1 million+ rows; only 300 or so can be returned if I specify maxrows to be 300. Anything beyond that gives me the cursor error.
    SQL*Plus handles the query just fine.
    2. If I take away the cursor expression in the query and remove the maxrows spec, I get a Java out-of-memory error.
    3. I can't improve performance by using an OracleConnection object while also using cursor expressions with XSU; I get a "protocol error" instead.
    So the questions are:
    1. How can I resolve the cursor issue for the result set size I'm trying to retrieve?
    2. How can I resolve the memory issue for the result set size I'm trying to retrieve? Since XSU tries to load the entire result set into memory to build the DOM tree, is it possible to stream it out so that it doesn't have to be capped by memory limitations? If so, do you have sample code I can take a look at?
    3. Is it possible to tune the Oracle JDBC driver in conjunction with XSU?
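    On question 2, a hedged sketch of the streaming idea using plain JDBC rather than XSU (since XSU itself builds the whole DOM): set a modest fetch size and write each row out as XML as it arrives, so memory stays bounded by one fetch batch. The connection details, the query and the element names are placeholders.
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;
    public class StreamingXmlDump {
        /* writes the result set as flat XML row by row; only one fetch batch is held in memory */
        static void dump(Connection conn, String sql, Writer out) throws Exception {
            Statement stmt = conn.createStatement();
            stmt.setFetchSize(500);                          // fetch in batches instead of building everything up front
            ResultSet rs = stmt.executeQuery(sql);
            ResultSetMetaData md = rs.getMetaData();
            out.write("<ROWSET>\n");
            while (rs.next()) {
                out.write("  <ROW>");
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    String value = rs.getString(i);          // real code would also escape XML special characters
                    out.write("<" + md.getColumnName(i) + ">" + (value == null ? "" : value)
                            + "</" + md.getColumnName(i) + ">");
                }
                out.write("</ROW>\n");
            }
            out.write("</ROWSET>\n");
            out.flush();
            rs.close();
            stmt.close();
        }
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");  // placeholder connection
            dump(conn, "select * from a", new OutputStreamWriter(System.out));
            conn.close();
        }
    }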

    We are encountering the same problem. We are just inserting via HTTP, and no matter how we do things we end up running out of cursors. We are using the production version of the servlet. Have you been able to resolve your problem, and do you know if there is a fix? Thanks for any information.

  • Infocube line item dimension and performance optimization

    Hi,
    I remodelled an InfoCube, and the line item dimension contains only one characteristic, set as a line item dimension.
    Previously the dimension had one characteristic, but it wasn't set as a line item dimension.
    When I checked SAP_INFOCUBE_DESIGNS from SE38, it looked OK:
    /SAP/CUBE   /SAP/CUBE3   rows:        8663  ratio:          3  %
    After setting it as a line item dimension, the row count is now negative and it is showing red, which means there is a problem with the dimension:
    /SAP/CUBE   /SAP/CUBE3   rows:          1-   ratio:          0  %
    Is this a performance problem, since it is showing red?
    Thanks

    Hi,
    No, it's not a performance issue.
    For a dimension to be a line item dimension, the dimension size shouldn't be more than 20% of the fact table size.
    When a dimension is set as a line item dimension, the corresponding SID is placed in the fact table instead of a DIM ID.
    That may be the reason why, when your dimension was not a line item dimension, it showed the number of rows, and when it was made a line item dimension, it shows no rows and the ratio is also null.
    Hope this is clear for you.
    Regards
    Ramsunder

  • Memory and Performance, and Arrays of Objects.

    I have a class called ChannelObject. The class has 15 to 20 primitive fields. Each object of this class represents data read in from a file or received over a network. I don't know how many of these objects will be created; it could be anywhere between 0 and 5 million.
    Which would be a better way to store these objects?
    1. An ArrayList (since it's dynamic and I can add to it as data arrives, though I've read that an ArrayList isn't ideal when you're adding many objects of the same type), or
    2. A very large array (that could be mostly empty and wasted space), or
    3. Another option?
    I am concerned about the speed and memory of this program since it may have to process a great deal of data.
    Thank you in advance!

    My ChannelObject Class has several primitive types.
    long time;
    float elevation;
    float meas1;
    float meas2;
    float meas3;
    float meas4;
    boolean statusFlag;
    ... There are about 10 more variables.
    int crc;
    The constructor for my class takes a DataInputStream, checks the CRC to verify the data, and then breaks the stream into the primitive fields.
    Once I have read in the whole file or collected a day or more of data from the network, I want to scroll through the data and modify some of it using a Kalman filter and another smoothing filter. I receive a new data point once a second. This package is a building block for several other programs that will use it. Some of those programs will want to use a day's worth of data and some will want 3 months.
    The data will be in order of time (I hope), and there is the possibility that I will have no data, either from a network outage or because the site that sends us the data crashes.
    When scrolling through the data I will start at the first time (I'm assuming that's the first object in my list) and, using the initial states of my Kalman filter, I will modify meas1 and meas2. Now that I have data, I will update the constraints of the filter using this time, move on to the next time, and repeat the process. Once I finish this run, I will have to do another run that uses my smoothing filter and modifies meas3 and meas4. Both filters are dependent on the time/order of the data.
    I hope I explained this well enough.
    Thanks
    PS: If anyone knows of a Kalman filter class that's already been written, I'd appreciate not having to migrate my C version. Thanks again!
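    Since the thread does not include an answer here, a hedged sketch of one common option: instead of millions of small ChannelObject instances, store each field in a parallel primitive array (a "structure of arrays") that grows by doubling. Only a few of the fields listed above are shown; the class name ChannelStore and everything else is illustrative.
    /* sketch only - avoids per-object overhead for millions of records while staying append-friendly */
    public class ChannelStore {
        private long[] time = new long[1024];
        private float[] meas1 = new float[1024];
        private float[] meas2 = new float[1024];
        private int size = 0;
        public void add(long t, float m1, float m2) {
            if (size == time.length) {
                grow();                                  // double capacity when full
            }
            time[size] = t;
            meas1[size] = m1;
            meas2[size] = m2;
            size++;
        }
        private void grow() {
            int newCap = time.length * 2;
            long[] nt = new long[newCap];
            float[] n1 = new float[newCap];
            float[] n2 = new float[newCap];
            System.arraycopy(time, 0, nt, 0, size);
            System.arraycopy(meas1, 0, n1, 0, size);
            System.arraycopy(meas2, 0, n2, 0, size);
            time = nt;
            meas1 = n1;
            meas2 = n2;
        }
        public int size() { return size; }
        public long timeAt(int i) { return time[i]; }
        public float meas1At(int i) { return meas1[i]; }
        public static void main(String[] args) {
            ChannelStore store = new ChannelStore();
            store.add(1000L, 1.5f, 2.5f);                // one record per second, appended in time order
            store.add(1001L, 1.6f, 2.4f);
            System.out.println(store.size() + " records, first time = " + store.timeAt(0));
        }
    }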

  • Printer memory error

    When I try to print an XML Publisher report directly to a printer from the EBusiness Suite, I get the following error:
    PDF Status page
    PDF file not printed. 128 MB of memory is required to enable direct PDF printing.
    Does this mean that I need to increase the memory in my printer? Are printer specifications for XML Publisher documented anywhere?

    Yes,
    You need to increase your printer memory to at least 128 MB,
    because direct PDF printing requires a minimum of 128 MB.
    Thanks
    Ravi

  • Coherence Best Practices and Performance

    I'm starting to use Coherence and I'd like to know if someone could point me to some documentation on best practices and performance optimizations when using it.
    BTW, I haven't had the time to go through the entire Oracle documentation.
    Regards

    Hi
    If you are new to Coherence (or even if you are not that new), one of the best things you can do is read this book: http://www.packtpub.com/oracle-coherence-35/book (I know it says Coherence 3.5 and we are currently on 3.7, but it is still very relevant).
    You don't need to go through all the documentation but at least try the introductions and try out some of the examples. You need to know the basics otherwise it makes it harder for people to either understand what you want or give you detailed enough answers to questions.
    For performance optimization it depends a lot on your use cases and what you are doing; there are a number of things you can do with Coherence to help performance, but as with anything there are trade-offs. Coherence on the server side is a Java process, and when tuning or sorting out performance issues I spend a lot of time with the usual Java tools such as VisualVM (or JConsole), tuning GC, and looking at thread dumps and stack traces.
    Finally, there are plenty of people on these forums happy to answer your questions in return for a few forum points, so just ask.
    JK

  • Printer memory question

    I purchased a Color LaserJet MFP M177fw the other day. It is supposed to have 123 MB of RAM. In the setup there are only options for 16 or 32 MB. Where is the rest of the RAM? The reason I ask is that during printing the printer prints 3 pages, stops for about 15 seconds, then prints 2 more, and so on until done. I am using Windows 7 with 8 meg of RAM. Just wondering if I am missing something.

    Hi @bill-wgm ,
    I see by your post that you want to know how to get more memory. I can help you.
    Please take a look at these two suggestions to try.
    Check in the printer drivers to see if you have the option for this model to increase the printer memory.
    Go to start, devices and printers, right click your printer, left click printer properties, click on Device Settings tab, go down to Installable options, select Printer Memory and from the drop down see what choices you have.
    You can also turn off the Archive Printing on the printer.
    Go to setup, services, archive printing, turn it off.
    If you need further assistance, just let me know.
    Have a nice day!
    Thank You.
    Gemini02
    I work on behalf of HP

  • CS3 Optimize Rendering For Memory vs Performance

    Ok, the description of this option is pretty vague:
    "By default, Adobe Premiere Pro renders video using the maximum number of available processors, up to 16. However, some sequences, such as those containing high-resolution source video or still images, require large amounts of memory for the simultaneous rendering of multiple frames. These can force Adobe Premiere Pro to abort rendering and to give a Low Memory Warning alert. In these cases, you can maximize the available memory by changing the rendering optimization preference from Performance to Memory. Change this preference back to Performance when rendering no longer requires memory optimization."
    Now what I want to know is: which will give the fastest rendering times,
    Memory or Performance? I have 4 GB of memory and have never gotten the above error. But still, I would like to know which I should use.

    Nicholas,
    You don't say how many processors you have, but the rule of thumb I've heard is that you should have 2GB of memory installed for each processor to achieve best performance. Actually, that recommendation comes from Nucleo Pro, an After Effects plug-in for advanced multi-processor rendering, so it may or may not apply in your case.

  • Should I Optimize for "Memory" or "Performance" in Preferences?

    I've been rendering the timeline prior to export and finding that it renders in just over an hour for a 30-minute project. Then, almost miraculously, the MPEG2-DVD export only takes about two hours or so. This is with "Maximum Render Quality" selected in the Sequence Settings. When this option is selected, a pop-up warns that it is "highly recommended" to set "Optimize for Memory" in Preferences>General. I did this, and my timeline render time increased dramatically (I estimate it tripled, since I stopped it after it ran for half an hour and it said there were still about two hours to go). So I am preferring the "Performance" setting for rendering the timeline.
    But export may be a different matter. I'd rather not experiment with this if I don't have to, so I'm asking if anyone knows if the Preferences>General should be set to "Optimize for Memory" for export since that may be different than rendering the timeline for some important reasons.
    This question is really about time and quality in the final MPEG2-DVD. Are either affected, one way or the other, by the various options for settings in both the timeline render and the export encode? In the past, I've always used Max Render Quality with Optimize set for Performance and never had any issues. This latest discovery of reducing my export time (maybe in the range of 80%+) by rendering the timeline first is tempting to continue since the final MPEG2-DVD quality appears identical to exporting without first rendering the timeline. I did do a test today exporting without rendering the timeline first (after deleting all the preview files) and that export took 4-1/4 hours, a net loss of about an hour.
    Thanks, everyone.
    Update on statement in paragraph one. Since writing this, I exported after deleting the preview files and using Optimize for Memory in Preferences>General. Total export time was 4:15.

    When it comes to exporting, the type of encoding you use greatly affects how much time it takes to render the file. For example, I recently tried to export a 12-minute file. It takes me about 45 minutes in AVI format but over 8 hours in FLV format. (FLV is a poor example, but nonetheless the point can be made from this.)
    When it comes to optimizing for memory vs performance... it all depends on what you have available on your computer. If your memory is in the range of, say, 2-4 GB and you're using a Windows 7 or Vista OS, it's probably in your best interest to optimize for memory. This allows a machine with less memory to render much more smoothly than it would if it were trying to render based on a performance-oriented setting.
    Sometimes what happens as a result of the performance setting is that the program tries to render the video much quicker than the memory your computer can allocate can tolerate. Try it out; it might help with some of the "skipping over frame" errors.
    Cheers,
    -MBTV

  • How can I optimize my hard disk drive usage and performance in Windows 8 or Windows 7?

    Question: How can I optimize my hard disk drive usage and performance in Windows 8 or Windows 7?
    Answer: There are a few simple steps you can take to ensure your hard disk drive is used optimally.
    Use Toshiba HDD Protection
    Many Toshiba laptops come with a program called Toshiba HDD Protection pre-installed. This program helps to protect your hard disk drive from being damaged due to falls or impacts. By default, it should already be enabled. You might be tempted to lower the detection levels in this application, but doing so could cause your hard disk drive to be damaged. Remember that while the application can reduce the chance of damage, you should still avoid allowing the laptop to fall or suffer rapid impacts.
    For more information on this utility, see the following article:
    TOSHIBA HDD Protection
    Optimize the drive
    Windows 8 and Windows 7 optimize hard disk drives automatically through a process called defragmentation. Unless you've disabled this, you don't need to do anything. If you have disabled this and want to run the process, you can still do so.
    In Windows 8, search for "Defrag" at the Windows Start screen and select "Defragment and optimize your drives."
    In Windows 7, search for "Defrag" in the Start Menu's search field and select "Disk defragmenter."
    You can use this tool to optimize your hard disk drives, allowing Windows to find needed files faster.
    Remove items from startup
    Some applications run automatically when Windows starts. This can add additional functionality, but it also decreases the performance of your computer. Sometimes you might want to disable certain programs from starting automatically.
    In Windows 8, search for "Task Manager" at the Start screen. Select the "Startup" tab. Select an application you'd like to disable from starting automatically and then click the "Disable" button in the lower-right.
    In Windows 7, type "msconfig" in the Start Menu's search field and press ENTER. Uncheck the boxes next to applications you'd like to disable from starting automatically.
    You should be sure of the purpose of an application before disabling it from starting automatically. Some applications might be important. If in doubt, you might consider searching on the Web to discover more information about a program. Remember that if you find that you disabled something vital, you can always re-enable it.
    For more information, please see the following video:

