Should I Optimize for "Memory" or "Performance" in Preferences?

I've been rendering the timeline prior to export and finding that it renders in just over an hour for a 30-minute project. Then, almost miraculously, the MPEG2-DVD export takes only about two hours. This is with "Maximum Render Quality" selected in the Sequence Settings. When this option is selected, a pop-up warns that it is "highly recommended" to set "Optimize for Memory" in Preferences > General. I did this, and my timeline render time roughly tripled: I stopped it after half an hour, at which point it reported about two more hours to go. So I am preferring the "Performance" setting for rendering the timeline.
But export may be a different matter. I'd rather not experiment with this if I don't have to, so I'm asking whether anyone knows if Preferences > General should be set to "Optimize for Memory" for export, since export may differ from rendering the timeline in some important way.
This question is really about time and quality in the final MPEG2-DVD. Is either affected, one way or the other, by the various option settings for both the timeline render and the export encode? In the past, I've always used Max Render Quality with Optimize set to Performance and never had any issues. This latest discovery of reducing my export time (roughly by half) by rendering the timeline first is tempting to continue, since the final MPEG2-DVD quality appears identical to exporting without first rendering the timeline. I did a test today exporting without rendering the timeline first (after deleting all the preview files), and that export took 4-1/4 hours; against the roughly one-hour render plus two-hour export, that's a net loss of about an hour.
Thanks, everyone.
Update on the statement in paragraph one: since writing this, I exported after deleting the preview files and with Optimize for Memory set in Preferences > General. Total export time was 4:15.

When it comes to exporting, the type of encoding you use greatly affects how long the file takes to render. For example, I recently tried to export a 12-minute file: it takes me about 45 minutes in AVI format but over 8 hours in FLV format. (FLV is a poor example, but the point can nonetheless be made from it.)
When it comes to optimizing for memory vs. performance, it all depends on what you have available on your computer. If your memory is in the range of, say, 2-4 GB and you're running Windows 7 or Vista, it's probably in your best interest to optimize for memory. This allows a machine with less memory to render much more smoothly than it would with the performance-based setting.
Sometimes what happens with the performance setting is that the program tries to render the video more quickly than the memory your computer can allocate will tolerate. Try it out; it might help with some of the "skipping over frame" errors.
Cheers,
-MBTV

Similar Messages

  • CS3 Optimize Rendering For Memory vs Performance

    Ok, the description of this option is pretty vague:
    "By default, Adobe Premiere Pro renders video using the maximum number of available processors, up to 16. However, some sequences, such as those containing high-resolution source video or still images, require large amounts of memory for the simultaneous rendering of multiple frames. These can force Adobe Premiere Pro to abort rendering and to give a Low Memory Warning alert. In these cases, you can maximize the available memory by changing the rendering optimization preference from Performance to Memory. Change this preference back to Performance when rendering no longer requires memory optimization."
    Now what I want to know is which will give you the fastest rendering times?
Memory or Performance? I have 4 GB of memory and have never gotten the above error. But still, I would like to know which I should use.

    Nicholas,
    You don't say how many processors you have, but the rule of thumb I've heard is that you should have 2GB of memory installed for each processor to achieve best performance. Actually, that recommendation comes from Nucleo Pro, an After Effects plug-in for advanced multi-processor rendering, so it may or may not apply in your case.

  • Which model of memory upgrade should I get for my MacBook Pro early 2011 model? I now have 2 - 2GB modules.

    Which model of memory upgrade should I get for my MacBook Pro early 2011 model? I now have 2 - 2GB modules.

    Installing RAM in a 2011-2012 MacBook Pro
    There is really only one way to install RAM into a 2011-2012 MacBook Pro and while there are hundreds of DIY videos online, I just like this one, found on YouTube, by “macmixing”.
    Note that there is a difference in the RAM that should be used in 2011 and 2012 models.
    2011 models must use:
• 204-pin PC3-10600 (1333 MHz) DDR3 SO-DIMM
    And 2012 models must use:
• 204-pin PC3-12800 (1600 MHz) DDR3 SO-DIMM
    So here's the video:
    Remember that the 13", 15" and 17" 2011 models (early or late) and the 13" and 15" mid-2012 non-Retina models can handle 4GB, 8GB or (unofficially, but, believe me, it works) 16GB of RAM. There are, in my opinion, only two 100% Mac-compatible vendors out there: Crucial and OWC. I really can't recommend any other brands, even though it may work, as these are the only brands that I've personally used in quite a while. Also remember to stay away from any RAM that is a “value” brand - Macs are picky about RAM and often these value RAM modules just don’t work very well.
    Good luck,
    Clinton

• What indexes should be created to improve the performance of the SQL query

    Hello Admins
One of my users is facing a slow performance issue while running the query below. Can someone please guide me? I want to know what indexes should be created to improve the performance of this query, and also what else can be done to achieve the same.
    SQL Query:-
    SELECT UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_NUMBER))),
    CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_NUMBER,
    CGSBI_SHIP_DIST_S_EXTRACT.PO_SHIPMENT_NUMBER,
    CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_SHIP_DIST_NUMBER,
    CGSBI_SHIP_DIST_S_EXTRACT.DISTRIBUTION_DATE,
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_SHIP_DIST_LINE_ID))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PROJECT_ID))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ACCOUNT_DISTRIBUTION_CODE))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ORACLE_ACCOUNT_NUMBER))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.COMPONENT_CODE))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.TRANSACTION_CURRENCY_CODE))),
    CGSBI_SHIP_DIST_S_EXTRACT.ORDER_QUANTITY, UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ORDER_UOM))),
    CGSBI_SHIP_DIST_S_EXTRACT.UNIT_PRICE_TRX_CURRENCY,
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.EXPENSE_TYPE_INDICATOR))),
    CGSBI_SHIP_DIST_S_EXTRACT.SOR_ID,
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_CODE))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_DESC))),
    CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_LEAD_TIME,
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.UNSPSC_CODE))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.BUYER_ID))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.REQUESTOR_ID))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.APPROVER_ID))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SUPPLIER_SITE_ID))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SUPPLIER_GSL_NUMBER))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SHIP_TO_LOCATION_CODE))),
    UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.TASK_ID))),
    (LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_RELEASE_ID)))
    FROM
    CGSBI_SHIP_DIST_S_EXTRACT
    WHERE PO_NUMBER IS NOT NULL;
    I generated the explain plan for this query and found the following:-
    Explain Plan:-
SQL> explain plan for <the same SELECT statement shown above>;
    Explained.
    SQL>
    SQL>
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
Plan hash value: 3891180274

----------------------------------------------------------------------------------------------
| Id  | Operation         | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                           | 77647 |   39M |  2006   (1)| 00:00:25 |
|*  1 | TABLE ACCESS FULL | CGSBI_SHIP_DIST_S_EXTRACT | 77647 |   39M |  2006   (1)| 00:00:25 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   1 - filter("PO_NUMBER" IS NOT NULL)

13 rows selected.
    SQL>
    SQL>
    Kindly suggest on this...
    Thanks & Regards
    -Naveen Gangil
    Oracle DBA

Rafi is correct. Since po_number is the filter column, the only chance you have of using an index to access the table is on that column. However, if there are few (or no) rows with a null po_number, the predicate eliminates almost nothing and you will always get a full table scan. Does the table have a PK (which probably consists of at least po_number and line_number)? If that is the case, po_number can never be null, in which case you are dumping the whole table and no indexing scheme is going to improve the query's performance. You might, repeat might, see a performance improvement if you cleanse the data in the table (to eliminate the need for UPPER(LTRIM(RTRIM()))) before querying it, so that the data does not have to be massaged before being returned.
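To make that last suggestion concrete, here is a minimal sketch of the cleansing idea (an illustration, not a tested fix; it assumes downstream consumers accept the upper-cased, trimmed form, and the full scan itself will remain since the query returns nearly every row):

UPDATE CGSBI_SHIP_DIST_S_EXTRACT
   SET PO_NUMBER = UPPER(LTRIM(RTRIM(PO_NUMBER)))
 WHERE PO_NUMBER <> UPPER(LTRIM(RTRIM(PO_NUMBER)));
COMMIT;

The same rewrite would apply to each of the other UPPER(LTRIM(RTRIM())) columns, either once in place like this or in the process that loads the extract table.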

  • Printing memory and performance optimization

    Hello,
I am using JVM 1.3 for a big Java application.
Print Preview consumes 1.5 MB of the JVM's memory and performance is slow.
Please give your valuable ways to reduce memory usage; any performance improvement will be appreciated.
/* print method in ScrollablePanel extends JPanel */
public int print(Graphics g, PageFormat pf, int pi) throws PrinterException {
    if (pi >= pagecount)
        return Printable.NO_SUCH_PAGE;
    Graphics2D g2 = (Graphics2D) g;
    double pageWidth = pf.getImageableWidth();
    g2.translate(pf.getImageableX(), pf.getImageableY());
    // Print-height manipulation: clip to the vertical slice belonging to page pi.
    g2.setClip(0, (int) startHeight[pi], (int) pageWidth,
               (int) (endHeight[pi] - startHeight[pi]));
    g2.scale(scaleX, scaleX);
    this.print(g2);   // paint the panel into the page graphics
    g2.dispose();
    System.gc();      // hint a collection after each page (original author's choice)
    return PAGE_EXISTS;
}
/* print preview */
private void pagePreview() {
    BufferedImage img = new BufferedImage(m_wPage, m_hPage, BufferedImage.TYPE_INT_ARGB);
    Graphics g = img.getGraphics();
    g.setColor(Color.white);
    g.fillRect(0, 0, m_wPage, m_hPage);
    target.print(g, pageFormat, pageIndex);   // render one page into the image
    pp = new PagePreview(w, h, img);          // pp is a JPanel
    g.dispose();
    img.flush();
    m_preview = new PreviewContainer();       // m_preview is a JPanel
    m_preview.add(pp);
    ps = new JScrollPane(m_preview);
    getContentPane().add(ps, BorderLayout.CENTER);
}
    Best Regards,
    Krish

    Good day,
As I have tried it, there are two ways of doing the print preview.
To handle this problem, add only one page at a time.
To browse through the pages, use the Prev Page / Next Page buttons in the toolbar.
1) BufferedImage - occupies memory.
class PagePreview extends JPanel {
    public void paint(Graphics g) {
        g.setColor(getBackground());
        g.fillRect(0, 0, getWidth(), getHeight());
        g.drawImage(m_img, 0, 0, this);   // blit the pre-rendered page image
        paintBorder(g);
    }
}
This gives better performance, but consumes memory.
2) getPageGraphics in the preview panel. This occupies less memory, but repaints the graphics every time paint(Graphics g) is called.
class PagePreview extends JPanel {
    public void paint(Graphics g) {
        g.setColor(Color.white);
        RepaintManager currentManager = RepaintManager.currentManager(this);
        currentManager.setDoubleBufferingEnabled(false);
        Graphics2D g2 = scrollPanel.getPageGraphics();
        // ... re-render the page into g2 here ...
        currentManager.setDoubleBufferingEnabled(true);
        g2.dispose();
    }
}
This addresses the memory problem, but performance suffers.
Do you have any additional info?
    Good Luck,
    Kind Regards,
    Krish

• Calculating the memory and performance of an Oracle query

    Hi,
I am now developing an application in Java with Oracle as the back end. My application requires a lot of queries to be executed, and the system is getting slow because of them.
So I planned to develop a stand-alone application in Java that shows statistics on memory and performance. For example, if I enter an SQL query in a text box, my stand-alone application should display the processing time required to fetch the values and the memory used by that query.
Can anybody give ideas, suggestions, etc.?
    Thanks in Advance
    Regards,
    Rajkumar

This is now an Oracle question, not a JDBC question. :)
The following are samples of explain plan / autotrace / SQL*Trace.
(You really need to read stuff like Oracle SQL tuning books...)
    SQL> create table a as select object_id, object_name from all_objects
    2 where rownum <= 100;
    Table created.
    SQL> create index a_idx on a(object_id);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'A');
SQL> explain plan for select * from a where object_id = 1;
    Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 3632291705

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    11 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| A     |     1 |    11 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | A_IDX |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   2 - access("OBJECT_ID"=1)
    SQL> set autot on
    SQL> select * from a where object_id = 1;
    no rows selected
    Execution Plan
Plan hash value: 3632291705

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    11 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| A     |     1 |    11 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | A_IDX |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   2 - access("OBJECT_ID"=1)
    Statistics
    1 recursive calls
    0 db block gets
    1 consistent gets
    0 physical reads
    0 redo size
    395 bytes sent via SQL*Net to client
    481 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    SQL> exec dbms_monitor.session_trace_enable(null,null,true,true);
    -- SQL> alter session set events '10046 trace name context forever, level 12';
    -- SQL> alter session set sql_trace = true;
    PL/SQL procedure successfully completed.
    SQL> select * from a where object_id = 1;
    no rows selected
SQL> exec dbms_monitor.session_trace_disable(null, null);
    -- SQL> alter session set events '10046 trace name context off';
    -- SQL> alter session set sql_trace = false;
    PL/SQL procedure successfully completed.
    SQL> show parameter user_dump_dest
/home/oracle/admin/WASDB/udump
    SQL>host
    JOSS:oracle:/home/oracle:!> cd /home/oracle/admin/WASDB/udump
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> ls -lrt
    -rw-r----- 1 oracle dba 2481 Oct 11 16:38 wasdb_ora_21745.trc
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> tkprof wasdb_ora_21745.trc trc.out
    TKPROF: Release 10.2.0.3.0 - Production on Thu Oct 11 16:40:44 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> vi trc.out
    select *
    from
    a where object_id = 1
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          1          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          1          0           0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 55
    Rows Row Source Operation
    0 TABLE ACCESS BY INDEX ROWID A (cr=1 pr=0 pw=0 time=45 us)
    0 INDEX RANGE SCAN A_IDX (cr=1 pr=0 pw=0 time=39 us)(object id 65441)
Elapsed times include waiting on following events:
  Event waited on                            Times Waited  Max. Wait  Total Waited
  -----------------------------------------  ------------  ---------  ------------
  SQL*Net message to client                             1       0.00          0.00
  SQL*Net message from client                           1      25.01         25.01
    Hope this helps

  • Hyper-V Resource Pools for Memory and CPU

    Hi all,
    I'm trying to understand the concepts and details of resource pools in Hyper-V in Windows Server 2012. It seems as if there is almost no documentation on all that. Perhaps somebody can support me here, maybe I've not seen some docs yet.
    So far, I learned that resource pools in their current implementation serve mainly for metering purposes. You can create pools per tenant and then group VM resources into those pools to facilitate resource metering per tenant. That is, you enable metering
    once per pool and get all the data necessary to bill that one customer for all their resources (without metering individual VMs). Is that correct?
    Furthermore, it seems to me that an ethernet pool goes one step further by providing an abstraction level for virtual switches. As far as I've understood you can add multiple vSwitches to a pool and then connect a VM to the pool. Hyper-V then decides which
    actual switch to use. This may be handy in a multi-host environment if vSwitches on different hosts use different names although they connect to the same network. Is that correct?
    So - talking about actually managing that stuff I've learned how to create a pool and how to add VHD locations and virtual switches to a pool. Enabling resource metering for a pool then collects usage data from all the resources inside that pool.
    But now: I can create a pool for memory and a pool for CPU. But I cannot add resources to those. Neither can I add a complete VM to a pool. Now I'm launching a VM that belongs to a customer whose resources I'm metering. How will Hyper-V know that it's
    supposed to collect data on CPU and memory usage for that VM?
    Am I missing something here? Or is pool-based metering only good for ethernet and VHD resources, and CPU and memory still need to be metered per VM?
    Thanks for clarification,
    Nils
    Nils Kaczenski
    MVP Directory Services
    Hannover, Germany

    Thank you for the links. I already knew those, and unfortunately they are not matching my question. Two of them are about Windows Server 2008/R2, and one only lists a WMI interface. What I'm after is a new feature in Windows Server 2012, and I need conceptional
    information.
    Thanks for the research anyway. I appreciate that a lot!
    In the meantime I've gotten quite far in my own research. See my entry above of January 7th. Some additions:
    In Windows Server 2012, Hyper-V resource pools are mainly for metering purposes. You cannot compare them to resource pools in VMware.
    A resource pool in Hyper-V (2012) facilitates resource metering and billing for VM usage especially in hosting scenarios. You can either measure resource usage for single VMs, or you can group existing resources (such as CPU power, RAM, virtual hard disk
    storage, Ethernet traffic) into pools. Those pools will mostly be assigned to one customer each. That way you can bill the customer for their resource usage in a given time period by just querying the customer's pool.
    Metering only collects aggregated data with one value per resource (i.e. overall CPU usage, maximum VHD storage, summed Ethernet traffic and so on). You can control the time period by explicitly resetting the counter at any given time (a day, a week, a
    month or what you like).
    There is no detailed data. The aggregate values serve as a basis for billing, not as monitoring data. If you need detailed monitoring data use Performance Monitor.
    There is currently only one type of resource pool that adds an abstraction layer to a virtualization farm, and that is the Ethernet type. You can use that type for metering, but you can also use it to group a number of virtual switches (that connect to
    the same network segment) and then a VM connected to that pool will automatically use an appropriate virtual switch from the pool. You need no longer worry about virtual switch names across multiple hosts as long as all equivalent virtual switches are
    added to the pool.
    While you can manage two types of pool resources in the GUI (VHD pools and Ethernet pools) you should only manage resource pools via PowerShell. Only there will you be able to control what happens. And only PowerShell provides a means to start, stop, and
    reset metering and query metering data.
The process to use resource pools in Hyper-V (2012) in short (a consolidated sketch follows the list):
1. Create a new pool via PowerShell (New-VMResourcePool). (In the case of a VHD pool you must specify the VHD storage paths to add to the pool at the moment you create the pool.)
2. In the case of an Ethernet pool, add existing virtual switches to the pool (Add-VMSwitch).
3. Reconfigure the existing VMs that you want to measure so that they use resources from the pool. The PowerShell Set-VM* commands accept a parameter -ResourcePoolName to do that. Example: Set-VMMemory -VMName APP-02 -ResourcePoolName MyPool1
4. Start measuring with Enable-VMResourceMetering.
5. Query collected data as often as you need with Measure-VMResourcePool. Note that you should specify the pool resource type in the command to get reliable data (see my post above, Jan 7th).
6. When a metering period (such as a week or a month) has passed, reset the counter to zero with Reset-VMResourceMetering.
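Pulling those steps together for a memory pool, a minimal PowerShell sketch (the pool and VM names are the examples from above; treat this as a starting point to adapt, not a verified script):

New-VMResourcePool -Name MyPool1 -ResourcePoolType Memory
Set-VMMemory -VMName APP-02 -ResourcePoolName MyPool1
Enable-VMResourceMetering -ResourcePoolName MyPool1 -ResourcePoolType Memory
Measure-VMResourcePool -Name MyPool1 -ResourcePoolType Memory
# ...at the end of the billing period:
Reset-VMResourceMetering -ResourcePoolName MyPool1 -ResourcePoolType Memory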
    Hope that helps. I consider this the answer to my own question. ;)
    Here's some links I collected:
    http://itproctology.blogspot.ca/2012/12/hyper-v-resource-pool-introduction.html
    http://www.ms4u.info/2012/12/configure-ethernet-resource-pool-in.html
    http://blogs.technet.com/b/virtualization/archive/2012/08/16/introduction-to-resource-metering.aspx
    http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/thread/1ce4e2b2-8fdd-4f16-8ab6-e1e1da6d07e3
    Best wishes, Nils
    Nils Kaczenski
    MVP Directory Services
    Hannover, Germany

• Explain Plan and other methods (tools) to improve performance

    Hi
How can I use Explain Plan and other methods to improve performance?
Where can I find a tutorial about it?
    thank you in advance

Hi
> How can I use Explain Plan and other methods to improve performance?
    Internally there are potentially several hundred 'procedures' that can be assembled in different ways to access data. For example, when getting one row from a table, you could use an index or a full table scan.
    Explain Plan shows the [proposed] access path, or complete list of the procedures, in the order called, to do what the SQL statement is requesting.
    The objective with Explain Plan is to review the proposed access path and determine whether alternates, through the use of hints or statistics or indexes or materialized views, might be 'better'.
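In practice the loop looks like this (a minimal sketch; the table, index, and hint names are invented for illustration):

SQL> explain plan for
  2  select /*+ INDEX(e emp_name_idx) */ * from emp e where e.ename = 'SMITH';

SQL> select * from table(dbms_xplan.display);

Run the statement through explain plan once without the hint and once with it, then compare the two access paths and their costs.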
    You often use Wait analysis, through StatsPack, AWR/ADDM, TKProf, Trace, etc. to determine which SQL statement is likely causing a performance issue.
> Where can I find a tutorial about it?
Ah ... the $64K question. If we only knew ...
    There are so many variables involved, that most tutorials are nearly useless. The common approach therefore is to read - a lot. And build up your own 'interpretation' of the reading.
    Personal suggestion is to read (in order)
    1) Oracle's Database Concepts manual (described some of 'how' this is happening)
    2) Oracle's Performance Tuning manual (describes more of 'how' as related to performance and also describes some of the approaches)
    3) Tom Kyte's latest book (has a lot of demos and 'proofs' about how specific things work)
    4) Don Burleson's Statspack book (shows how to set up and do some basic interpretation)
    5) Jonathan's book (how the optimizer works - tough reading, though)
6) any book by the Oak Table (http://oaktable.net)
    Beyond that is any book that contains the words 'Oracle' and 'Performance' in the title or description. BUT ... when reading, use truck-loads, not just grains, of salt.
Verify everything. I have seen an incredible number of mistakes ... I make 'em myself all the time, so I tend to recognize them when I see them. Believe nothing unless you have proven it for yourself. Even then, realize there are exceptions and boundary conditions and bugs and patches and statistics and CPU and memory and disk speed issues that will change what you have proven.
    It's not hopeless. But it is a lot of work and effort. And well rewarded, if you decide to get serious.

  • ORACLE OBJECTS FOR OLE(OO4O) PERFORMANCE TUNING

Product: ORACLE SERVER
Date written: 1997-10-10
Whereas ODBC queries data block by block, OLE fetches the entire result at once and places it in temporary storage space.
So, to tune it, you must modify the parameters in
c:/windows/oraole.ini on Windows 3.1, or under
HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\OO4O on Windows 95.
If the file above does not exist, all variables are set to their defaults, so read the installed Help carefully before applying any changes.
FetchLimit is the parameter with the biggest impact; in general, the larger this value, the faster the fetch. The related material follows.
    Tuning and Customization
    A number of working parameters of Oracle Objects for OLE can be
    customized. Access to these parameters is provided through the Oracle
    initialization file, by default named ORAOLE.INI.
    Each entry currently available in that file is described below. The location
    of the ORAOLE.INI file is specified by the ORAOLE environment variable.
    Note that this variable should specify a full pathname to the Oracle
    initialization file, which is not necessarily named ORAOLE.INI. If this
    environment variable is not set, or does not specify a valid file entry, then
    Oracle Objects for OLE looks for a file named ORAOLE.INI in the Windows
    directory. If this file does not exist, all of the default values
    listed will apply.
    You can customize the following sections of the ORAOLE.INI file:
    [Cache Parameters]
    A cache consisting of temporary data files is created to manage amounts
    of data too large to be maintained exclusively in memory. This cache
    is needed primarily for dynaset objects, where, for example, a single
    LONG RAW column can contain more data than exists in physical
(and virtual) memory.
    The default values have been chosen for simple test cases, running on a machine
    with limited Windows resources. Tuning with respect to your machine and
    applications is recommended.
    Note that the values specified below are for a single cache, and that a separate
    cache is allocated for each object that requires one. For example, if
    your application contains three dynaset objects, three independent data
    caches are constructed, each using resources as described below.
    SliceSize = 256 (default)
    This entry specifies the minimum number of bytes used to store a piece
    of data in the cache. Items smaller than this value are allocated the
    full SliceSize bytes for storage; items larger than this value are
    allocated an integral multiple of this space value. An example of an
    item to be stored is a field value of a dynaset.
    PerBlock = 16 (default)
    This entry specifies the number of Slices (described in the preceding
    entry) that are stored in a single block. A block is the minimum unit
    of memory or disk allocation used within the cache. Blocks are read
    from and written to the disk cache temporary file in their entirety. Assuming a SliceSize of 256 and a PerBlock value of 16, then the block
    size is 256 * 16 = 4096 bytes.
    CacheBlocks = 20 (default)
    This entry specifies the maximum number of blocks held in memory at any
    one time. As data is added to the cache, the number of used blocks
    grows until the value of CacheBlocks is reached. Previous blocks are
    swapped from memory to the cache temporary disk file to make room for
    more blocks. The blocks are swapped based upon recent usage. The total
    amount of memory used by the cache is calculated as the product of
    (SliceSize * PerBlock * CacheBlocks).
    Recommended Values: You may need to experiment to find optimal cache parameter
    values for your applications and machine environment. Here are some guidelines
    to keep in mind when selecting different values:
    The larger the (SliceSize * PerBlock) value, the more disk I/O is
    required for swapping individual blocks. The smaller the (SliceSize * PerBlock) value, the
    more likely it is that blocks will need to be swapped to or from disk.
    The larger the CacheBlocks value, the more memory is required, but the
    less likely it is that Swapping will be required.
    A reasonable experiment for determining optimal performance might
    proceed as follows:
    Keep the SliceSize >= 128 and vary PerBlock to give a range of block
    sizes from 1K through 8K.
    Vary the CacheBlocks value based upon available memory. Set it high
    enough to avoid disk I/O, but not so high that Windows begins swapping
    memory to disk.
    Gradually decrease the CacheBlocks value until performance degrades or
    you are satisfied with the memory usage. If performance drops off,
    increase the CacheBlocks value once again as needed to restore
    performance.
    [Fetch Parameters]
    FetchLimit = 20 (default)
    This entry specifies the number of elements of the array into which data
    is fetched from Oracle. If you change this value, all fetched values
    are immediately placed into the cache, and all data is retrieved from
    the cache. Therefore, you should create cache parameters such that all
    of the data in the fetch arrays can fit into cache memory. Otherwise,
    inefficiencies may result.
Increasing the FetchLimit value reduces the number of fetch calls
to the database and possibly the amount of network traffic.
    However, with each fetch, more rows must be processed before user
    operations can be performed. Increasing the FetchLimit increases
    memory requirements as well.
    FetchSize = 4096 (default)
    This entry specifies the size, in bytes, of the buffer (string) used for
    retrieved data. This buffer is used whenever a long or long raw column
    is initially retrieved.
    [General]
    TempFileDirectory = [Path]
    This entry provides one method for specifying disk drive and directory
    location for the temporary cache files. The files are created in the
    first legal directory path given by:
    1.The drive and directory specified by the TMP environment variable
    (this method takes precedence over all others);
    2.The drive and directory specified by this entry (TempFileDirectory)
    in the [general] section of the ORAOLE.INI file;
    3.The drive and directory specified by the TEMP environment variable; or
    4.The current working drive and directory.
    HelpFile = [Path and File Name]
    This entry specifies the full path (drive/path/filename) of the Oracle Objects
    for OLE help file as needed by the Oracle Data Control. If this entry cannot
    be located, the file ORACLEO.HLP is assumed to be in the directory where
    ORADC.VBX is located
    (normally \WINDOWS\SYSTEM).
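Putting the documented entries together with their default values, a complete ORAOLE.INI might look like this (the two paths in the [General] section are examples only, not defaults):

[Cache Parameters]
SliceSize=256
PerBlock=16
CacheBlocks=20

[Fetch Parameters]
FetchLimit=20
FetchSize=4096

[General]
TempFileDirectory=C:\TEMP
HelpFile=C:\WINDOWS\ORACLEO.HLP

With these defaults, each cache holds at most SliceSize * PerBlock * CacheBlocks = 256 * 16 * 20 = 81,920 bytes (80 KB) in memory before blocks start swapping to the temporary file.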


  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
We are running an APO DP process chain with parallel processing in our company. We are experiencing some issues regarding the run time of the process chain and need your help on the points below:
    - What are the ways we can optimize process chain run time.
    - Special points we need to take care of in case of parallel processing profiles used in process chain.
    - Any specific sequence to be followed for different processes in process chain - if there is some best practice followed.
    - Any notes suggesting ways to improve system performance for APO version 7 with different enhancement packs 1 and 2.
    Any help will be really appreciated.
    Regards

Hi Neelesh,
There are many ways to optimize the performance of process chains (background jobs) in an APO system.
First, I would recommend you identify the pain areas: the steps that complete with the longest runtimes. Each of those steps then has its own approach to decreasing the runtime.
You may end up with steps like InfoPackage executions, DTPs, DP mass-processing jobs, etc. which are running with long runtimes. Target each of them differently and find ways to optimize. At the same time, the approach you follow should be feasible from a Basis perspective (system load and utilization) as well.
As for parallel processing, you can use it for many different jobs: loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY, etc.
    Check the below link for more info
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj

• Unaccounted-for memory is too big and leads to a native memory issue

In our server, after running for a month, unaccounted-for memory increases to 500 MB or more, and native memory grows large, eventually leading to an out-of-memory error. Below is one sample:
    j2eeapp:jhf1wl101:root > jrcmd 27398 print_memusage
    27398:
    [JRockit] memtrace is collecting data...
    [JRockit] *** 19th memory utilization report
    (all numbers are in kbytes)
Total mapped                           5100644
  Total in-use                         4038952
    executable                           75968
      java code                          23680  31.2%
        used                             21833  92.2%
    shared modules (exec+ro+rw)           4858
    guards                                5928
    readonly                                 0
    rw-memory                          3986664
      Java-heap                        3145728  78.9%
      Stacks                            126050   3.2%
      Native-memory                     714885  17.9%
        java-heap-overhead               99596
        codegen memory                    1088
        classes                         166656  23.3%
          method bytecode                13743
          method structs                 21987  (#281446)
          constantpool                   72105
          classblock                      7711
          class                          11900  (#21166)
          other classdata                22950
          overhead                         114
        threads                            960   0.1%
        malloc:ed memory                 81024  11.3%
          codeinfo                        4815
          codeinfotrees                   2614
          exceptiontables                 1790
          metainfo/livemaptable          24519
          codeblock structs                 20
          constants                         33
          livemap global tables           8684
          callprof cache                     0
          paraminfo                        255  (#2929)
          strings                        24040  (#345745)
          strings(jstring)                   0
          typegraph                      10132
          interface implementor list       260
          thread contexts                  598
          jar/zip memory                 12204
          native handle memory             486
        unaccounted for memory          366520  51.3%  4.52  <--- !!!

    >
    "No one is perfect - not even Mac OS X. If a program manages to lock up central processes, a restart will be needed."
    That first part of what you said there is indeed true.
    But a modern OS is designed to keep processes separated. An application crash
    SHOULD NOT
    require a complete shut-down and reboot of your system. Yes, the Log-Out/Log back in process might take awhile if you have a particularly bad application crunch, because the OS has detected that something went screwy and is checking to see that the user account is healthy enough to run, and may be fixing some things in the process.
    I've run all kinds of not-quite-polished software over the years since my adoption of OS X, and no matter how badly some of it performed nothing ever required me to reboot my system to restore operating health. Now, that's not to say I don't run system maintenance utilities which, after performing their routines, suggest or require a shutdown restart. I usually only do this if I've decided to delete the offending application from my system. (Sidebar: How diligent are you about maintaining the general health of your system through the regular practice of running preventative maintenance routines? Ramon may be along shortly to lay the boiler-plate on you about this :))
    Does Photoshop dig its hooks so deeply into the root level of the OS that it could cause the kind of problems you've had? I don't know for sure, but I'd guess that it's possible. And I'd suggest that, if wonky Photoshop behavior can be so bad that it
    requires
    the user to restart in order to regain operational health, then something is VERY wrong. And I'd go even further out on a limb to guess that this is a fault in Adobe's Photoshop coding, and not in Apple's OS coding.

  • Tools for measuring BDB performance

    Hi all,
Are there any tools available for benchmarking the performance of BDB?
Also, how do I use the test suite that comes with the BDB distribution?
    Thanks,
    david.

    Hello,
    MVCC alleviates reader/writer contention at the cost of additional memory.
    When MVCC is not used, readers trying to read data on page X might block for a writer modifying content on page X. With MVCC, the readers will not block. The fact that readers are not blocked will result in better performance (throughput and/or response time).
    So, your test should have some "hot spots" which would create this contention. You should have transaction mix of readers and writers. Without MVCC, you'll see that readers block (resulting in lower throughput). With MVCC, you should see that readers continue, thus getting more transactions per second.
    As long as the rows that you store in Berkeley DB are smaller than a page size, my opinion is the row size doesn't matter (Berkeley DB does page level locking).
    As mentioned above, memory requirements for MVCC are higher, so to get the additional performance benefit, the 4.6.21 run will probably need a bigger cache.
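On the configuration side, here is a minimal C sketch of how MVCC is typically enabled (the environment path, cache size, and file names are placeholders; consult the 4.6.21 documentation for the exact flags for your setup):

#include <db.h>

DB_ENV *env;
DB *db;
DB_TXN *txn;

db_env_create(&env, 0);
/* MVCC keeps page versions in the cache, so size it generously. */
env->set_cachesize(env, 0, 256 * 1024 * 1024, 1);
env->open(env, "/path/to/env",
          DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
          DB_INIT_LOG | DB_INIT_TXN, 0);

db_create(&db, env, 0);
/* DB_MULTIVERSION turns on MVCC for this database. */
db->open(db, NULL, "test.db", NULL, DB_BTREE,
         DB_CREATE | DB_AUTO_COMMIT | DB_MULTIVERSION, 0644);

/* Snapshot-isolation readers no longer block on writers. */
env->txn_begin(env, NULL, &txn, DB_TXN_SNAPSHOT);
/* ... perform reads ... */
txn->commit(txn, 0);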
    That's the general answer.
    This is an interesting exercise. Good luck!
    Warm regards.
    ashok

  • HT1758 Mac OSX has no more space available for memory

    Getting message:  Mac OSX start up disk has no more space available for memory.  Remove files from startup disk.
    How do I solve this problem?

    Empty the Trash if you haven't already done so. If you use iPhoto, empty its internal Trash as well:
    iPhoto ▹ Empty Trash
    Then reboot. That will temporarily free up some space.
    According to Apple documentation, you need at least 9 GB of available space on the startup volume (as shown in the Finder Info window) for normal operation. You also need enough space left over to allow for growth of your data. There is little or no performance advantage to having more available space than the minimum Apple recommends. Available storage space that you'll never use is wasted space.
    To locate large files, you can use Spotlight. That method may not find large folders that contain a lot of small files.
    You can more effectively use a tool such as OmniDiskSweeper (ODS) to explore your volume and find out what's taking up the space. You can also delete files with it, but don't do that unless you're sure that you know what you're deleting and that all data is safely backed up. That means you have multiple backups, not just one.
    Deleting files inside an iPhoto or Aperture library will corrupt the library. Any changes to a photo library must be made from within the application that created it. The same goes for Mail files.
    Proceed further only if the problem isn't solved by the above steps.
    ODS can't see the whole filesystem when you run it just by double-clicking; it only sees files that you have permission to read. To see everything, you have to run it as root.
    Back up all data now.
    Install ODS in the Applications folder as usual. Quit it if it's running.
    Triple-click the line of text below on this page to select it, then copy the selected text to the Clipboard (command-C):
    sudo /Applications/OmniDiskSweeper.app/Contents/MacOS/OmniDiskSweeper
    Launch the Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
    Paste into the Terminal window (command-V). You'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning not to screw up. If you see a message that your username "is not in the sudoers file," then you're not logged in as an administrator.
    The application window will open, eventually showing all files in all folders. It may take some minutes for ODS to list all the files.
    I don't recommend that you make a habit of doing this. Don't delete anything while running ODS as root. If something needs to be deleted, make sure you know what it is and how it got there, and then delete it by other, safer, means. When in doubt, leave it alone or ask for guidance.
    When you're done with ODS, quit it and also quit Terminal.

• Not able to optimize for YouTube

So I have been using Adobe Premiere for almost a year and a half, and the one thing that has bothered me is that no matter what I do I can't get full HD videos (I am filming with a 720p camera); it won't optimize for YouTube and will always look like this.
I have looked up several guides, and no matter what I do it does not change. Are there any guides I might be missing, or something I'm doing wrong? I would love to use the full potential that I can.

AME has quite a number of YouTube presets ... one of 'em should come pretty close to matching expectations.
    Neil

  • What hazards should I watch for with X.3.9?

    Just got my powerbook back with a new hard disk. Apple returned it with X.3.9 pre-installed. I'm hesitant to leave it with x.3.9. Before definitely deciding to zero-format and start over with X.3.8 I was thinking about test driving X.3.9 for a few days.
    What X.3.9 specific bugs/problems should I look for?
    They also updated my
    iTunes to 6.0.2
    Quicktime to 7.0.4
    Safari to 1.3.2 (and keyboard shortcuts already seem to be broken)
    Mail to 1.3.11
    I know there were reports that made me decide to stick at X.3.8 but Google search and a search of this forum don't jog my memory as to why. I'm not planning to update to Tiger.

Kappy, I know one must consider the universe of potential complainants and not just those who do the complaining. Nonetheless, I've always been one to let others work out the bugs. If I have no reason to update, I sit tight. I discovered some 3 or 4 years after OS 9 was released that Apple was finally saying it was compatible with 604e chips (originally they said it wasn't). OS 8.6 worked well, so I still use it in my PM7300.
Also, when I walked in the door of the Apple store yesterday, the tech I know was shocked another disk failed. I mistyped when I wrote that it's the 3rd disk in the powerbook; I meant to say it was the 3rd one that failed in the powerbook. I'm now on disk 4 since last May, when the original one failed. That's why Apple rushed the repair for me. I know these hard disk issues are bizarre.
    In contrast, my PowerMac's original disk lasted 6.5 years.
    I'm fanatical about maintenance. That's why I lost no data with this disk failure (or when my PM7300's died). The last OS X update horror story I personally experienced was when Software Updater installed old security patches over the X.3.8 combo; Apple Care confirmed that the installed patches shouldn't have been recommended by Software Updater. I did a clean (i.e. zero format install of the X.3.8 combo and ilife updates) and the OS was solid. It would still be in use if certain OEM drives weren't... less than reliable?
Before that, I was one of those people whose modem was toasted by X.3.4-X.3.6. Before that, I had a powerbook that always tested fine for hardware and couldn't go 5 minutes without crashing. One of the hard disks that died in my current powerbook lasted less than 24 hours: it upped and died when I zero-formatted it. "Stressing it" with the erasure saved me the tons of trouble of restoring my data before it would have died anyway.
    Personally, I think you are over-reacting
Once bitten, twice shy. Ever since the Apple Genius suggested it, I find doing the zero erase makes it far easier to determine that a problem is hardware related and not just an OS gone bad.
    I would give what you have a chance before you remove it.
I am trying out the X.3.9 that's installed. I might even keep iTunes 6 if I go back to X.3.8: I've been using iTunes 4.7. Safari, OTOH, is really annoying with its broken/"improved" shortcuts. I'm just trying to get a handle on what sort of problems to be on the lookout for.
    I appreciate your insights on how it's working for you. Tiger's out of my budget so I'll be sticking with Panther.
    Message was edited by: Marlinespike to fix html formatting
