CS3 Optimize Rendering For Memory vs Performance

Ok, the description of this option is pretty vague:
"By default, Adobe Premiere Pro renders video using the maximum number of available processors, up to 16. However, some sequences, such as those containing high-resolution source video or still images, require large amounts of memory for the simultaneous rendering of multiple frames. These can force Adobe Premiere Pro to abort rendering and to give a Low Memory Warning alert. In these cases, you can maximize the available memory by changing the rendering optimization preference from Performance to Memory. Change this preference back to Performance when rendering no longer requires memory optimization."
Now what I want to know is: which will give the fastest rendering times, Memory or Performance? I have 4 GB of memory and have never gotten the above error, but I would still like to know which I should use.

Nicholas,
You don't say how many processors you have, but the rule of thumb I've heard is that you should have 2GB of memory installed for each processor to achieve best performance. Actually, that recommendation comes from Nucleo Pro, an After Effects plug-in for advanced multi-processor rendering, so it may or may not apply in your case.

Similar Messages

  • Should I Optimize for "Memory" or "Performance" in Preferences?

    I've been rendering the timeline prior to export and finding that it renders in just over an hour for a 30-minute project. Then, almost miraculously, the MPEG2-DVD export only takes about two hours or so. This is with "Maximum Render Quality" selected in the Sequence Settings. When this option is selected, a pop-up warns that it is "highly recommended" to set "Optimize for Memory" in Preferences > General. I did this, and my timeline render time increased dramatically (I estimate it tripled, since I stopped it after it had run for half an hour and it said there were still about two hours to go). So I am preferring the "Performance" setting for rendering the timeline.
    But export may be a different matter. I'd rather not experiment with this if I don't have to, so I'm asking if anyone knows if the Preferences>General should be set to "Optimize for Memory" for export since that may be different than rendering the timeline for some important reasons.
    This question is really about time and quality in the final MPEG2-DVD. Are either affected, one way or the other, by the various options for settings in both the timeline render and the export encode? In the past, I've always used Max Render Quality with Optimize set for Performance and never had any issues. This latest discovery of reducing my export time (maybe in the range of 80%+) by rendering the timeline first is tempting to continue since the final MPEG2-DVD quality appears identical to exporting without first rendering the timeline. I did do a test today exporting without rendering the timeline first (after deleting all the preview files) and that export took 4-1/4 hours, a net loss of about an hour.
    Thanks, everyone.
    Update on statement in paragraph one. Since writing this, I exported after deleting the preview files and using Optimize for Memory in Preferences>General. Total export time was 4:15.

    When it comes to exporting, the type of encoding you use greatly affects how much time it takes to render the file. For example, I recently tried to export a 12-minute file. It takes me about 45 minutes in AVI format but over 8 hours in FLV format. (FLV is a poor example, but the point can nonetheless be made from it.)
    When it comes to optimizing for memory vs performance, it all depends on what you have available on your computer. If your memory is in the range of, say, 2-4 GB and you're using Windows 7 or Vista, it's probably in your best interest to optimize for memory. This allows a machine with less memory to render much more smoothly than it would if it were trying to render with the performance-based setting.
    Sometimes what happens with the performance setting is that the program tries to render the video faster than the memory your computer can allocate can keep up with. Try it out; it might help with some of the "skipping over frame" errors.
    Cheers,
    -MBTV

  • Dreamweaver CS3 instead CS4 for optimization on older hardware . . .

    Dreamweaver CS3 instead CS4 for optimization on older hardware . . .
    I do design for print and every year more and more customers have asked me if I would design their web site after I have designed their brochures and other collateral. For the last nine years I have never found the time to learn web production. I purchased Adobe PageMill and then upgraded to Adobe GoLive 5 but I have never found the time to learn these programs. But lately, requests for web design have persisted. I downloaded the trial version of Dreamweaver CS4 but it is too slow on my 1.25 GHz MDD Dual G4, running Mac OS 10.4.11.
    So I purchased the Dreamweaver CS3 upgrade, which was still available from Officemax.com. I also purchased "Dreamweaver CS3: The Missing Manual" by David McFarland. I'll be going through the tutorials on weekends trying to become functional with Dreamweaver.
    I'll be using it in WYSIWYG mode and not as an HTML coder.

    I would have been happy to make DWCS4 my first step up to Dreamweaver, but with it not officially supported on G4 Macs and all the menus at the top running way too slowly, I needed to stick with what is officially supported for the G4 by Adobe: Dreamweaver CS3.
    It is possible that future updates of DWCS4 will be more optimized and do OK on my G4. It is also possible that running my outdated version of Font Reserve 3.1.4 under Mac OS 10.4.11 is slowing down the menus of DWCS4 on my machine; even Acrobat 7 Pro gives me slow menus with my current setup.
    InDesign CS3 is fast enough on my system and IDCS4 runs just as fast as IDCS3 on my Dual MDD G4.
    I don't think I will be getting an Intel-Mac desktop and laptop until 18 months from now.

  • Modifying Memory Optimization parameter for BPEL process in SOA 11g

    Hello
    I have set the memory optimization parameter for my BPEL process in composite.xml (11g).
    This is what I have in composite.xml:
    <property name="bpel.config.inMemoryOptimization">false</property>
    How do we modify this parameter in the EM console at runtime? I changed this property to "true" using the System MBean browser, but it wasn't taking effect. I thought the SOA server had to be restarted (similar to what we used to do in 10g), but when I restart the SOA server, the parameter goes back to whatever value is in composite.xml, ignoring the change I made in the System MBean browser.
    Please share your thoughts.
    Thanks in advance.
    Raja

    > Deploying a newer version is not an option, as the endpoints could change (not sure if it would in 11g, but in 10g it does) and also, our service consumers will be pointing to the older version.
    As mentioned above, if clients are using a URL without the version, the call will be forwarded to the default version of the composite internally. No manual tweaking is required for this. Just make sure that while deploying the new version you mark it as the default.
    > Besides, we report on service metrics and having multiple versions just complicates things.
    Not at all. If you are not using the versioning feature, you are really underutilizing Oracle SOA 11g. Remember that metrics can be collected for a single composite with the same effort, irrespective of the number of composite versions deployed. Only a few product tables refer to the version when storing the composite name; the rest use only the composite name without the version. I do not know how you are collecting service metrics, but we use DB jobs for the same and it works perfectly with any number of composites having multiple versions deployed.
    > The idea is to do some debugging and collect the audit trail in case there is a production issue by disabling the inMemoryOptimization parameter. This is a live production environment and deploying whenever we want is not even an option for us, unfortunately.
    Why not debug by increasing the log level? Diagnostic logs are the best option for debugging an issue, even in production. To get an audit trail you may reproduce the issue in a lower environment. I think no organization will allow re-deployments just for debugging an issue in production unless it is too critical an issue to handle.
    > Is this not supported in 11g? If it isn't, it does seem like a bug to me.
    You may always go ahead and raise a case with support.
    Regards,
    Anuj
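
    For reference, a minimal sketch of where this property lives in composite.xml (the component and file names below are placeholders, not from the original post). As discussed above, the value in this file is the one that survives a server restart, and it is commonly set together with bpel.config.auditLevel:

    <!-- Sketch only: BPEL component entry in composite.xml; names are placeholders -->
    <component name="MyBPELProcess">
      <implementation.bpel src="MyBPELProcess.bpel"/>
      <!-- the value in this file is what survives a restart -->
      <property name="bpel.config.inMemoryOptimization">true</property>
      <!-- often paired with it, since audit persistence is what inMemoryOptimization avoids -->
      <property name="bpel.config.auditLevel">Off</property>
    </component>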

  • Printing memory and performance optimization

    Hello,
    I am using JVM 1.3 for a big Java Application.
    Print Preview consumes 1.5 MB of the JVM's memory and performance is slow.
    Please suggest ways to reduce memory usage; any performance improvement will be appreciated.
    /* print method in ScrollablePanel extends JPanel */
         public int print(Graphics g, PageFormat pf, int pi) throws PrinterException {
              Graphics2D g2 = (Graphics2D) g;
              double pageWidth = pf.getImageableWidth();
              if (pi >= pagecount) {
                   return Printable.NO_SUCH_PAGE;
              }
              g2.translate(pf.getImageableX(), pf.getImageableY());
              // print height manipulation: clip to the slice of the panel belonging to this page
              g2.setClip(0, (int) startHeight[pi], (int) pageWidth, (int) (endHeight[pi] - startHeight[pi]));
              g2.scale(scaleX, scaleX);
              this.print(g2);     // paint the panel into the printer/preview graphics
              g2.dispose();
              System.gc();        // explicit GC hint from the original code; rarely helps in practice
              return PAGE_EXISTS;
         }
    /* print preview */
    private void pagePreview() {
         BufferedImage img = new BufferedImage(m_wPage, m_hPage, BufferedImage.TYPE_INT_ARGB);
         Graphics g = img.getGraphics();
         g.setColor(Color.white);
         g.fillRect(0, 0, m_wPage, m_hPage);
         try {
              target.print(g, pageFormat, pageIndex);   // render the current page into the image
         } catch (PrinterException e) {
              e.printStackTrace();
         }
         pp = new PagePreview(w, h, img); // pp is a JPanel that paints img
         g.dispose();
         img.flush();
         m_preview = new PreviewContainer(); // m_preview is a JPanel
         m_preview.add(pp);
         ps = new JScrollPane(m_preview);
         getContentPane().add(ps, BorderLayout.CENTER);
    }
    Best Regards,
    Krish

    Good day,
    As I tried it, there are two ways of doing the print preview.
    To handle this problem, add only one page at a time.
    To browse through the pages, use the Prev Page and Next Page buttons in the toolbar.
    1) BufferedImage - occupies memory.
    class PagePreview extends JPanel {
         public void paint(Graphics g) {
              g.setColor(getBackground());
              g.fillRect(0, 0, getWidth(), getHeight());
              g.drawImage(m_img, 0, 0, this);   // m_img holds the pre-rendered page
              paintBorder(g);
         }
    }
    This gives better performance, but consumes memory.
    2) getPageGraphics in the preview panel. This occupies less memory, but re-renders the graphics every time paint(Graphics g) is called.
    class PagePreview extends JPanel {
         public void paint(Graphics g) {
              g.setColor(Color.white);
              RepaintManager currentManager = RepaintManager.currentManager(this);
              currentManager.setDoubleBufferingEnabled(false);
              Graphics2D g2 = scrollPanel.getPageGraphics();   // re-render the page on demand (sketch as posted)
              currentManager.setDoubleBufferingEnabled(true);
              g2.dispose();
         }
    }
    This addresses the memory problem, but performance is worse.
    Is there any additional info from you?
    Good Luck,
    Kind Regards,
    Krish
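
    Building on option 1 above, here is a minimal, self-contained sketch (class and field names are illustrative, not from the original code) that keeps only one page image in memory at a time and re-renders a page when the user moves with Prev/Next. This is usually a reasonable middle ground between the two approaches:

    import java.awt.*;
    import java.awt.image.BufferedImage;
    import java.awt.print.*;
    import javax.swing.*;

    /** Shows one rendered page at a time; only a single page image is kept in memory. */
    public class SinglePagePreview extends JPanel {
        private final Printable target;      // the report/panel being previewed
        private final PageFormat pageFormat;
        private BufferedImage current;       // image of the page currently shown
        private int pageIndex;

        public SinglePagePreview(Printable target, PageFormat pf) {
            this.target = target;
            this.pageFormat = pf;
            showPage(0);
        }

        /** Renders the requested page into a single new image and repaints. */
        public void showPage(int index) {
            int w = (int) pageFormat.getWidth();
            int h = (int) pageFormat.getHeight();
            BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g2 = img.createGraphics();
            g2.setColor(Color.WHITE);
            g2.fillRect(0, 0, w, h);
            try {
                if (target.print(g2, pageFormat, index) == Printable.PAGE_EXISTS) {
                    current = img;        // the previous page image becomes garbage-collectable
                    pageIndex = index;
                }
            } catch (PrinterException e) {
                e.printStackTrace();
            } finally {
                g2.dispose();
            }
            setPreferredSize(new Dimension(w, h));
            revalidate();
            repaint();
        }

        public void nextPage() { showPage(pageIndex + 1); }

        public void prevPage() { if (pageIndex > 0) showPage(pageIndex - 1); }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            if (current != null) {
                g.drawImage(current, 0, 0, this);
            }
        }
    }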

  • Optimize Mac for Gaming with Bootcamp?

    Hi, I have a Mid 2012 MacBook Pro with Windows 7 installed using Boot Camp. Because I'm a student, my parents aren't exactly thrilled with the idea of getting me a gaming PC, but since I'm not a hardcore gamer I think I can make do with what I have. How would I be able to optimize my gaming experience with my Mac? In other words, whether it is with software or by upgrading some hardware, how can I make my Mac usable for gaming? And trust me, I'm not hoping for a 60 FPS, highest-settings kind of thing; something of decent quality is all I want. I feel that my Mac's current specs and hardware are almost where I want them, but some optimization to increase efficiency and performance would be really nice. Thanks in advance!

    You are stuck with whatever graphics it has, which is usually the main limiting resource.
    Did you buy it with just the base RAM? Which processor? You can't change the CPU.
    http://www.everymac.com/systems/apple/macbook_pro/index-macbookpro.html
    RAM Upgrade Kits:
    http://eshop.macsales.com/shop/memory/Apple_MacBook_MacBook_Pro/Upgrade/DDR3_1600MHz_SDRAM
    Some SSD tests: http://www.barefeats.com/hard193.html
    https://www.maxupgrades.com/istore/index.cfm?fuseaction=product.display&product_ID=437&ParentCat=367
    Windows can't be installed on an external drive, but it can be cloned and booted off a Thunderbolt interface.
    Boot Camp
    A PC with an i7-4790, 16 GB of RAM, an SSD (plus case, PSU, copy of Windows) and an X89 motherboard would of course give you freedom and no need to dual boot, plus you get to choose the GPU you need. Windows on a Mac often runs hot, hotter than the Mac does already. And later you can upgrade just the parts you need to.

  • Explain Plan and other methods (tools) to improve performance

    Hi
    How can I use Explain Plan and other methods to improve performance?
    Where can I find a tutorial about it?
    Thank you in advance.

    Hi
    > How can I use Explain Plan and other methods to improve performance?
    Internally there are potentially several hundred 'procedures' that can be assembled in different ways to access data. For example, when getting one row from a table, you could use an index or a full table scan.
    Explain Plan shows the [proposed] access path, or complete list of the procedures, in the order called, to do what the SQL statement is requesting.
    The objective with Explain Plan is to review the proposed access path and determine whether alternates, through the use of hints or statistics or indexes or materialized views, might be 'better'.
    You often use Wait analysis, through StatsPack, AWR/ADDM, TKProf, Trace, etc. to determine which SQL statement is likely causing a performance issue.
    > Where can I find a tutorial about it?
    Ah ... the $64K question. If we only knew ...
    There are so many variables involved, that most tutorials are nearly useless. The common approach therefore is to read - a lot. And build up your own 'interpretation' of the reading.
    Personal suggestion is to read (in order)
    1) Oracle's Database Concepts manual (described some of 'how' this is happening)
    2) Oracle's Performance Tuning manual (describes more of 'how' as related to performance and also describes some of the approaches)
    3) Tom Kyte's latest book (has a lot of demos and 'proofs' about how specific things work)
    4) Don Burleson's Statspack book (shows how to set up and do some basic interpretation)
    5) Jonathan's book (how the optimizer works - tough reading, though)
    6) any book by the Oak Table (http://oaktable.net)
    Beyond that is any book that contains the words 'Oracle' and 'Performance' in the title or description. BUT ... when reading, use truck-loads, not just grains, of salt.
    Verify everything. I have seen an incredible number of mistakes ... I make 'em myself all the time, so I tend to recognize them when I see them. Believe nothing unless you have proven it for yourself. Even then, realize there are exceptions and boundary conditions and bugs and patches and statistics and CPU and memory and disk speed issues that will change what you have proven.
    It's not hopeless. But it is a lot of work and effort. And well rewarded, if you decide to get serious.

  • Calculating the memory and performance of a oracle query

    Hi,
    I am now developing an application in Java with Oracle as the back-end. My application requires a lot of queries to be executed, and the system is getting slow because of these queries.
    So I planned to develop a stand-alone application in Java that shows statistics like memory and performance. For example, if I enter an SQL query in a text box, my stand-alone application should display the processing time it takes to fetch the values and the memory used for that query.
    Can anybody give ideas, suggestions, etc.?
    Thanks in Advance
    Regards,
    Rajkumar

    This is now an Oracle question, not a JDBC question. :)
    The following are samples for explain plan / autotrace / SQL*Trace.
    (You really need to read stuff like the Oracle SQL tuning books...)
    SQL> create table a as select object_id, object_name from all_objects
    2 where rownum <= 100;
    Table created.
    SQL> create index a_idx on a(object_id);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'A');
    SQL> explain plan for select * from a where object_id = 1;
    Explained.
    SQL> select * from table(dbms_xplan.display());
    PLAN_TABLE_OUTPUT
    Plan hash value: 3632291705
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    PLAN_TABLE_OUTPUT
    | 0 | SELECT STATEMENT | | 1 | 11 | 2 (0)| 00:00:01 |
    | 1 | TABLE ACCESS BY INDEX ROWID| A | 1 | 11 | 2 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | A_IDX | 1 | | 1 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    2 - access("OBJECT_ID"=1)
    SQL> set autot on
    SQL> select * from a where object_id = 1;
    no rows selected
    Execution Plan
    Plan hash value: 3632291705
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 11 | 2 (0)| 00:00:01 |
    | 1 | TABLE ACCESS BY INDEX ROWID| A | 1 | 11 | 2 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | A_IDX | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("OBJECT_ID"=1)
    Statistics
    1 recursive calls
    0 db block gets
    1 consistent gets
    0 physical reads
    0 redo size
    395 bytes sent via SQL*Net to client
    481 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    SQL> exec dbms_monitor.session_trace_enable(null,null,true,true);
    -- SQL> alter session set events '10046 trace name context forever, level 12';
    -- SQL> alter session set sql_trace = true;
    PL/SQL procedure successfully completed.
    SQL> select * from a where object_id = 1;
    no rows selected
    SQL> exec dbms_monitor.session_trace_disable(null, null);
    -- SQL> alter session set events '10046 trace name context off';
    -- SQL> alter session set sql_trace = false;
    PL/SQL procedure successfully completed.
    SQL> show parameter user_dump_dest
    /home/oracle/admin/WASDB/udump
    SQL>host
    JOSS:oracle:/home/oracle:!> cd /home/oracle/admin/WASDB/udump
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> ls -lrt
    -rw-r----- 1 oracle dba 2481 Oct 11 16:38 wasdb_ora_21745.trc
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> tkprof wasdb_ora_21745.trc trc.out
    TKPROF: Release 10.2.0.3.0 - Production on Thu Oct 11 16:40:44 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> vi trc.out
    select *
    from
    a where object_id = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 1 0 0
    total 3 0.00 0.00 0 1 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 55
    Rows Row Source Operation
    0 TABLE ACCESS BY INDEX ROWID A (cr=1 pr=0 pw=0 time=45 us)
    0 INDEX RANGE SCAN A_IDX (cr=1 pr=0 pw=0 time=39 us)(object id 65441)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 25.01 25.01
    Hope this helps
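
    Coming back to the Java side of the original question, here is a minimal JDBC sketch for timing a query from the client (the connection URL, credentials and query are placeholders; it assumes the Oracle JDBC driver is on the classpath). Note that memory usage is better read from the server-side statistics shown above (autotrace / tkprof) than from the Java client:

    import java.sql.*;

    /** Rough client-side timing of one query; server-side stats stay the more accurate source. */
    public class QueryTimer {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";   // placeholder connection data
            String sql = "select * from a where object_id = ?";    // query from the example above

            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, 1);
                long start = System.nanoTime();
                int rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows++;                 // force the fetch so it is included in the timing
                    }
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(rows + " row(s) in " + elapsedMs + " ms");
            }
        }
    }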

  • Hyper-V Resource Pools for Memory and CPU

    Hi all,
    I'm trying to understand the concepts and details of resource pools in Hyper-V in Windows Server 2012. It seems as if there is almost no documentation on all that. Perhaps somebody can support me here, maybe I've not seen some docs yet.
    So far, I learned that resource pools in their current implementation serve mainly for metering purposes. You can create pools per tenant and then group VM resources into those pools to facilitate resource metering per tenant. That is, you enable metering
    once per pool and get all the data necessary to bill that one customer for all their resources (without metering individual VMs). Is that correct?
    Furthermore, it seems to me that an ethernet pool goes one step further by providing an abstraction level for virtual switches. As far as I've understood you can add multiple vSwitches to a pool and then connect a VM to the pool. Hyper-V then decides which
    actual switch to use. This may be handy in a multi-host environment if vSwitches on different hosts use different names although they connect to the same network. Is that correct?
    So - talking about actually managing that stuff I've learned how to create a pool and how to add VHD locations and virtual switches to a pool. Enabling resource metering for a pool then collects usage data from all the resources inside that pool.
    But now: I can create a pool for memory and a pool for CPU. But I cannot add resources to those. Neither can I add a complete VM to a pool. Now I'm launching a VM that belongs to a customer whose resources I'm metering. How will Hyper-V know that it's
    supposed to collect data on CPU and memory usage for that VM?
    Am I missing something here? Or is pool-based metering only good for ethernet and VHD resources, and CPU and memory still need to be metered per VM?
    Thanks for clarification,
    Nils
    Nils Kaczenski
    MVP Directory Services
    Hannover, Germany

    Thank you for the links. I already knew those, and unfortunately they do not match my question. Two of them are about Windows Server 2008/R2, and one only lists a WMI interface. What I'm after is a new feature in Windows Server 2012, and I need conceptual information.
    Thanks for the research anyway. I appreciate that a lot!
    In the meantime I've gotten quite far in my own research. See my entry above of January 7th. Some additions:
    In Windows Server 2012, Hyper-V resource pools are mainly for metering purposes. You cannot compare them to resource pools in VMware.
    A resource pool in Hyper-V (2012) facilitates resource metering and billing for VM usage especially in hosting scenarios. You can either measure resource usage for single VMs, or you can group existing resources (such as CPU power, RAM, virtual hard disk
    storage, Ethernet traffic) into pools. Those pools will mostly be assigned to one customer each. That way you can bill the customer for their resource usage in a given time period by just querying the customer's pool.
    Metering only collects aggregated data with one value per resource (i.e. overall CPU usage, maximum VHD storage, summed Ethernet traffic and so on). You can control the time period by explicitly resetting the counter at any given time (a day, a week, a
    month or what you like).
    There is no detailed data. The aggregate values serve as a basis for billing, not as monitoring data. If you need detailed monitoring data use Performance Monitor.
    There is currently only one type of resource pool that adds an abstraction layer to a virtualization farm, and that is the Ethernet type. You can use that type for metering, but you can also use it to group a number of virtual switches (that connect to
    the same network segment) and then a VM connected to that pool will automatically use an appropriate virtual switch from the pool. You need no longer worry about virtual switch names across multiple hosts as long as all equivalent virtual switches are
    added to the pool.
    While you can manage two types of pool resources in the GUI (VHD pools and Ethernet pools) you should only manage resource pools via PowerShell. Only there will you be able to control what happens. And only PowerShell provides a means to start, stop, and
    reset metering and query metering data.
    The process to use resource pools in Hyper-V (2012) in short:
    First create a new pool via PowerShell (New-VMResourcePool). (In case of a VHD pool you must specify the VHD storage paths to add to the pool in the moment you create the pool.)
    In case of an Ethernet pool add existing virtual switches to the pool (Add-VMSwitch).
    Reconfigure existing VMs that you want to measure so that they use resources from the pool. The PowerShell
    Set-VM* commands accept a parameter -ResourcePoolName to do that. Example:
    Set-VMMemory -VMName APP-02 -ResourcePoolName MyPool1
    Start measuring with Enable-VMResourceMetering.
    Query collected data as often as you need with Measure-VMResourcePool.
    Note that you should specify the pool resource type in the command to get reliable data (see my post above, Jan 7th).
    When a metering period (such as a week or a month) has passed, reset the counter to zero with
    Reset-VMResourceMetering.
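
    Putting those steps together for a single memory pool, a minimal PowerShell sketch (the pool and VM names are just examples, reusing the ones from the Set-VMMemory line above):

    # 1. create the pool for one tenant
    New-VMResourcePool -Name MyPool1 -ResourcePoolType Memory
    # 2. point an existing VM's memory at that pool
    Set-VMMemory -VMName APP-02 -ResourcePoolName MyPool1
    # 3. start metering the pool
    Enable-VMResourceMetering -ResourcePoolName MyPool1 -ResourcePoolType Memory
    # 4. query collected data (specify the pool type for reliable results)
    Measure-VMResourcePool -Name MyPool1 -ResourcePoolType Memory
    # 5. reset the counters when the billing period ends
    Reset-VMResourceMetering -ResourcePoolName MyPool1 -ResourcePoolType Memory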
    Hope that helps. I consider this the answer to my own question. ;)
    Here's some links I collected:
    http://itproctology.blogspot.ca/2012/12/hyper-v-resource-pool-introduction.html
    http://www.ms4u.info/2012/12/configure-ethernet-resource-pool-in.html
    http://blogs.technet.com/b/virtualization/archive/2012/08/16/introduction-to-resource-metering.aspx
    http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/thread/1ce4e2b2-8fdd-4f16-8ab6-e1e1da6d07e3
    Best wishes, Nils
    Nils Kaczenski
    MVP Directory Services
    Hannover, Germany

  • ORACLE OBJECTS FOR OLE(OO4O) PERFORMANCE TUNING

    Product: ORACLE SERVER
    Date written: 1997-10-10
    Whereas ODBC queries data block by block, OO4O (OLE) fetches the entire result set at once and places it in temporary storage space.
    So, for tuning, you need to modify the parameters in c:/windows/oraole.ini on Windows 3.1, or under HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\OO4O on WIN95.
    If the file above does not exist, all variables are set to their defaults, so read the installed Help carefully before applying changes.
    FetchLimit is the parameter with the biggest impact; in general, the larger this value, the faster the performance. The related material follows.
    Tuning and Customization
    A number of working parameters of Oracle Objects for OLE can be
    customized. Access to these parameters is provided through the Oracle
    initialization file, by default named ORAOLE.INI.
    Each entry currently available in that file is described below. The location
    of the ORAOLE.INI file is specified by the ORAOLE environment variable.
    Note that this variable should specify a full pathname to the Oracle
    initialization file, which is not necessarily named ORAOLE.INI. If this
    environment variable is not set, or does not specify a valid file entry, then
    Oracle Objects for OLE looks for a file named ORAOLE.INI in the Windows
    directory. If this file does not exist, all of the default values
    listed will apply.
    You can customize the following sections of the ORAOLE.INI file:
    [Cache Parameters]
    A cache consisting of temporary data files is created to manage amounts
    of data too large to be maintained exclusively in memory. This cache
    is needed primarily for dynaset objects, where, for example, a single
    LONG RAW column can contain more data than exists in physical
    (and virtual) memory.
    The default values have been chosen for simple test cases, running on a machine
    with limited Windows resources. Tuning with respect to your machine and
    applications is recommended.
    Note that the values specified below are for a single cache, and that a separate
    cache is allocated for each object that requires one. For example, if
    your application contains three dynaset objects, three independent data
    caches are constructed, each using resources as described below.
    SliceSize = 256 (default)
    This entry specifies the minimum number of bytes used to store a piece
    of data in the cache. Items smaller than this value are allocated the
    full SliceSize bytes for storage; items larger than this value are
    allocated an integral multiple of this space value. An example of an
    item to be stored is a field value of a dynaset.
    PerBlock = 16 (default)
    This entry specifies the number of Slices (described in the preceding
    entry) that are stored in a single block. A block is the minimum unit
    of memory or disk allocation used within the cache. Blocks are read
    from and written to the disk cache temporary file in their entirety. Assuming a SliceSize of 256 and a PerBlock value of 16, then the block
    size is 256 * 16 = 4096 bytes.
    CacheBlocks = 20 (default)
    This entry specifies the maximum number of blocks held in memory at any
    one time. As data is added to the cache, the number of used blocks
    grows until the value of CacheBlocks is reached. Previous blocks are
    swapped from memory to the cache temporary disk file to make room for
    more blocks. The blocks are swapped based upon recent usage. The total
    amount of memory used by the cache is calculated as the product of
    (SliceSize * PerBlock * CacheBlocks).
    Recommended Values: You may need to experiment to find optimal cache parameter
    values for your applications and machine environment. Here are some guidelines
    to keep in mind when selecting different values:
    The larger the (SliceSize * PerBlock) value, the more disk I/O is
    required for swapping individual blocks. The smaller the (SliceSize * PerBlock) value, the
    more likely it is that blocks will need to be swapped to or from disk.
    The larger the CacheBlocks value, the more memory is required, but the
    less likely it is that Swapping will be required.
    A reasonable experiment for determining optimal performance might
    proceed as follows:
    Keep the SliceSize >= 128 and vary PerBlock to give a range of block
    sizes from 1K through 8K.
    Vary the CacheBlocks value based upon available memory. Set it high
    enough to avoid disk I/O, but not so high that Windows begins swapping
    memory to disk.
    Gradually decrease the CacheBlocks value until performance degrades or
    you are satisfied with the memory usage. If performance drops off,
    increase the CacheBlocks value once again as needed to restore
    performance.
    [Fetch Parameters]
    FetchLimit = 20 (default)
    This entry specifies the number of elements of the array into which data
    is fetched from Oracle. If you change this value, all fetched values
    are immediately placed into the cache, and all data is retrieved from
    the cache. Therefore, you should create cache parameters such that all
    of the data in the fetch arrays can fit into cache memory. Otherwise,
    inefficiencies may result.
    Increasing the FetchLimit value reduces the number of fetches (calls
    to the database) calls and possibly the amount of network traffic.
    However, with each fetch, more rows must be processed before user
    operations can be performed. Increasing the FetchLimit increases
    memory requirements as well.
    FetchSize = 4096 (default)
    This entry specifies the size, in bytes, of the buffer (string) used for
    retrieved data. This buffer is used whenever a long or long raw column
    is initially retrieved.
    [General]
    TempFileDirectory = [Path]
    This entry provides one method for specifying disk drive and directory
    location for the temporary cache files. The files are created in the
    first legal directory path given by:
    1.The drive and directory specified by the TMP environment variable
    (this method takes precedence over all others);
    2.The drive and directory specified by this entry (TempFileDirectory)
    in the [general] section of the ORAOLE.INI file;
    3.The drive and directory specified by the TEMP environment variable; or
    4.The current working drive and directory.
    HelpFile = [Path and File Name]
    This entry specifies the full path (drive/path/filename) of the Oracle Objects
    for OLE help file as needed by the Oracle Data Control. If this entry cannot
    be located, the file ORACLEO.HLP is assumed to be in the directory where
    ORADC.VBX is located
    (normally \WINDOWS\SYSTEM).


  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We are running an APO DP process chain with parallel processing in our company, and we are experiencing some issues regarding the run time of the process chain. We need your help on the points below:
    - What are the ways we can optimize process chain run time.
    - Special points we need to take care of in case of parallel processing profiles used in process chain.
    - Any specific sequence to be followed for different processes in process chain - if there is some best practice followed.
    - Any notes suggesting ways to improve system performance for APO version 7 with different enhancement packs 1 and 2.
    Any help will be really appreciated.
    Regards

    Hi Neelesh,
    There are many ways to optimize the performance of process chains (background jobs) in an APO system.
    First, I would recommend that you identify the pain areas (steps) that are taking the longest runtimes. Each step then has a different approach to decrease its runtime.
    You may end up with steps like InfoPackage executions, DTPs, DP mass processing jobs etc. that are running with long runtimes. Target each one of them separately and find ways to optimize it. At the same time, the approach you follow should also be feasible from a Basis perspective (system load and utilization).
    Coming to parallel processing, you can use it for different kinds of jobs, such as loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY etc., and explore each of them further.
    Check the below link for more info
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj

  • Memory and performance  when copying a sorted table to a standard table

    Hello,
    As you all probably know, it's not possible to use a sorted table as a tables parameter of a function module, but sometimes you want to use a sorted table in your function module for performance reasons, and at the end of the function module, you just copy it to a standard table to return to the calling program.
    The problem with this is that, at that moment, the contents of the table are in memory twice, which could result in the well-known STORAGE_PARAMETERS_WRONG_SET runtime exception.
    I've been looking for ways to do this without using an excessive amount of memory while still being performant. I tried several methods; all have their advantages and disadvantages, so I was hoping someone here could help me come up with the best way to do this. Both memory and performance are an issue.
    Requirements :
    - Memory usage must be as low as possible
    - Performance must be as high as possible
    - Method must work on all SAP versions from 4.6c and up
    So far I have tried 3 methods.
    I included a test report to this message, the output of this on my dev system is :
    Test report for memory usage of copying tables    
    table1[] = table2[]                                        
    Memory :    192,751  Kb                                    
    Runtime:    436,842            
    Loop using workarea (with delete from original table)      
    Memory :    196,797  Kb                                    
    Runtime:  1,312,839        
    Loop using field symbol (with delete from original table)  
    Memory :    196,766  Kb                                    
    Runtime:  1,295,009                                                                               
    The code of the program :
    I had some problems pasting the code here, so it can be found at http://pastebin.com/f5e2848b5
    Thanks in advance for the help.
    Edited by: Dries Horions on Jun 19, 2009 1:23 PM

    I've had another idea:
    Create a RFC function like this (replace SOLI_TAB with your table types):
    FUNCTION Z_COPY_TABLE .
    *"*"Local Interface:
    *"  IMPORTING
    *"     VALUE(IT_IN) TYPE  SOLI_TAB
    *"  EXPORTING
    *"     VALUE(ET_OUT) TYPE  SOLI_TAB
    et_out[] = it_in[].
    ENDFUNCTION.
    and then try something like this in your program:
    DATA: gd_copy_done TYPE c LENGTH 1.
    DATA: gt_one TYPE soli_tab.
    DATA: gt_two TYPE soli_tab.
    PERFORM move_tables.
    FORM move_tables.
      CLEAR gd_copy_done.
      CALL FUNCTION 'Z_COPY_TABLE'
        STARTING NEW TASK 'ztest'
        PERFORMING copy_done ON END OF TASK
        EXPORTING
          it_in = gt_one[].
      CLEAR gt_one[].
      WAIT UNTIL gd_copy_done IS NOT INITIAL.
    ENDFORM.
    FORM copy_done USING ld_task TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_COPY_TABLE'
       IMPORTING
         et_out        = gt_two[].
      gd_copy_done = 'X'.
    ENDFORM.
    Maybe this is a little bit faster than the Memory-Export?
    Edited by: Carsten Grafflage on Jul 20, 2009 11:06 AM

  • Unaccounted-for memory is too big and leads to a native memory issue

    On our server, after running for one month, unaccounted-for memory increases to 500 MB or higher, and native memory grows large, which leads to OOM. Below is one sample:
    j2eeapp:jhf1wl101:root > jrcmd 27398 print_memusage
    27398:
    [JRockit] memtrace is collecting data...
    [JRockit] *** 19th memory utilization report
    (all numbers are in kbytes)
    Total mapped ;;;;;;;5100644
    ; Total in-use ;;;;;;4038952
    ;; executable ;;;;; 75968
    ;;; java code ;;;; 23680; 31.2%
    ;;;; used ;;; 21833; 92.2%
    ;; shared modules (exec+ro+rw) ;;;;; 4858
    ;; guards ;;;;; 5928
    ;; readonly ;;;;; 0
    ;; rw-memory ;;;;;3986664
    ;;; Java-heap ;;;;3145728; 78.9%
    ;;; Stacks ;;;; 126050; 3.2%
    ;;; Native-memory ;;;; 714885; 17.9%
    ;;;; java-heap-overhead ;;; 99596
    ;;;; codegen memory ;;; 1088
    ;;;; classes ;;; 166656; 23.3%
    ;;;;; method bytecode ;; 13743
    ;;;;; method structs ;; 21987 (#281446)
    ;;;;; constantpool ;; 72105
    ;;;;; classblock ;; 7711
    ;;;;; class ;; 11900 (#21166)
    ;;;;; other classdata ;; 22950
    ;;;;; overhead ;; 114
    ;;;; threads ;;; 960; 0.1%
    ;;;; malloc:ed memory ;;; 81024; 11.3%
    ;;;;; codeinfo ;; 4815
    ;;;;; codeinfotrees ;; 2614
    ;;;;; exceptiontables ;; 1790
    ;;;;; metainfo/livemaptable ;; 24519
    ;;;;; codeblock structs ;; 20
    ;;;;; constants ;; 33
    ;;;;; livemap global tables ;; 8684
    ;;;;; callprof cache ;; 0
    ;;;;; paraminfo ;; 255 (#2929)
    ;;;;; strings ;; 24040 (#345745)
    ;;;;; strings(jstring) ;; 0
    ;;;;; typegraph ;; 10132
    ;;;;; interface implementor list ;; 260
    ;;;;; thread contexts ;; 598
    ;;;;; jar/zip memory ;; 12204
    ;;;;; native handle memory ;; 486
    ;;;; unaccounted for memory ;;; 366520; 51.3%;4.52
    ---------------------!!!

    >
    "No one is perfect - not even Mac OS X. If a program manages to lock up central processes, a restart will be needed."
    That first part of what you said there is indeed true.
    But a modern OS is designed to keep processes separated. An application crash
    SHOULD NOT
    require a complete shut-down and reboot of your system. Yes, the Log-Out/Log back in process might take awhile if you have a particularly bad application crunch, because the OS has detected that something went screwy and is checking to see that the user account is healthy enough to run, and may be fixing some things in the process.
    I've run all kinds of not-quite-polished software over the years since my adoption of OS X, and no matter how badly some of it performed nothing ever required me to reboot my system to restore operating health. Now, that's not to say I don't run system maintenance utilities which, after performing their routines, suggest or require a shutdown restart. I usually only do this if I've decided to delete the offending application from my system. (Sidebar: How diligent are you about maintaining the general health of your system through the regular practice of running preventative maintenance routines? Ramon may be along shortly to lay the boiler-plate on you about this :))
    Does Photoshop dig its hooks so deeply into the root level of the OS that it could cause the kind of problems you've had? I don't know for sure, but I'd guess that it's possible. And I'd suggest that, if wonky Photoshop behavior can be so bad that it
    requires
    the user to restart in order to regain operational health, then something is VERY wrong. And I'd go even further out on a limb to guess that this is a fault in Adobe's Photoshop coding, and not in Apple's OS coding.

  • I have a few wedding projects (1-2 hours) I am trying to export at full HD quality, then burn in iDVD. After rendering for 8 hrs I receive an error code that states "file is too big". Please help? Compression tips without losing quality?

    I have a few wedding projects (1-2 hours) that I am trying to export at full HD quality, then burn in iDVD. After rendering for 8 hrs I receive an error code that states "file is too big". Please help? Any compression tips without losing quality, or any other exporting alternatives?

    Hey Z,
    Thank you for the tip on exporting by media browser (large) from imovie. But of course, if it's not one thing it's another. Now that I figured how to export a large file from imovie, I have an idvd issue. I followed the instructions for burning from idvd and changing the encoding to professional quality and the burn speed to x4, but I am receiving an error that states the following,
    Your project exceeds the maximum content duration. To burn your DVD, change the encoder setting in the Project Info window.
    Project:
    - total project duration: 79:04 minutes
    - total project capacity: 4.327 GB (max. available: 4.172 GB)
    Menus:
    - number of menus in project: 1 menus
    - total menu duration: 0:39 minutes
    - total menu capacity: 37.370 MB
    Movies:
    - total movies duration: 78:25 minutes
    - total movies capacity: 4.291 GB
    I have searched in the idvd forum for similar issues and I am stumped at this point. I have tried deleting the encoding assets and re launching idvd with the changed preferences, and still the same error. I know you mentioned something about free hard drive space available, and I have very little left. 4GB to be exact due to massive hours of non-edited footage. I am not sure if this is why, but I do not recall ever needing free space to burn memory onto a separate dvd. I would be more than happy if I am wrong, and it would be a quick fix. Otherwise, the technical nightmare continues. It's all a learning process and your expertise is greatly appreciated! Thanks in advance.

  • Audition CS6 - "An error was encountered: Not enough memory to perform operation"

    When I try to export my entire session using the "Multitrack Mixdown" I get this error message: "An error was encountered: Not enough memory to perform operation"
    To give context:
    • Audition CS6
    • All other applications are closed
    • 26 audio tracks
    • Approx. 22 seconds long in total
    • Average of 2 effects per track
    • I've reinstalled the program twice now
    My computer:
    • iMac (2010)
    • 3.4GHz Intel Core i7
    • 16 GB 1333 MHz DDr3
    • Flash and HDD storage (which are not full)
    • OSX 10.8.2
    If anyone can offer any advice, that would be great!
    I have to get this done by this Monday (May 6, 2013).
    So I really hope Audition won't fail me, since it has always been successful until now (which is why I'm stumped).

    'Not enough memory' hasn't got anything to do with your HD, but with the amount of RAM available to Audition at the time it needs it. All DAWs, to work efficiently and successfully, need most of the machine to be free from running other apps. The reason for this is simple: if another app tries to hog a machine that's running a process in real time, it inevitably hiccups the machine. And if this other app has reserved RAM addresses for its own use, they aren't available until it's stopped.
    Typical nasties involve anti-virus software, and the machine doing things like repeatedly polling a wireless connection, but there are plenty more, and this applies equally to Macs and PCs.
    So the 'update' you need to rectify most of these problems is one you have to carry out yourself. Audio editing is a bit of an oddball from this POV - you invariably need a very lightly loaded machine for it to be reliably successful. The machine I use as a DAW doesn't do anything else at all; all unnecessary services are stopped whilst it's in this mode, and the physical spec of it is similar to yours - except it's a PC running W7-64Pro.
