Optimizing Performance

I am having a problem with my program: I am not getting the desired frame rate because of all the code being executed per tick. So I have some questions about Director and Lingo as to which way actually executes faster.
1. Multiple ExitFrame calls vs a single ExitFrame call.
I have a lot of sprites in my app. Almost all of them have an ExitFrame handler.
Question: is it faster to have each sprite handle its own ExitFrame routine and do code specific to that sprite, or is it faster to have one generic ExitFrame that loops through and executes code for each sprite?
2. Puppeted sprites vs Non-Puppeted sprites.
I have a lot of sprites in my program. To make life a LOT easier, I simply allocated a good chunk of sprite channels to the sole use of "dynamically created sprites". My program can have hundreds of puppeted sprites from any given moment to the next.
Question: Does Director run faster or slower depending on whether a sprite is puppeted or not? Or is there any difference at all?
3. Checking to see if a variable is set before setting it.
I have only recently come into the Director/Lingo world of programming; I was originally a VB programmer for almost a decade. In Visual Basic, I have noticed that code executes faster if you avoid unneeded variable assignments by checking whether the value is already set.
Example: In Visual Basic, let's say you have an array of 1000 elements; some elements are already set, some are not.
for i = 1 to 1000
    var(i) = i
next
The above code executes fast, but if you are doing that very often, it can be a bottleneck.
The code below, while doing exactly the same thing, is actually faster.
for i = 1 to 1000
    if var(i) <> i then var(i) = i
next
In VB, it's faster to check a variable than to do the assignment when it's not needed. Now granted, this is a poor example, and usually I am dealing with much more complex routines, but the basic principle of what I am trying to get across is the same.
Question: in Director/Lingo, would it speed up the execution of code to do a variable check before the assignment, or is the very act of adding the check going to slow down the execution?
Anyone have any ideas about these? Or anyone have any other tips about stupid little things to speed up execution of code?

> 1. Multiple ExitFrame calls vs a single ExitFrame call.
You should consider dropping the exitFrame approach in favor of an OOP model.
OOP is not faster, just as a dual-core processor is not faster than a single-core one running at double the speed. In fact, the single core should be faster, since there is no synchronization penalty; the dual core, however, is much smoother. Same with OOP: you pay a penalty, since you are using more objects, but the objects can be smart enough to adjust the number of instructions they execute as required.
If, for example, you have objects whose coordinates can be calculated and stored inside the object, you don't have to update the stage each time an object moves. You can do that once, for all objects, at set intervals. As long as the interval is large enough to cover all the intermediate processing plus the updateStage cost, you'll have a very smooth movie.
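To illustrate that pattern, here is a minimal Lingo sketch (all names are hypothetical, not from the original post): a parent script stores and updates its own coordinates every frame, while a single frame script redraws everything only at set intervals.

-- movie script: set up the globals (assumes sprites 1..10 exist in the score)
global gMovers, gNextDraw

on startMovie
  gMovers = []
  gNextDraw = 0
  repeat with ch = 1 to 10
    gMovers.append(new(script "Mover", ch))
  end repeat
end

-- parent script "Mover": each object keeps its own position
property pSprite, pX, pDX

on new me, aSpriteNum
  pSprite = aSpriteNum
  pX = sprite(aSpriteNum).locH
  pDX = 2
  return me
end

on update me
  pX = pX + pDX  -- cheap math only, no stage access here
end

on draw me
  sprite(pSprite).locH = pX
end

-- frame script: one exitFrame drives every object
global gMovers, gNextDraw

on exitFrame
  repeat with m in gMovers
    m.update()
  end repeat
  if the milliSeconds >= gNextDraw then
    repeat with m in gMovers
      m.draw()
    end repeat
    updateStage
    gNextDraw = the milliSeconds + 50  -- redraw about 20 times a second
  end if
  go the frame
end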
> 2. Puppeted sprites vs Non-Puppeted sprites.
Puppeting does not affect performance, or at least it shouldn't. The number of sprites, and the number of behaviors attached to each sprite, does. However, even with a very large number of active sprites, that bookkeeping should be a joke for any modern CPU. What does cost is redrawing the sprites. So, if it's image sprites we are talking about, you should perhaps consider a single bitmap member used as a buffer, with imaging Lingo drawing each frame. Mouse click events can be evaluated by keeping a list of virtual sprite positions. Even if you are not familiar with the above, the time you invest in learning what is required will be rewarded with a significant performance increase, up to several times faster.
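A rough sketch of that buffering approach (the member names and the virtual-sprite property lists are made up for illustration):

-- each virtual sprite is a property list, e.g.
-- [#x: 10, #y: 20, #memberName: "ball", #bounds: rect(0, 0, 0, 0)]

on drawBuffer vSprites
  buf = member("buffer").image
  buf.fill(buf.rect, rgb(255, 255, 255))  -- clear the buffer
  repeat with vs in vSprites
    img = member(vs.memberName).image
    dst = rect(vs.x, vs.y, vs.x + img.width, vs.y + img.height)
    setaProp(vs, #bounds, dst)  -- remember where it was drawn
    buf.copyPixels(img, dst, img.rect)
  end repeat
  updateStage
end

-- hit testing against the stored virtual positions
on findClicked vSprites, clickPoint
  repeat with vs in vSprites
    if inside(clickPoint, vs.bounds) then return vs
  end repeat
  return VOID
end

A single sprite showing the "buffer" member then replaces the hundreds of individual image sprites, so Director composites only one channel.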
> 3. Checking to see if a variable is set before setting it.
You can create a simple Lingo benchmarking script to get your answers. As a general principle, the fewer commands, the faster. Though I'm not really into VB (I find C++ and Lingo to be a killer combination), I can guess why this is happening: when setting a variable, VB is executing some code to evaluate what the old value was and what, if anything, has to be released. Though documented nowhere, it seems that several years ago someone on the Director dev team was smart enough to take this into account when creating the object known as a Lingo variable (64-bit internally, btw). So Director doesn't suffer from slow variable release, that is, releasing what shouldn't be released.
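For instance, a throwaway benchmark along these lines (run it from the Message window) will answer the question on your own machine:

-- compare unconditional assignment vs. check-then-assign
on benchAssign
  lst = []
  repeat with i = 1 to 1000
    lst.append(0)
  end repeat

  t = the milliSeconds
  repeat with n = 1 to 100
    repeat with i = 1 to 1000
      lst[i] = i  -- always assign
    end repeat
  end repeat
  put "assign always:" && (the milliSeconds - t) && "ms"

  t = the milliSeconds
  repeat with n = 1 to 100
    repeat with i = 1 to 1000
      if lst[i] <> i then lst[i] = i  -- check before assigning
    end repeat
  end repeat
  put "check first:" && (the milliSeconds - t) && "ms"
end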
> Anyone have any ideas about these? Or anyone have any other tips about
> stupid little things to speed up execution of code?
You know, a few years ago, Lingo performance and speeding up Director were a regular topic of discussion on this list. That is not the case anymore, and though I can guess a couple of reasons why, I found none that qualified as an explanation, not in my book at least. In case you have any more questions, I'd be happy to answer. Building a site with Director performance hints and Lingo code optimization is high on my to-do list.
"DaveGallant" <[email protected]> wrote in
message
news:[email protected]...

Similar Messages

  • Coldfusion 11 SSL Certs applied - The APR based Apache Tomcat library which allows optimal performance in production environments,

    Coldfusion 11
    Windows Server 2012 R2
    Both the Coldfusion admin and additional site work fine on HTTP.
    As soon as I attempt to enable SSL websockets and install SSL certs, the Coldfusion 11 Application service will not start. I followed the steps below....
    Coldfusion 11 - Web Sockets via SSL
    The Coldfusion-error.log shows
    Jan 26, 2015 3:21:23 PM org.apache.catalina.core.AprLifecycleListener init
    INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path
    Server was a cloned VM of the test server with developer copy of CF11, but license has been purchased and applied. SSL certs have been imported successfully, paths are correct in CF Admin to the cert file etc.
    Do I need to install another version of Coldfusion to get around this issue or is there a download update I need to apply?
    If i reconfig the \cfusion\runtime\conf\server.xml to comment out the SSL sections it works fine.
    Any assistance welcome - I can't allow this site to be made publicly available without using SSL.
    SM

    @Scott, first are you running update 3? If so, let's clarify at the outset that, as that bug report (you point to) does indicate in the notes below it, there is a fix for a problem where this feature broke in that release. And as it notes, you can email [email protected] to request the fix (referring to that bug), or you can wait for it to be released publicly as part of a larger set of fixes.
    If you are NOT on update 3, or you may apply the fix and find things still don’t work, I would wonder about a few things, from what you’ve described.
    First, you say that the CF service won’t start, and you offer some lines from the ColdFusion-error log. Just to be clear, those particular error messages are common and nothing to worry about. They definitely do NOT reflect any reason CF doesn’t start. But are you confirming that that time (in the log lines) is in fact the time that you had started CF, when it would not start? I’d suspect not.
    Look instead in the coldfusion-out.log. What does THAT log show at the time you try to start CF and it won't start? You may find something else there. (And since you refer to editing the server.xml file, you may find the log complains that, because of an error in the XML, it can't "parse" the file. It's worth checking.)
    You say also that you have confirmed that “paths are correct in CF Admin to the cert file”. What path are you referring to? There’s no page in the CF admin that points to the CACERTS file in which the certs are stored. Do you perhaps mean on the “system info” or “settings summary” page? Even so there’s still no line in there which refers to the “cert file”.
    Instead—and this could be a part of your problem—the cert file is simply found WITHIN the directory where CF is pointed to find its JVM. Wherever THAT is, is where you need to put any certificates. So take a look at the CF Admin, either in the "java and jvm" page (and the value of its "Java Virtual Machine Path"), or in the "settings summary" or "system information" pages and their value for "Java Home". Is that something like \coldfusion11\jre? Or something like \Java\jdk1.7.0_71\jre? Whichever it is, THAT's where you need to put the certs, within there (in its \lib\security folder).
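    For instance, importing a certificate into the JVM that CF is actually using might look like this (a sketch; the alias and file path are placeholders, and "changeit" is the default cacerts password):

    keytool -importcert -alias mycert -file C:\certs\mycert.cer -keystore "C:\ColdFusion11\jre\lib\security\cacerts" -storepass changeit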
    Finally, when you say that if you “comment out the SSL sections  it works fine”, do you mean that a) CF comes up and b) some example code calling your socket works, as long as you don’t use SSL?
    To be clear, no, you don’t need any other version of CF11 to get websockets to work. But if you are on update 3, that may be the simple problem. Let us know how it goes for you with this info.
    /charlie

  • I have a 27" iMac and want to play Windows games with optimal performance.  what do I need to do?

    I have a 27" iMac and want to play Windows games with optimal performance.  What do I need to do?

    Install windows with Bootcamp. http://www.apple.com/support/bootcamp/

  • OWA doesn't work on all servers! "Use the following link to open this mailbox with optimal performance:"

    Hi All,
    I am having a problem with OWA. I have 3 Exchange 2007 servers on 3 different sites in the same AD Forest. My emails work fine internally and externally with Outlook. I have set up Outlook Anywhere on Server No1 with the external address:
    https://mail.company.com/owa
    Everybody can login fine.
    I have setup OWA on the other 2 servers with the same address, but when a user from that site tries to login gets the following message:
    "Use the following link to open this mailbox with optimal performance:"
    So they can't see their emails!
    Please help me out as I am trying to solve it for days now and I can't find out what is wrong! 
    Thank you All!
    Akitan

    Hi Akitan,
    From your description, CAS server 1 is exposed to the Internet. The other two non-Internet-facing Active Directory sites rely on the Internet-facing Client Access server 1 to proxy all pertinent requests from external clients. If I have misunderstood your concern, please let me know.
    In your case, I recommend you ensure that Integrated Windows authentication for OWA virtual directory is enabled.
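    For example, you can check and set this from the Exchange Management Shell (a sketch; the server name and virtual directory identity below are placeholders):

    Get-OwaVirtualDirectory -Server "CAS02" | Format-List Identity, *Authentication*
    Set-OwaVirtualDirectory -Identity "CAS02\owa (Default Web Site)" -WindowsAuthentication $true
    iisreset /noforce   # apply the change on that CAS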
    What's more, here is a helpful thread for your reference.
    CAS Proxy between sites of OWA /Exchange Virtual directory
    http://social.technet.microsoft.com/Forums/exchange/en-US/895a304f-8fb1-4909-8b48-480a7303afd4/cas-proxy-between-sites-of-owa-exchange-virtual-directory?forum=exchangesvrclientslegacy
    Besides, if your issue is urgent, you can connect Microsoft Support. For your convenience:
    https://support.microsoft.com/?ln=en-us&wa=wsignin1.0
    Anyway, if you still want to solve your issue on the forum, I will continue to help you.
    If you need further assistance, please feel free to let me know.
    Best regards,
    Amy
    Amy Wang
    TechNet Community Support

  • Not getting optimal performance out of premiere Pro CC with New Mac Pro

    I have just purchased a new Mac Pro: 8-core, 64GB of RAM, and D700 graphics cards. I say I'm not getting fully optimized performance because my machine will not play back 4K footage at full resolution. Should it? I looked into the package contents of Premiere Pro and noticed that the D700 graphics cards were not listed under supported cards. If I type those in, will that fix the issue?

    No it won't. No editing software out right now will play back 4K media such as Red at full resolution. The latency for processing the codecs is just too high right now. Redcine X Pro can with Red, if you have the GPU-accelerated debayering version. Until that decoding is done on the GPUs in the editing applications, you most likely won't see full-res playback. I wouldn't expect the Red debayering to be done on the GPUs in Adobe for a while, since Red is still refining it.
    Eric
    ADK

  • MATLAB and LabVIEW Communication Optimal Performance

    I have tried my own code, and searched through forums and examples, to try to figure out the best method to communicate between LabVIEW and MATLAB. Most of the information I found was over a year old, and I was wondering if there is a better current solution. My goal is to work in LabVIEW to collect the data, process it in MATLAB, and return the results to LabVIEW. I have encountered some difficulty in my search, and before I delve even further into one method in particular, I was wondering if anybody had an optimal solution with this communication protocol, or solutions to the errors I have encountered thus far.
    I have looked at the following methods.
    1)TCP/IP and a very good example found here: http://www.mathworks.com/matlabcentral/fileexchange/11802-matlab-tcp-ip-code-example
    When I try to adjust even the example and communicate for my own purposes I get the following errors
    Error 63 if MATLAB server not running
    Error 66 occurs if the TCP/IP connection is closed by the peer. In this case, Windows notices that no data is returned within a reasonable time, and it closes the TCP/IP connection. When LabVIEW attempts to communicate after Windows closes the connection, error 66 is the result. 
    However, the example itself works perfectly and does not get these errors
    2) MathScript Node: works, but the post below states that the MATLAB Node is faster.
    "computing fft of a 1024x1024 matrix ten times (attached code). Result is that Matlab node version takes 0.5s versus 1.6s for Mathscript node version."
     http://forums.ni.com/t5/LabVIEW/Why-are-mathscript-performances-much-below-matlab-s/m-p/2369760/high...
    3) MATLAB Node, which states it uses ActiveX technology: seemingly works well, but loses time in data transfer.
    4) Trying to use the ActiveX functions, or whether there is other Automation potential.
    5) Other solutions that I have not found that might be better suited.
    Thank you for any help or suggestions in advance. 

    Barp and Mikeporter,
    Thank you for your assistance:
    The reason I need to do the processing in MATLAB is, as you mentioned, that the processing script is coming from another person who has already developed it in MATLAB. I almost have to treat it as a black box.
    What was interesting about the TCP/IP method is that none of the errors show up when I run the example, but if I try to modify it in a new VI I get the errors.
    I have attached a simple program that just has a basic Butterworth low-pass filter; I am trying to confirm whether it works in the MATLAB Node. I have done other simple codes which work, but this one does not seem to implement the appropriate filter. The LabVIEW signal and LabVIEW filter seem to work at the default values (but not if I change the sampling rate) for the simulation of the signal; the MATLAB signal and MATLAB filter work, but the LabVIEW signal processed in MATLAB is not working...
    Ideally it would be bandpass filtered (0.1-30) at sampling rate of 256 Hz and further processed from there, but I can't even seem to get low pass to work in the matlab to labview communication.
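    For reference, the filtering step itself in MATLAB might look roughly like this (a sketch only, assuming the Signal Processing Toolbox; x stands for the signal array handed over from LabVIEW):

    fs = 256;                            % sampling rate in Hz
    [b, a] = butter(2, [0.1 30]/(fs/2)); % 2nd-order Butterworth bandpass, 0.1-30 Hz
    y = filtfilt(b, a, x);               % zero-phase filtering of the input signal x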
    Any help would be greatly appreciated.  Once I have that working I will have more of an idea of the constraints of the actual processing Matlab Code I will be using.
    Thank you again.
    -cj18
    Attachments:
    labview_matlab_filter.vi ‏70 KB

  • Installing/configuring the InfoSphere Optim Performance Manager system

    I am looking at the InfoSphere Optim..... to install.  I have not used this tool before and am skeptical.  Since the Control Center is deprecated it looks like this is the replacement.  Is this tool an additional cost to license or is it available to customers who have advanced enterprise editions?
    Can this be run on a standard Windows PC? Or, I should ask, will it run well? I would install this onto my work PC and use it for my production (EHP & Netweaver BW) systems. Is there a quick setup guide, or does this need to be installed via SAP install?
    Lastly, what is the ramp up time that it will take to obtain performance recommendations out of the tool?
    Thanks to all.
    Len Jesse.
    Knowledge is to be shared.

    Found this.  Can be closed.

  • How to split Big Applications over two HDs for optimal performance?

    Assuming both situations are using a 80gb Raptor [drive #1] and 500gb 7K500 [drive #2]:
    would it be better to ....
    a) install both the OS and applications on the raptor, while using the 500gb as the scratch?
    b) install the OS on the raptor, and have all applications installed on the 500gb drive and have the scratch on the same drive?
    how about for applications such as Final Cut Pro, which have a lot of associated files (and sound bites in the case of Audio Programs) ..... what would be the best way to split this up over those two drives? (for optimal CPU/disk performance)
    any help would be MUCH appreciated!! THANKS

    Methinks you're overworrying the problem. You should be able to install Tiger and most applications into something less than 30 GB. Everything I have, iLife, Office 2004, Quicken, Toast, TurboTax, and a slew more, takes up less than 10 GB. Install everything you have on the Raptor, then partition the 500 GB into two or four partitions and use them to store your data, music, movie, and photo files. Link them to your User folder's like-named folders. That should satisfy your needs.

  • Optimizing performance when querying XML data

    I have a table in my database containing information about persons. The table has an xmltype column with a lot of data about each person.
    One of the things in there is a telephone number. What I now need to figure out is whether there are any duplicate phone numbers in there.
    The xml basically looks like this (simplified example):
    <DATAGROUP>
        <PERSON>
            <BUSINESS_ID>123</BUSINESS_ID>
            <INITIALS>M.</INITIALS>
            <NAME>Testperson</NAME>
            <BIRTHDATE>1977-12-12T00:00:00</BIRTHDATE>
            <GENDER>F</GENDER>
            <TELEPHONE>
                <COUNTRYCODE>34</COUNTRYCODE>
                <AREACODE>06</AREACODE>
                <LOCALCODE>4318527235</LOCALCODE>
            </TELEPHONE>
        </PERSON>
    </DATAGROUP>
    As a result I would need the pk_id of the table with the xmltype column in it, and an id that's unique to the person (the business_id that's also somewhere in the XML).
    I've constructed this query, which will give me all telephone numbers and the number of times they occur:
      SELECT   OD.pk_ID,
               tel.business_id  ,
           COUNT ( * ) OVER (PARTITION BY tel.COUNTRYCODE, tel.AREACODE, tel.LOCALCODE) totalcount
           FROM   xml_data od,
           XMLTABLE ('/DATAGROUP/PERSON' PASSING OD.DATAGROUP
                     COLUMNS "COUNTRYCODE" NUMBER PATH '/PERSON/TELEPHONE/COUNTRYCODE',
                             "AREACODE" NUMBER PATH '/PERSON/TELEPHONE/AREACODE',
                             "LOCALCODE" NUMBER PATH '/PERSON/TELEPHONE/LOCALCODE',
                             "BUSINESS_ID"  NUMBER PATH '/PERSON/BUSINESS_ID'
                 ) tel
           WHERE  tel.LOCALCODE is not null --ignore persons without a tel nr
    Since I am only interested in the telephone number that occur more than once, I used the above query as a subquery:
    WITH q as (
      SELECT   OD.pk_ID,
               tel.business_id  ,
           COUNT ( * ) OVER (PARTITION BY tel.COUNTRYCODE, tel.AREACODE, tel.LOCALCODE) totalcount
           FROM   xml_data od,
           XMLTABLE ('/DATAGROUP/PERSON' PASSING OD.DATAGROUP
                     COLUMNS "COUNTRYCODE" NUMBER PATH '/PERSON/TELEPHONE/COUNTRYCODE',
                             "AREACODE" NUMBER PATH '/PERSON/TELEPHONE/AREACODE',
                             "LOCALCODE" NUMBER PATH '/PERSON/TELEPHONE/LOCALCODE',
                             "BUSINESS_ID"  NUMBER PATH '/PERSON/BUSINESS_ID'
                 ) tel
           WHERE  tel.LOCALCODE is not null) --ignore persons without a tel nr
    SELECT   OD.pk_ID,  tel.business_id
      FROM   q
    WHERE   totalcount > 1
    Now this is working and giving me the right results, but the performance is dreadful with larger sets of data, and it even runs into errors like "LPX-00651 VM Stack overflow".
    What I see when I do an explain plan for the query is that there are things happening like "COLLECTION ITERATOR PICKLER FETCH PROCEDURE SYS.XQSEQUENCEFROMXMLTYPE", which seems to be something like the equivalent of a full table scan if I google it.
    Any ideas how I can speed up this query? are there maybe smarter ways to do this?
    One thing to note is that the XMLTYPE data is not indexed in any way. Is there a possibility to do this? And how? I read about it in the Oracle docs, but they were not very clear to me.

    The "COLLECTION ITERATOR PICKLER FETCH" operation means that most likely the XMLType storage is BASICFILE CLOB, therefore greatly limiting the range of optimization techniques that Oracle could apply.
    You can confirm what the current storage is by looking at the table DDL, as Jason asked.
    CLOB storage is deprecated now in favor of SECUREFILE BINARY XML (the default in 11.2.0.2).
    Migrating the column to BINARY XML should give you a first significant improvement in the query.
    If the query is actually a recurring task, then it may further benefit from a structured XML index.
    Here's a small test case :
    create table xml_data nologging as
    select level as pk_id, xmlparse(document '<DATAGROUP>
        <PERSON>
            <BUSINESS_ID>'||to_char(level)||'</BUSINESS_ID>
            <INITIALS>M.</INITIALS>
            <NAME>Testperson</NAME>
            <BIRTHDATE>1977-12-12T00:00:00</BIRTHDATE>
            <GENDER>F</GENDER>
            <TELEPHONE>
                <COUNTRYCODE>34</COUNTRYCODE>
                <AREACODE>06</AREACODE>
                <LOCALCODE>'||to_char(trunc(dbms_random.value(1,10000)))||'</LOCALCODE>
            </TELEPHONE>
        </PERSON>
    </DATAGROUP>' wellformed) as datagroup
    from dual
    connect by level <= 100000 ;
    create index xml_data_sxi on xml_data (datagroup) indextype is xdb.xmlindex
    parameters (q'{
    XMLTABLE xml_data_xtab '/DATAGROUP/PERSON'
    COLUMNS countrycode number path 'TELEPHONE/COUNTRYCODE',
            areacode    number path 'TELEPHONE/AREACODE',
            localcode   number path 'TELEPHONE/LOCALCODE',
            business_id number path 'BUSINESS_ID'
    }');
    call dbms_stats.gather_table_stats(user, 'XML_DATA');
    SQL> set autotrace traceonly
    SQL> set timing on
    SQL> select pk_id
      2       , business_id
      3       , totalcount
      4  from (
      5    select t.pk_id
      6         , x.business_id
      7         , count(*) over (partition by x.countrycode, x.areacode, x.localcode) totalcount
      8    from xml_data t
      9       , xmltable(
    10           '/DATAGROUP/PERSON'
    11           passing t.datagroup
    12           columns countrycode number path 'TELEPHONE/COUNTRYCODE'
    13                 , areacode    number path 'TELEPHONE/AREACODE'
    14                 , localcode   number path 'TELEPHONE/LOCALCODE'
    15                 , business_id number path 'BUSINESS_ID'
    16         ) x
    17    where x.localcode is not null
    18  ) v
    19  where v.totalcount > 1 ;
    99998 rows selected.
    Elapsed: 00:00:03.79
    Execution Plan
    Plan hash value: 3200397756
    | Id  | Operation            | Name          | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |               |   100K|  3808K|       |  2068   (1)| 00:00:25 |
    |*  1 |  VIEW                |               |   100K|  3808K|       |  2068   (1)| 00:00:25 |
    |   2 |   WINDOW SORT        |               |   100K|  4101K|  5528K|  2068   (1)| 00:00:25 |
    |*  3 |    HASH JOIN         |               |   100K|  4101K|  2840K|   985   (1)| 00:00:12 |
    |   4 |     TABLE ACCESS FULL| XML_DATA      |   100K|  1660K|       |   533   (1)| 00:00:07 |
    |*  5 |     TABLE ACCESS FULL| XML_DATA_XTAB |   107K|  2616K|       |   123   (1)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - filter("V"."TOTALCOUNT">1)
       3 - access("T".ROWID="SYS_SXI_0"."RID")
       5 - filter("SYS_SXI_0"."LOCALCODE" IS NOT NULL)
    Statistics
              0  recursive calls
              1  db block gets
           2359  consistent gets
            485  physical reads
            168  redo size
        2352128  bytes sent via SQL*Net to client
          73746  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
          99998  rows processed
    If the above is still not satisfying then you can try structured storage (schema-based).

  • LR3 configuration for optimal performances

    Hi,
    I'm going to install LR3 on a newly built PC with a 150GB VelociRaptor (10000rpm) for the OS (Windows 7 64b), a small SSD for virtual memory (25-30GB free space), and a 1TB 7200rpm hard drive for the photographs and other data. There is 8GB of RAM.
    Where would you suggest putting the cache and catalogue of LR3 to optimize the performances ?
    Thanks in advance for your answers.
    Adrien

    The SSD is an Intel X25-V (40GB).
    What virtual memory do you put there? The Windows page file or the Photoshop scratch disk?
    I don't know what you do with your computer, but in your place I would install the system onto the SSD and switch page file off.
    Photoshop scratch disk to whataver drive, because with 8 GB RAM it should NOT need too much scratching for Photography.
    Of course, if you're doing huge things with graphic design in Photoshop, your requirements may be different (you might indeed need a page file and fast scratch for Photoshop).
    EDIT: "should NOT need" instead of "should need"

  • Booting from a USB Flash Drive and Optimizing Performance

    I know that you can boot Leopard from a USB flash drive, and I have a 16GB flash drive I'm using as a bootup disk on this 2006 MacBook I have that has a broken hard drive (it was recalled, but the recall ended in 2010).
    The problem is that it's very slow. I have about 8GB of free space on it, so it can't be the VRAM being too small. The USB speed just stinks. There must be ways to optimize the OS to run off of a slow drive by accessing data as infrequently as possible. What can I do to speed this up? Should I disable hard disk sleeping? Should I use some kind of Mac tuneup/optimization tool's feature?
    I plan to get an internal hard drive for it, but I'm stuck with this for a while.

    Not all USB2 flash drives are equal. In fact, I bought two 16 GB USB drives for cheap; alongside them were 16 GB flash drives of the very same brand, just a different model, for 5 times as much!
    Me was thinking, how bad could they be? Well, they could be 5 times worse than other USB2 flash drives... LOL on me!
    On extremely rare occasions they will read or write at 5 MB/s, and very often 1 MB/s...
    My Buffalo flash drives do near USB2's top speed; see some diffs...
    http://usbspeed.nirsoft.net/
    Optimizing yours will give little benefit.
    You could also do a USB2 RAID0 setup if you have enough ports.

  • I want to use Final Cut Pro v 10 on my MacBook Pro and need advice with ram, video card or HD upgrade for optimal performance.

    I currently have 8GB ram and the stock video card and an ssd HD 240 GB.

    what Russ said…
    ext.HDD, for simple AVCHD tasks, I would recommend usb3 too…
    It's not state of the art hardware, but it's not chopped liver either.
    LOL Thank, you, Russ for phrase-of-the-day!
    <scribbling into my vocabulary book>

  • OPtimizing Performance for Select query on NAST table

    Hi All,
       We are fetching a single record from the NAST table. The table has around 10 million entries.
       The SELECT query takes around 5-6 minutes to return.
       We are not using the primary key completely. We are using only one field of the primary key.
        The field is also part of the index, but we are not using all the fields in the index either.
        We need to bring down the time. What can be the solution? I can't see any changes to the code, since it's a single query and we can't use the entire primary key.
       Would creating an index on the fields that we are concerned with help in this regard?
       Open to all solutions.
    Thanks in advance,
    Imran

    Hi,
    Please check this thread
    http://sap.ittoolbox.com/documents/popular-q-and-a/specifying-the-index-to-be-used-2462
    As for creating another secondary index on NAST, will Basis approve this?
    aRs

  • OPtimizing Performance for Select query on huge table

    Hi All,
       We are fetching a single record from the NAST table. The table has around 10 million entries.
       The SELECT query takes around 5-6 minutes to return.
       We are not using the primary key completely. We are using only one field of the primary key.
        The field is also part of the index, but we are not using all the fields in the index either.
        We need to bring down the time. What can be the solution? I can't see any changes to the code, since it's a single query and we can't use the entire primary key.
       Would creating an index on the fields that we are concerned with help in this regard?
       Open to all solutions.
    Thanks in advance,
    Imran

    There are sometimes tricks you can use to get it to use the index more efficiently. If you let us know which fields you are using in the SELECT (all of them actually), we might be able to help.
    Or are you saying you can't change the code at all?
    Please don't create duplicate posts though.
    Rob
    Message was edited by:
            Rob Burbank

  • Optimizing performance question

    I understand that the best practice for use of images is to resize, crop, and rotate images to the desired size and rotation before importing them into iBooks Author. Does that apply to a situation where you are re-using the same image several times in the project and will be flipping or rotating that image in other sections? In other words, is it better to import the same image multiple times, or to copy and paste a previously imported image and use the rotate and resize commands within iBooks Author? Does this make a difference as far as size of the document and/or speed?
    Thanks!

    Your book is just a zip file - you can change the suffix, unpack it, drill down and confirm for yourself if need be
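    For example (a rough sketch from the command line; the file names are placeholders, assuming an exported .ibooks file):

    cp MyBook.ibooks MyBook.zip
    unzip MyBook.zip -d MyBook-contents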
    Good luck w/your books!
    Ken
