Another performance question

I am running Windows XP, 1.8 GHz, 1 GB RAM, integrated graphics (only 16 MB I think).
I installed Lightroom 1.0 and about 7000 photos. It worked just great. For various reasons I had to remove Lightroom. I installed v1.1. At the same time I moved my photos to a 5400 rpm USB hard drive and re-imported them. My LR database and catalogue (I don't know the difference) are on my C: drive.
Now rendering thumbnails takes 15 secs, viewing in Develop takes about 3 minutes, and viewing or developing at 1:1 takes forever. It's just unusable.
Any ideas on what I have done wrong?

My guess is also the external USB drive. Your photos should be on an internal hard drive for best performance. Even USB2 is much slower than an internal hard drive. Also uncheck Auto Write Changes to XMP if that happens to be enabled in Preferences/Catalog Settings/Metadata.
The database and catalog are the same thing. For some strange reason Adobe changed the name of the photo database to catalog in LR 1.1.

Similar Messages

  • Simple performance question

    Simple performance question. In the simplest way possible, assume
    I have an int[][][][][] matrix and a boolean add. The array is several dimensions long.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][...] += constant;
                        else
                            matrix[i][ii][iii][...] -= constant;
    }

    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second, only 1. It is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro optimization?
    Almost certainly not; the main reason being that
    matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    but I will follow amickr advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
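    For anyone who wants to check this on their own JVM, here is a rough sketch of a timing harness (the class, field and constant names are invented for illustration, and a plain System.nanoTime loop is used rather than a proper benchmark harness such as JMH, so treat the numbers as indicative only) comparing the branch-inside and branch-hoisted variants on a 4-D array:
    public class BranchHoistTest {
        static final int N = 40;            // stands in for dimension1 in the post above
        static final int CONSTANT = 3;
        static int[][][][] matrix = new int[N][N][N][N];

        // Variant 1: condition checked inside the innermost loop.
        static void processBranchInside(boolean add) {
            for (int i = 0; i < N; i++)
                for (int ii = 0; ii < N; ii++)
                    for (int iii = 0; iii < N; iii++)
                        for (int iiii = 0; iiii < N; iiii++)
                            if (add)
                                matrix[i][ii][iii][iiii] += CONSTANT;
                            else
                                matrix[i][ii][iii][iiii] -= CONSTANT;
        }

        // Variant 2: condition hoisted out of the loops.
        static void processBranchOutside(boolean add) {
            if (add) {
                for (int i = 0; i < N; i++)
                    for (int ii = 0; ii < N; ii++)
                        for (int iii = 0; iii < N; iii++)
                            for (int iiii = 0; iiii < N; iiii++)
                                matrix[i][ii][iii][iiii] += CONSTANT;
            } else {
                for (int i = 0; i < N; i++)
                    for (int ii = 0; ii < N; ii++)
                        for (int iii = 0; iii < N; iii++)
                            for (int iiii = 0; iiii < N; iiii++)
                                matrix[i][ii][iii][iiii] -= CONSTANT;
            }
        }

        public static void main(String[] args) {
            // Warm up so the JIT has compiled both methods before timing.
            for (int w = 0; w < 100; w++) { processBranchInside(true); processBranchOutside(false); }
            long t1 = System.nanoTime();
            for (int r = 0; r < 100; r++) processBranchInside(r % 2 == 0);
            long t2 = System.nanoTime();
            for (int r = 0; r < 100; r++) processBranchOutside(r % 2 == 0);
            long t3 = System.nanoTime();
            System.out.printf("branch inside : %d ms%n", (t2 - t1) / 1_000_000);
            System.out.printf("branch outside: %d ms%n", (t3 - t2) / 1_000_000);
        }
    }
    In practice the per-element += / -= work dominates, as Winston says, so the two timings usually come out very close.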

  • BPM performance question

    Guys,
    I do understand that ccBPM is very resource-hungry, but what I was wondering is this:
    Once you use BPM, does an extra step decrease performance significantly? Or does it just need slightly more resources?
    More specifically, we have quite complex mapping in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the timeout higher for sync processing.
    Go to Integration Processing in SXMB_ADM and add the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT, set to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: raise SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout lower than the one you set in XI; otherwise yours will go on and finish while your partner may end up sending the message twice.
    When you go for BPM, the whole workflow has to come into action. For example, if your mapping lasts < 1 sec without BPM, in a BPM the transformation step can last 2 seconds plus one second of mapping (that's just an example). So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
    See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Another newb question: multiple virtual servers

    Hi, I have yet another ignorant question. I have several unrelated web projects that I am working on, and I would like to be able to set up a virtual server for each one for testing purposes, such as: http://project1, http://project2, http://project3. Can someone tell me if this is doable, and if there are any tutorials/resources on this for someone who has zero experience running a web server? Sorry for being so ignorant!

    Yes, it is doable.
    You can set up virtual servers either by IP or by name.
    If you have one IP and want to set them up by name (e.g. http://project1, http://project2, http://project3), you can do so easily with this type of configuration:
    <virtual-server>
        <name>mydomain</name>
        <http-listener-name>http-listener-1</http-listener-name>
        <host>*.mydomain.com</host>
        <document-root>/www/domain</document-root>
      </virtual-server>
      <virtual-server>
        <name>myotherdomain</name>
        <http-listener-name>http-listener-1</http-listener-name>
        <host>*.myotherdomain.com</host>
        <document-root>/www/myotherdomain</document-root>
      </virtual-server>
    ...
    The important part here is that
    a) all virtual servers share the same HTTP listener
    b) which virtual server serves the request depends on the $HOST request header sent by the client. Sun Web Server does the matching for you: it will match $HOST against the virtual server's host attribute. Depending on which site you connect to, the right virtual server will be used (see the toy sketch at the end of this reply).
    c) if the $HOST request header does not match any of the virtual servers, then the default virtual server defined in the HTTP listener will be used.
    To create a virtual server, use the Admin GUI: access the configuration and then add a new virtual server. Or use the following CLI command.
    wadm> create-virtual-server --config=myconfig --http-listener-name=http-listener-1 --document-root=/www/docs/myserver.com --host-pattern=myserver.com --log-file=../logs/myserver.com-error_logs myserver
    The host pattern will be used for matching. Some of these elements might be optional.
    Hope that helps. And keep the questions coming :D
    Edit: Also check the documentation
    Using Virtual Servers in SJS Web Server 7.0
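    Purely to illustrate the Host-header matching described above, here is a toy sketch (not Sun Web Server's actual implementation; the hostnames and document roots are the made-up ones from the config snippet, and the default docroot is invented): name-based virtual hosting boils down to picking a document root from the request's Host header and falling back to a default when nothing matches.
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy illustration only: map a Host header to a document root, as in points a)-c).
    public class HostMatcherDemo {
        static final Map<String, String> VIRTUAL_SERVERS = new LinkedHashMap<>();
        static {
            VIRTUAL_SERVERS.put(".mydomain.com", "/www/domain");             // *.mydomain.com
            VIRTUAL_SERVERS.put(".myotherdomain.com", "/www/myotherdomain"); // *.myotherdomain.com
        }
        static final String DEFAULT_DOCROOT = "/www/default";  // hypothetical default virtual server

        static String docRootFor(String hostHeader) {
            String host = hostHeader.split(":")[0];             // strip any :port suffix
            for (Map.Entry<String, String> e : VIRTUAL_SERVERS.entrySet()) {
                if (host.endsWith(e.getKey())) {
                    return e.getValue();
                }
            }
            return DEFAULT_DOCROOT;                             // no match -> default virtual server
        }

        public static void main(String[] args) {
            System.out.println(docRootFor("project1.mydomain.com"));       // /www/domain
            System.out.println(docRootFor("www.myotherdomain.com:8080"));  // /www/myotherdomain
            System.out.println(docRootFor("unknown.example.com"));         // /www/default
        }
    }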

  • Swing performance question: CPU-bound

    Hi,
    I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
    Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
    http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
    Thanks,
    Curt

    You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
    The event queue is using Thread.wait to sleep until it gets some more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck.
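    To see the difference between elapsed time and CPU time in a profiler, here is a small, hypothetical sketch (not related to Curt's actual code) of a thread idling in a wait() call, the same pattern an event loop uses while waiting for work; a sampling profiler attributes nearly all of the elapsed time to the wait frame even though CPU usage stays near zero.
    // Made-up example: a worker that spends almost all of its wall-clock time
    // blocked in wait(). Profile the 10-second window at the end and the wait
    // frame dominates the "time" column while CPU usage is essentially zero.
    public class IdleWaitDemo {
        private static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                synchronized (lock) {
                    while (true) {
                        try {
                            lock.wait();                    // blocked, not burning CPU
                        } catch (InterruptedException e) {
                            return;                         // exit when interrupted
                        }
                        System.out.println("woke up to handle an event");
                    }
                }
            }, "event-loop-like-worker");
            worker.start();

            for (int i = 0; i < 3; i++) {                   // wake the worker a few times
                Thread.sleep(1000);
                synchronized (lock) { lock.notify(); }
            }
            Thread.sleep(10_000);                           // then let it sit idle
            worker.interrupt();
        }
    }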

  • Another simple question

    Hello friends:
    Another simple question: I need to learn things about Oracle on my desktop.
    My machine runs Windows 98. Does Oracle have a desktop product in its database line?
    For example, Oracle Personal?
    And Oracle Lite? What's the main difference between Oracle Personal and Oracle Lite?
    Thank You
    Gracias
    Ing. Pablo Romero
    CORDOBA ARGENTINA

    1. I didn't know the answer to your first question, but I googled it and it says the item is the in-call audio boost.
    http://forums.crackberry.com/f71/flag-icon-47659/
    2. isn't this setting determined by the carrier? So it's not a setting in the phone, but when you call in to your voicemail you can change your options?

  • Xcontrol: performance question (again)

    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
    Is there a way to reduce the CPU load when using XControls?
    If there isn't and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
    Regards,
    soranito
    Message Edited by soranito on 04-04-2010 08:16 PM
    Message Edited by soranito on 04-04-2010 08:18 PM
    Attachments:
    XControl_performance_test.zip ‏60 KB

    soranito wrote:
    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
    Okay, I think I understand the question now. You want to know why an equivalent XControl boolean consumes 10x more CPU resources than the LV base package boolean?
    Okay, try opening the project I replied with yesterday. I don't have access to LV at my desk, so let's try this. Open up your XControl facade.vi. Notice how I separated your data event into two events? Go to the data change event: when looping back the action, set isDataChanged (part of the data change cluster) to FALSE. For the data input (the one displayed on your facade.vi front panel), set that isDataChanged to TRUE. This will limit the number of times the facade loops. It will not drop your CPU from 10% to 0%, but it should drop a little, just enough to give you a short-term solution. If that doesn't work, just play around with the loopback statement. I can't remember the exact method.
    Yeah, I agree an XControl shouldn't be over-consuming system resources. I think XControl is still in a primitive form and I'm not sure if NI is planning on investing more time in bug fixes or enhancements. IMO, I don't think XControl is quite ready for primetime yet. Just too many issues that need improvement.
    Message Edited by lavalava on 04-06-2010 03:34 PM

  • OSB Proxy calling another proxy - performance question

    My client wants to standardize their logging into a common proxy logging service (LoggingService).
    The idea is that all their services can call this service to perform standard logging.
    To implement this, we created a simple proxy based on any XML. It doesn't have a real business service behind it. Instead, it's simply a proxy configured with a Log action in the request pipeline to log $body.
    Their other services call this from their request and response pipelines using the "Service Callout" Action.
    Does anyone know what the performance impact of doing this would be? When I call the LoggingService proxy via the service callout, is another network connection created or is OSB optimized to do a local request?

    I wasn't familiar with the publish action vs the service callout. I just looked it up in the developer docs. Looks like it would work too. I'm just not clear on what the difference is between the publish action and the service callout and when you would use one vs the other.
    compare Routing action versus Service Callout action versus Publish action?
    How do I know publish is more efficient than service callout? Enable monitoring (under Operational Settings) at the action level in your proxy service having both publish and service callout. You may refer to section "26.6 Viewing Service Monitoring Information" -
    http://download.oracle.com/docs/cd/E17904_01/doc.1111/e15867/monitoring_console.htm#CACDCDAH
    Regards,
    Anuj

  • Controlfile on ASM performance question

    We are seeing controlfile enqueue performance spikes. The options under consideration are to move the controlfile to a separate diskgroup (which needs an outage), or to add some disks (from different LUNs; I prefer this approach) to the same diskgroup. It seems like a slow disk is causing this issue...
    Second question: can the snapshot controlfile be placed on ASM storage?

    Following points may help:
    - Separating the controlfile into another diskgroup may make things even worse if the total number of disks in the new diskgroup is insufficient.
    - These controlfile contention issues usually have nothing to do with the storage throughput you have, but rather with the number of operations requiring different levels of exclusion on the controlfiles.
    - Since multiple copies of the controlfile are updated concurrently, a possible problem is sometimes that the secondary copy of the controlfile is slower than the other. Please check that this is not the issue (different tiers of storage may cause such problems).
    Regards,
    Husnu Sensoy

  • Cursor Performance Questions

    I have a cursor on a complex SQL that goes against seven large tables (10M rows). When I run the SQL it returns rows quickly even when the final result set is a million rows. (So first_rows does not seem to have any impact).
    When I run it via the cursor, it can take 10-15 minutes before the cursor gets the first 20K rows. I am doing a bulk collect on the cursor, inserting the rows and committing them. Looking at the session via Toad I see it's on the select statement most of the time. Why does it take 10-15 minutes to get the first 20K rows?
    As the cursor runs, each block of 20K rows is obtained faster and faster, until the last five batches of 20K come in at only 5-10 secs each.
    So I have two questions:
    Why is the cursor so slow to get started, while the SQL shows results within seconds?
    Once started, why does it appear to speed up towards the end?
    I have seen this happen many times (> 20), so it's not a one-off occurrence.
    I am on Oracle 10.2.0.2
    R

    When you "run the SQL", are you talking about executing the SQL statement in a GUI that returns the first, say, 50 rows (the SQL Developer default), and then waits to fetch the next 50 rows? If so, are you sure that you are measuring things accurately? If, for example, it take 5 seconds to return the first 50 rows, it might reasonably take 400 times as long (2000 seconds or 33.3 minutes) to fetch 20,000 rows. I'm wildly guessing at numbers here, of course, but it would be helpful if you could provide this sort of back of the envelope calculation.
    Are you getting the same query plan when you run the SQL statement directly and when the SQL statement is executed in your PL/SQL block (I'm guessing that's what you mean by "via the cursor"-- every SQL statement opens a cursor, so you're always fetching from a cursor)?
    Is it possible that the query plan and the physical ordering of data on disk has an impact? If you are doing a full table scan, for example, and it so happens that there are 50 rows at the beginning of the table that match your criteria, then you scan millions of blocks to get the next 50 rows that match the criteria, returning the first 50 rows will be very fast while returning the second set of 50 rows will be much slower.
    And, since you are concerned about performance, is there a reason that you are using PL/SQL at all? If you're simply selecting data from one table and inserting it into another, it will be more efficient to do that in a single SQL statement rather than resorting to PL/SQL collections.
    Justin

  • Performance question on looping through blocks and items (Forms 10.1.2.3)

    Hi all,
    I'm back again in the Forms forum :) !!! and I'm working on a new and very interesting project
    version used : Forms [32 bits] Version 10.1.2.3.0 (Production)
    A little question for gurus :
    On former projects I used to loop over blocks and items as shown below to do various things, such as displaying buttons or showing canvases or different VAs depending on the user or scenario.
    PROCEDURE FRM_BLK_ITM_LOOP IS
      v_curblk varchar2(90); -- current block
      v_curitm varchar2(90); -- current item
    BEGIN
      v_curblk := get_form_property(:SYSTEM.CURRENT_FORM, first_block); -- get the first block of the form
      LOOP
        v_curitm := v_curblk||'.'||get_block_property(v_curblk, first_item); -- get the first item of the block
        WHILE v_curitm != v_curblk||'.'||get_block_property(v_curblk, last_item)
        LOOP -- loop while the item is not the last one of the block
          v_curitm := v_curblk||'.'||get_item_property(v_curitm, nextitem); -- get the next item
          if get_item_property(v_curitm, <some property>) = 'TRUE' then
            null; --- I can do something... or add more conditions (if/then etc.)
          end if;
        END LOOP;
        EXIT WHEN v_curblk = get_form_property(:SYSTEM.CURRENT_FORM, last_block); -- exit when we reach the last block
        v_curblk := get_block_property(v_curblk, nextblock); -- move on to the next block
      END LOOP;
    END;
    In my current project we work on quite large forms which can have a considerable number of blocks and items.
    And we must be very careful regarding performance issues as these forms are accessed via LAN and WAN.
    So my question :
    This method seems to be quite efficient, as it goes through the block and item sequences as they are defined in the Builder, compared to go_block -> go_item -> do_something, which can easily turn into nightmare programming.
    But I don't really know about network roundtrips with this kind of method.
    Is everything done on the app server and then fetched to the client?
    Which block-level and item-level triggers can be fired during the execution of the loop? And so on...
    Thanks in advance for your advice on this matter.
    Jean-Yves

    Hmmm, I have to say I never bothered to check whether Forms is in socket mode or not; I enabled the network statistics, counted the roundtrips and looked for ways to get them lower (my old friend Wireshark also did a good job here) ;). But regarding the note that Forms 6i uses socket connections by default, this might apply to 10g too (or the enhancement request was approved, who knows).
    Frankly I am not entirely sure what socket mode means; I guess it's the mode in which the Forms applet talks to the Forms runtime, whether stateful (via sockets) or stateless (via HTTP/HTTPS), but this is just a wild guess, and I can't find information on it quickly. I also enabled networkStats on my Developer Suite only, so I cannot tell if you can enable it on a full-fledged Application Server.
    Anyway, as said, I just counted the roundtrips and looked at where I could avoid them when I made our application ready for the WAN.
    Another useful tool was Shunra VE Desktop, which I used to simulate low-bandwidth networks with high latency; I installed it on a virtual XP, started the application and tested how the application performs. If something looked odd, I looked behind the scenes, built a little test form based on the code behind it and tried out various things. Very often you can take advantage of the event bundling Forms seems to do when you use several set_xyz calls, as Francois also noted; e.g.
    set_custom_property('bean_item', 1, 'PROPERTY', prop);
    set_custom_property('bean_item', 1, 'PROPERTY', prop);
    set_custom_property('bean_item', 1, 'PROPERTY', prop);
    set_custom_property('bean_item', 1, 'PROPERTY', prop);
    set_custom_property('bean_item', 1, 'PROPERTY', prop);
    vRet := get_custom_property('bean_item', 1, 'PROPERTY');
    This will most certainly cause just 1 roundtrip; but if you use get_custom_property in the middle of the set_custom_property calls you will encounter 2 roundtrips, as Forms needs to synchronize with the Forms applet (you get a value from the bean, so the Forms runtime needs a response), whereas set_custom_property is a one-way street which can be fired off simultaneously. The same applies to fbean.invoke and fbean.invoke_bool, fbean.invoke_char and the like. Of course, if you use more than one get_custom_property in this way the roundtrips will increase accordingly.
    If you want to make use of event bundling, make sure you fire off as many set_xyz calls as you can before forcing Forms to synchronize (e.g. with get_xyz, synchronize, create_timer, ...).
    cheers

  • STATSPACK Performance Question / Discrepancy

    I'm trying to troubleshoot a performance issue and I'm having trouble interpreting the STATSPACK report. It seems like the STATSPACK report is missing information that I expect to be there. I'll explain below.
    Header
    STATSPACK report for
    Database    DB Id    Instance     Inst Num  Startup Time   Release     RAC
    ~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
              2636235846 testdb              1 30-Jan-11 16:10 11.2.0.2.0  NO
    Host Name             Platform                CPUs Cores Sockets   Memory (G)
    ~~~~ ---------------- ---------------------- ----- ----- ------- ------------
         TEST             Microsoft Windows IA (     4     2       0          3.4
    Snapshot       Snap Id     Snap Time      Sessions Curs/Sess Comment
    ~~~~~~~~    ---------- ------------------ -------- --------- ------------------
    Begin Snap:       3427 01-Feb-11 06:40:00       65       4.4
      End Snap:       3428 01-Feb-11 07:00:00       66       4.1
       Elapsed:      20.00 (mins) Av Act Sess:       7.3
       DB time:     146.39 (mins)      DB CPU:       8.27 (mins)
    Cache Sizes            Begin        End
    ~~~~~~~~~~~       ---------- ----------
        Buffer Cache:       192M       176M   Std Block Size:         8K
         Shared Pool:       396M       412M       Log Buffer:    10,848K
    Load Profile              Per Second    Per Transaction    Per Exec    Per Call
    ~~~~~~~~~~~~      ------------------  ----------------- ----------- -----------
          DB time(s):                7.3                2.0        0.06        0.04
           DB CPU(s):                0.4                0.1        0.00        0.00
           Redo size:            6,366.0            1,722.1
       Logical reads:            1,114.6              301.5
       Block changes:               35.8                9.7
      Physical reads:               44.9               12.1
    Physical writes:                1.5                0.4
          User calls:              192.2               52.0
              Parses:              101.5               27.5
         Hard parses:                3.6                1.0
    W/A MB processed:                0.1                0.0
              Logons:                0.1                0.0
            Executes:              115.1               31.1
           Rollbacks:                0.0                0.0
        Transactions:                3.7
    As you can see, a significant amount of time was spent in database calls (DB time) with relatively little time on CPU (DB CPU). Initially that made me think there were some significant wait events.
    Top 5 Timed Events                                                    Avg %Total
    ~~~~~~~~~~~~~~~~~~                                                   wait   Call
    Event                                            Waits    Time (s)   (ms)   Time
    log file sequential read                        48,166         681     14    7.9
    CPU time                                                       484           5.6
    db file sequential read                         35,357         205      6    2.4
    control file sequential read                    50,747          23      0     .3
    Disk file operations I/O                        16,518          18      1     .2
              -------------------------------------------------------------
    However, looking at the Top 5 Timed Events I don't see anything out of the ordinary given my normal operations. The log file sequential read may be a little slow, but it doesn't make up a significant portion of the execution time.
    Based on an Excel/VB spreadsheet I wrote, which converts STATSPACK data to graphical form, I suspected that there was a wait event not listed here. So I decided to query the data directly. Here is the query and result.
    SQL> SELECT wait_class
      2       , event
      3       , delta/POWER(10,6) AS delta_sec
      4  FROM
      5  (
      6          SELECT syev.snap_id
      7               , evna.wait_class
      8               , syev.event
      9               , syev.time_waited_micro
    10               , syev.time_waited_micro - LAG(syev.time_waited_micro) OVER (PARTITION BY syev.event ORDER BY syev.snap_id) AS delta
    11          FROM   perfstat.stats$system_event syev
    12          JOIN   v$event_name                evna  ON  evna.name     = syev.event
    13          WHERE  syev.snap_id IN (3427,3428)
    14  )
    15  WHERE delta > 0
    16  ORDER BY delta DESC
    17  ;
    WAIT_CLASS                EVENT                                                                        DELTA_SEC
    Idle                      SQL*Net message from client                                                  21169.742
    Idle                      rdbms ipc message                                                            19708.390
    Application               enq: TM - contention                                                       7199.819
    Idle                      Space Manager: slave idle wait                                             3001.719
    Idle                      DIAG idle wait                                                             2382.943
    Idle                      jobq slave wait                                                            1258.829
    Idle                      smon timer                                                                 1220.902
    Idle                      Streams AQ: qmn coordinator idle wait                                      1204.648
    Idle                      Streams AQ: qmn slave idle wait                                            1204.637
    Idle                      pmon timer                                                                 1197.898
    Idle                      Streams AQ: waiting for messages in the queue                              1197.484
    Idle                      Streams AQ: waiting for time management or cleanup tasks                    791.803
    System I/O                log file sequential read                                                    681.444
    User I/O                  db file sequential read                                                     204.721
    System I/O                control file sequential read                                                 23.168
    User I/O                  Disk file operations I/O                                                     17.737
    User I/O                  db file parallel read                                                        14.536
    System I/O                log file parallel write                                                       7.618
    Commit                    log file sync                                                                 7.150
    User I/O                  db file scattered read                                                        3.488
    Idle                      SGA: MMAN sleep for component shrink                                          2.461
    User I/O                  direct path read                                                              1.621
    Other                     process diagnostic dump                                                       1.418
    ... snip ...
    So based on the above it looks like there was a significant amount of time spent in enq: TM - contention.
    Question 1
    Why does this wait event not show up in the Top 5 Timed Events section? Note that this wait event is also not listed in any of the other wait events sections either.
    Moving on, I decided to look at the Time Model Statistics
    Time Model System Stats  DB/Inst: testdb  /testdb    Snaps: 3427-3428
    -> Ordered by % of DB time desc, Statistic name
    Statistic                                       Time (s) % DB time
    sql execute elapsed time                         8,731.0      99.4
    PL/SQL execution elapsed time                    1,201.1      13.7
    DB CPU                                             496.3       5.7
    parse time elapsed                                  26.4        .3
    hard parse elapsed time                             21.1        .2
    PL/SQL compilation elapsed time                      2.8        .0
    connection management call elapsed                   0.6        .0
    hard parse (bind mismatch) elapsed                   0.5        .0
    hard parse (sharing criteria) elaps                  0.5        .0
    failed parse elapsed time                            0.0        .0
    repeated bind elapsed time                           0.0        .0
    sequence load elapsed time                           0.0        .0
    DB time                                          8,783.2
    background elapsed time                             87.1
    background cpu time                                  2.4
    Great, so it looks like I spent >99% of DB Time in SQL calls. I decided to scroll to the SQL ordered by Elapsed time section. The header information surprised me.
    SQL ordered by Elapsed time for DB: testdb    Instance: testdb    Snaps: 3427 -3
    -> Total DB Time (s):           8,783
    -> Captured SQL accounts for    4.1% of Total DB Time
    -> SQL reported below exceeded  1.0% of Total DB Time
    If I'm spending > 99% of my time in SQL, I would have expected the captured % to be higher.
    Question 2
    Am I correct in assuming that a long running SQL that started before the first snap and is still running at the end of the second snap would not display in this section?
    Question 3
    Would that answer my wait event question above? I.e., are wait events not reported until the action that is waiting (the execution of a SQL statement, for example) is complete?
    So I looked a few snaps past what I have posted here. I still haven't determined why the enq: TM - contention wait is not displayed anywhere in the STATSPACK reports. I did end up finding an interesting PL/SQL block that may have been causing the issues. Here is the SQL ordered by Elapsed time for a snapshot that was taken an hour after the one I posted.
    SQL ordered by Elapsed time for DB: testdb    Instance: testdb    Snaps: 3431 -3
    -> Total DB Time (s):           1,088
    -> Captured SQL accounts for ######% of Total DB Time
    -> SQL reported below exceeded  1.0% of Total DB Time
      Elapsed                Elap per            CPU                        Old
      Time (s)   Executions  Exec (s)  %Total   Time (s)  Physical Reads Hash Value
      26492.65           29     913.54 ######    1539.34             480 1013630726
    Module: OEM.CacheModeWaitPool
    BEGIN EMDW_LOG.set_context(MGMT_JOB_ENGINE.MODULE_NAME, :1); BEG
    IN MGMT_JOB_ENGINE.process_wait_step(:2);END; EMDW_LOG.set_conte
    xt; END;
    I'm still not sure if this is the problem child or not.
    I just wanted to post this to get your thoughts on how I correctly/incorrectly attacked this problem and to see if you can fill in any gaps in my understanding.
    Thanks!

    Centinul wrote:
    I'm still not sure if this is the problem child or not.
    I just wanted to post this to get your thoughts on how I correctly/incorrectly attacked this problem and to see if you can fill in any gaps in my understanding.
    I think you've attacked the problem well.
    It has prompted me to take a little look at what's going on, running 11.1.0.6 in my case, and something IS broken.
    The key predicate in statspack for reporting top 5 is:
                      and e.total_waits         > nvl(b.total_waits,0)
    In other words, an event gets reported if total_waits increased across the period.
    So I've been taking snapshots of v$system_event and looking at 10046 trace files at level 8. The basic test was as simple as:
    Session 1: lock table t1 in exclusive mode
    Session 2: lock table t1 in exclusive mode
    About three seconds after session 2 started to wait, v$system_event incremented total_waits (for the "enq: TM - contention" event). When I committed in session 1 the total_waits figure did not change.
    Now do this after waiting across a snapshot:
    We start to wait; after three seconds we record a wait; a few minutes later perfstat takes a snapshot.
    30 minutes later "session 1" commits and our wait ends; we do not increment total_waits, but we do record 30+ minutes of wait time.
    30 minutes later perfstat takes another snapshot.
    The total_waits has not changed between the start and end snapshots even though we have added 30 minutes to "enq: TM - contention" in the interim.
    The statspack report loses our 30 minutes from the Top N.
    It's a bug - raise an SR.
    Edit: The AWR will have the same problem, of course.
    Regards
    Jonathan Lewis
    Edited by: Jonathan Lewis on Feb 1, 2011 7:07 PM

  • Copying arrays, performance questions

    Hello there
    The JDK offers several ways to copy arrays so I ran some experiments to try and find out which would be the fastest.
    I was measuring the time it takes to copy large arrays of integers. I wrote a program that allocates arrays of various sizes, and copy them several times using different methods. Then I measured the time each method took using the NetBeans profiler and calculated the frequencies.
    Here are the results I obtained (click for full size):  http://i.share.pho.to/dc40172a_l.png
    (what I call in-place copy is just iterating through the array with a for loop and copying the values one by one)
    I generated a graph from those values:  http://i.share.pho.to/049e0f73_l.png
    A zoom on the interesting part: http://i.share.pho.to/a9e9a6a4_l.png
    According to these results, clone() becomes faster at some point (not sure why). I've re-run these experiments a few times and it seems to always happen somewhere between 725 and 750.
    Now here are my questions:
    - Is what I did a valid and reliable way to test performance, or are my results completely irrelevant? And if not, what would be a smarter way to do this?
    - Will clone be faster than arraycopy past 750 items on any PC, or will these results be influenced by other factors?
    - Is there a way to write a method that would copy the array with optimal performance using clone and arraycopy, such that the cost of using it would be insignificant compared to systematically using one method over the other?
    - Any idea why clone() can become faster for bigger arrays? I know arraycopy is a native method. I didn't try to look into what it does exactly, but I can't imagine it's doing anything more complicated than copying elements from one location in memory to another... How can another method be faster than that?
    (just a reminder: I'm copying primitives, not objects)
    Thanks!
    Message was edited by: xStardust! Added links, mr forum decided to take away my images

    yeh, everyone thinks that at some point. it relies, however, on you being perfect and knowing everything in advance, which you aren't, and don't (no offence, the same applies to all of us!). time and time again, people do this up-front and discover that what they thought would be a bottleneck, isn't. plus, the JVM is much smarter at optimizing code than you think: trust it. the best way to get good performance out of your code is to write simple, straightforward good OO code. JVMs are at a point now where they can optimize java to outperform equivalent C/C++ code (no, really) but since they're written by human beings, who have real deadlines and targets, the optimizations that make it into a release are the most common ones. just write your application, and then see how it performs. trust me on this
    have a read of http://java.sun.com/developer/technicalArticles/Interviews/goetz_qa.html for more info and a chance to see where I plagiarized that post from :-)
    Thanks for that link you gave me :)
    Was useful to read.
    About the time and money of programming: that is not really an issue for me atm, since I'm doing this project for a company, but through school (it's like working, but not for money).
    Of course it should not take too long, but I have time to figure out a lot of things.
    For my next project I will try to focus some more on building first, optimizing performance later (if it can be done with a good margin, since it seems the biggest bottlenecks are not the code but things outside the code).
    @promethuuzz
    The idea was to put collection objects (an object that handles the initialized ORM objects) in the request and pass them along to the JSP (this is all done through a customized MVC model).
    So I wanted to see if this method was performance-heavy, so that I won't end up writing the entire app and finding out half of it is very performance-heavy :)
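    For anyone who wants to repeat the comparison outside a profiler, here is a rough sketch (class and method names are invented, and a plain System.nanoTime loop is used rather than a harness such as JMH that controls warm-up and dead-code elimination, so treat the numbers as indicative only) timing a manual loop, System.arraycopy and clone() around the 725-750 crossover observed above:
    public class CopyComparison {
        static int[] copyWithLoop(int[] src) {
            int[] dst = new int[src.length];
            for (int i = 0; i < src.length; i++) dst[i] = src[i];
            return dst;
        }

        static int[] copyWithArraycopy(int[] src) {
            int[] dst = new int[src.length];
            System.arraycopy(src, 0, dst, 0, src.length);
            return dst;
        }

        public static void main(String[] args) {
            int reps = 100_000;
            for (int size : new int[] {100, 500, 725, 750, 1000, 5000}) {
                int[] src = new int[size];
                // Warm up so the JIT has compiled all three paths before timing.
                for (int i = 0; i < 20_000; i++) { copyWithLoop(src); copyWithArraycopy(src); src.clone(); }

                long t0 = System.nanoTime();
                for (int i = 0; i < reps; i++) copyWithLoop(src);
                long t1 = System.nanoTime();
                for (int i = 0; i < reps; i++) copyWithArraycopy(src);
                long t2 = System.nanoTime();
                for (int i = 0; i < reps; i++) src.clone();
                long t3 = System.nanoTime();

                System.out.printf("size %5d  loop %7d us  arraycopy %7d us  clone %7d us%n",
                        size, (t1 - t0) / 1000, (t2 - t1) / 1000, (t3 - t2) / 1000);
            }
        }
    }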

  • Re: another build question! (sorry lol)

    Hi all
    right, after days and days of researching all the excellent articles on here I've had to write a post! So apologies for going over old ground.
    I'm going to build a new CS5.5 rig (having had a gutsful of Apple and their FCPX fiasco, it's back to PC!),
    so although I appreciate the overclocked 990X is probably the best option, bang for buck is leading me down the overclocked Sandy Bridge i7 2600K option on a P67 mobo with 16 GB of RAM (with the option to take it to 32 down the line when the chips are out).
    Anyway, I'm sorted on chip and mobo (MSI Big Bang Marshal P67), Nvidia 570 etc; it's the drives I'm struggling on! I edit AVCHD video and some After Effects, a small amount of 3D, and in FCP I always transcoded everything to ProRes. Now on CS5.5 it looks like real-time performance is possible with high-end hardware.
    So mobo, RAM and chip aside, I'm unsure on my HDD config. I've not really got the cash to go crazy with RAID controllers etc, but I understand the need for separate drives, and tbh I might even go down the Cineform route as a ProRes alternative (probably avoiding the hassles of drive speed with AVCHD).
    I'm thinking, as SSDs are now dropping in price and OCZ are producing these 500 MB/s read/write SATA 6Gb/s 120 GB drives for a reasonable price, would 3 of these drives (one for OS, one for media, one for scratch disk) be a good setup, or is it a waste of cash and should I RAID 0 from the BIOS/mobo?
    I appreciate that 120 GB drives for media etc are small, but I would take project media from another much larger backup drive and just use the 3x SSD setup as working disks for editing and the OS. Once a project is over, clear out the drives to the larger backup and start a new project!
    It's either that or I go SSD as the boot drive, with some sort of cheap RAID setup for my scratch disk and media drives? Problem is, if I do that, from what I've read (brain dead now) I would be best off with 2x RAID 0 as scratch disk and original media respectively when working with AVCHD.
    HOWEVER, on my mobo there are only 4 SATA 6Gb/s ports, so if I use one for the SSD boot drive, I'm left with 3 SATA 6Gb/s ports and another 4 SATA 3Gb/s ports to RAID on? How does this work? Is there any point in getting the SATA 6Gb/s drives if one would be striped with a SATA 6Gb/s drive plugged into a SATA 3Gb/s port? (This was my reasoning behind using 3 small SATA 6Gb/s SSDs plugged into the SATA 6Gb/s ports and the rest of the SATA 3Gb/s ports for storage and backups!)
    Confused lol! I just want the overclocked Sandy Bridge system with a decent GPU card and as much RAM as possible at present, but I'm thinking my bottleneck will be in the HDD config! Any suggestions are much appreciated! I'm not that techy, so whilst I have read all the articles I am more confused now (plus I'm normally a Mac user, so it's usually out-of-the-box configuration!). For what it's worth, I'm looking to purchase the parts from Scan UK; total build cost including a reasonable screen is £2,000.
    many thanks

    You have a limited budget, especially in the UK, but then don't we all?
    Going for the 980X will triple the cost of the CPU, but even when editing AVCHD material the gains are not sizable enough IMO to justify that cost differential. Add to that you will need 24 GB instead of 16 GB and that carries an additional price tag. Both factors will easily move you out of budget range if you want to have a number of disks and possibly a raid controller.
    Did you read my article "To Raid or not to Raid, that is the question"? It can be found under the Overview tab at the top of the page. (Currently responding from abroad on my notebook and not having the bookmarks available for easy linking.) http://forums.adobe.com/thread/525263
    With media and projects I would advise against a raid0, because of the lack of redundancy. For pagefile, media cache and previews (scratch disks) raid0 is quite OK. They will be recreated if needed. The performance gain from a raid0 for media and projects over a parity raid is easily offset by the time spent on making backups. For parity raids do not use WD Caviar Blacks, but look at the Hitachi 7K3000 line of disks.
    The question of Sandy Bridge versus the old X58 platform is essentially one of 'which limitations are acceptable to me'.
    The Sandy Bridge is a great processor and at least the equal of the old i7-9xx quad cores. However, the platform, the chipset, has its shortcomings in terms of PCIe lanes. Whether that is relevant to you, only you can decide. But hey, we would be in serious trouble if Intel did not manage some progress in the two years from the i7-920 to the i7-2600K. So of course the i7-2600K shows much more potential than the almost retired 920; it is the chipset for the Sandy Bridge that is 'flawed' in comparison to the X58, but that is no surprise, since the Sandy Bridge is a 'middle-of-the-road' platform and the X58 was a 'high performance' platform.
    BFTB-wise I think that within your budget limits, you should look at the i7-2600K, but with the best disk setup you can afford.

  • ANOTHER formats question...

    I've been using Streamclip to convert MPEG-2s to QuickTime. I've experimented with Apple MPEG-4 and Sorenson 3. Not the JPEG format so much.
    My question: which is the best format for keeping the file size about the same and losing the least video quality? These files are for multiple purposes. Some will be used on the internet and others will find their way onto dvds.
    Is the option in Streamclip to convert to DV good for Final Cut and QuickTime?
    I realize the short answer to my question is that it totally depends on what I want the file for. But what I'm really looking for is a primer on the differences between Sorenson, MP4, JPEG-A(B), etc.
    I'm moving toward using H.264 for everything, but not everyone has QT7 - you know?

    My question: which is the best format for keeping the file size about the same and losing the least video quality? These files are for multiple purposes. Some will be used on the internet and others will find their way onto dvds.
    Everybody has their own picks. However, you might also want to consider the fact that not everybody has upgraded to the latest QuickTime Player.
    My general pick would be Sorenson Video 3 for general purposes. Sorenson Video 3 performed better (in terms of quality) than MPEG-4 when you limit the bitrate to some small number (i.e. a small file size). As for DVDs, you might want to export in another format.
    Is the option from streamclip to DV good for Final Cut and Quicktime?
    It is, assuming the original file is good quality.
    But what I'm really looking for is a primer on the diff between Sorenson, MP4, JPEG-A(B)....etc.
    I made this page a while back; you might find it useful.
    http://mac.sillydog.org/qt/compare.php
    Hope this helps.
    Disclaimer: I am not selling any products, but there are Google Ads on that page.
