Poor performance of 6630M in portrait mode

I've upgraded my Mac Mini 2009 with Nvidia 9400 to Mac Mini 2011 with Radeon 6630M.
Everything is fine in landscape mode; however, I normally have it plugged into a Dell 24" display in portrait mode (using the bundled HDMI-DVI adapter).
When in portrait mode (90-degree rotation selected in the display preferences), graphics are choppy. I notice it in games (Zen Pinball 2, Pinball Arcade) and even when scrolling pages in Chrome. Games are not playable at all, and the flickering and jitter when scrolling in Chrome are annoying.
This was not a problem on the Mac Mini 2009, so I guess it's a bug in the video driver for the Radeon 6630M. I'm running OS X 10.8 with all updates installed.
Any help would be appreciated.

I'll also submit feedback to Apple, as portrait mode is a little choppy for me too on my mid-2010 Mac Mini with Nvidia GeForce 320M graphics. Hopefully it's just an OS / driver issue running Mountain Lion 10.8.3.
Whoever has similar issues, please submit feedback to Apple as well. Thanks!
http://www.apple.com/feedback/macosx.html

Similar Messages

  • Terrible performance on 2nd monitor in portrait mode

    I have the newest model 27" iMac. I have a second monitor plugged in via Mini DisplayPort. Everything was fine until the Mountain Lion upgrade. Now the second monitor, when in portrait mode, stutters very badly when I swipe between desktops. It is very annoying. It is fine in landscape mode, but I use it in portrait mode and it's terrible.

    You need to submit the bug to Apple; I believe some people have complained about this during testing, from what I have seen on some Apple fan sites.

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool, the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
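    For illustration, here is roughly how such a single-file layout with a secondary index is wired up using the BDB C++ API: Db::open takes a database name so several databases can share one physical file, and Db::associate keeps a secondary in sync with its primary. The url_rec_t struct and the variable names below are simplifications made up for this sketch, not SSW's actual record format:

    #include <cstdint>
    #include <db_cxx.h>

    // Simplified record layout for the sketch -- not SSW's real on-disk format.
    struct url_rec_t {
        uint64_t hits;
        uint64_t xfer;
        char     url[2048];
    };

    // Secondary key extractor: index URL records by their hit count.
    // (A real implementation would store the count in a byte order that sorts numerically.)
    static int get_url_hits(Db *, const Dbt *, const Dbt *data, Dbt *result)
    {
        const url_rec_t *rec = static_cast<const url_rec_t *>(data->get_data());
        result->set_data(const_cast<uint64_t *>(&rec->hits));
        result->set_size(sizeof(rec->hits));
        return 0;
    }

    int main()
    {
        // Private, single-process environment, as described in the Testbed section below.
        DbEnv env(0);
        env.open("./state", DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);

        // Primary and secondary databases live in the same physical file;
        // the third argument to Db::open names the database inside that file.
        Db urls(&env, 0);
        urls.open(nullptr, "webalizer.db", "urls", DB_BTREE, DB_CREATE, 0644);

        Db urls_hits(&env, 0);
        urls_hits.set_flags(DB_DUPSORT);   // many URLs can share the same hit count
        urls_hits.open(nullptr, "webalizer.db", "urls.hits", DB_BTREE, DB_CREATE, 0644);

        // Keep urls.hits in sync with urls automatically.
        urls.associate(nullptr, &urls_hits, get_url_hits, 0);

        // ... load records, produce reports ...

        urls_hits.close(0);
        urls.close(0);
        env.close(0);
        return 0;
    }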
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s):
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, when just trying to save the database.
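    For reference, the traversal in question is just a cursor walk over the primary or a secondary database, roughly like the sketch below (names as in the earlier sketch; error handling omitted):

    #include <db_cxx.h>

    // Walk a secondary index such as urls.hits; for a secondary database the
    // cursor returns the secondary key plus the associated primary record.
    void walk_by_hits(Db &urls_hits)
    {
        Dbc *cur = nullptr;
        urls_hits.cursor(nullptr, &cur, 0);

        Dbt key, data;   // key = hit count, data = primary URL record
        while (cur->get(&key, &data, DB_NEXT) == 0) {
            // format one report line from the record in 'data'
        }
        cur->close();
    }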
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called the Win32 ReadFile function 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
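    For reference, the cache, page-size and direct/dsync I/O knobs mentioned above are set on the environment and database handles before they are opened, roughly as below. The 256MB cache and 64K page size are arbitrary example values for the sketch, not the settings from these tests:

    #include <db_cxx.h>

    // Environment-level knobs (must be set before DbEnv::open).
    void configure_env(DbEnv &env)
    {
        // Give the BDB cache a fixed budget: 256MB in a single region.
        env.set_cachesize(0, 256 * 1024 * 1024, 1);

        // Bypass the OS buffer cache for database files (the DB_DIRECT_DB experiment) ...
        env.set_flags(DB_DIRECT_DB, 1);
        // ... or request O_DSYNC-style writes instead (the DB_DSYNC_DB experiment):
        // env.set_flags(DB_DSYNC_DB, 1);

        env.open("./state", DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);
    }

    // Per-database knob (must be set before Db::open creates the database).
    void configure_db(Db &db)
    {
        db.set_pagesize(64 * 1024);   // values from the 8K default up to 64K were tried
    }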

    I have been able to improve processing speed 6-8 times with these two techniques:
    1. A separate trickle thread was created that periodically calls DbEnv::memp_trickle. This works especially well on multi-core machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.

    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
    1. What percentage of clean pages did you specify?
    2. At what interval was your thread calling memp_trickle?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
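    For anyone wanting to try the same two changes, a minimal sketch of both follows. The 10% clean-page target and the one-second interval are placeholders (exactly the numbers Nishith is asking about), not values taken from the tests above:

    #include <atomic>
    #include <chrono>
    #include <thread>
    #include <db_cxx.h>

    std::atomic<bool> done{false};

    // Technique 1: a dedicated thread that keeps a slice of the cache clean
    // so that user threads rarely have to flush dirty pages themselves.
    void trickle_loop(DbEnv *env)
    {
        while (!done) {
            int wrote = 0;
            env->memp_trickle(10 /* % clean pages */, &wrote);
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }
    // Usage: std::thread t(trickle_loop, &env); ... done = true; t.join();

    // Technique 2: build a secondary index only when the reports need it, instead
    // of maintaining it on every Db::put. DB_CREATE tells associate() to populate
    // the (already opened, empty) secondary from the primary in one pass.
    void build_hits_index(Db &urls, Db &urls_hits,
                          int (*get_hits)(Db *, const Dbt *, const Dbt *, Dbt *))
    {
        urls.associate(nullptr, &urls_hits, get_hits, DB_CREATE);
    }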

  • After Effects CS5.5 - Poor performance

    The tech info:
    After Effects 5.5 Version 10.5.1.2
    Windows 7 x64 SP1
    The problem persists when no video is present
    No error messages are being shown
    Attempting to "Play" in preview or attempting to "RAM Preview" shows the largest problems; seeking in the timeline causes the entire program to hang as well
    Not on this computer (my computer specs below)
    Chrome, Skype, Steam, Task Manager (closing all of these does not change the problem)
    No third-party codecs are installed
    My computer is as follows:
    Core i7 920 (Bloomfield) OC 3.2GHz (4 cores, 8 logical threads)
    XFX 7970 3GB
    15GB DDR3 1333MHz (OC 1666)
    120GB SSD (280MB/s read | 240MB/s write @ 4K); 25GB is free
    Zero external drives are connected
    I am using no 3rd party IO hardware of any kind
    The problem persists when OpenGL is on or off
    The problem persists whether "Render Multiple Frames" is enabled or not
    My problem is that when I set up a project at 1920x1080 and just ask it to play a black screen, I get ~12fps (not real-time). If I set the preview quality down to 1/2 I get ~10fps, and at 1.5% I get ~8fps. While this is occurring my CPU usage sits hard at 12% (one logical CPU pegged). When seeking, the entire program hangs for 2 to 3 seconds while it grabs and "renders" the next empty frame. Adding effects and clips does not change this performance in any way; I can have several clips placed in the timeline and performance stays at a constant 12fps regardless.
    If anyone has any ideas I'm open. I'd love to get After Effects working, but in this state I spend 75% of my time waiting for the program to "start responding" after asking it to seek.

    There is almost no overhead in "rotating" the monitors; that's all native to the GPU. There is no performance difference between 5760x1080 and 3240x1920. If there is any overhead, it's going to be in the de-stitching of the backbuffers. But this problem persists even when I am not in an Eyefinity display group. I double-checked by rotating out of portrait mode as well: same performance issue. Figured this was stupid, disabled my other 2 monitors and checked with just one: same problem, no framerate change whatsoever. Thought it might be because of HDMI; nope, same thing on DVI.
    I don't know what kind of computer still has issues with simple translations of pixels. If this were the early 2000s I could understand where you're coming from, but it's not, so I am confused.
    Also, if it were having trouble with "translating" the pixels, the following would have happened, and below is why I believe that is not possible.
    1. Because After Effects is running as a different process, the "overhead" that you speak of would be handled by the other 7 logical threads or the 88% of the processor that is sitting free.
    2. If the "translation" was not handled by the CPU and was GPU-bound, the bottleneck would have to occur within the GPU's pixel shader array or vertex processor, both of which are sitting clocked down and not under load. When OpenGL / DirectX / OpenCL is used, the GPU clocks up and comes under load; at that point the bottleneck could occur. But this is not the case.
    3. After Effects has no knowledge of a "rotated" or "Eyefinity" display; in Windows, the backbuffer is handled the same as for any other rotation of the monitor, as a simple 2D array of pixels.
    I only wish it were so simple as using just 1 monitor.
    Any other ideas?

  • Poor performance and high number of gets on seemingly simple insert/select

    Versions & config:
    Database : 10.2.0.4.0
    Application : Oracle E-Business Suite 11.5.10.2
    2 node RAC, IBM AIX 5.3
    Here's the insert / select which I'm struggling to explain - why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
    INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
      NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
      WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
      WIA.ITEM_TYPE = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          4           0
    Execute      2      3.44       6.36          2      24297        198          36
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.44       6.36          2      24297        202          36
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Also from the tkprof output, the explain plan and waits - virtually zero waits:
    Rows     Execution Plan
          0  INSERT STATEMENT   MODE: ALL_ROWS
          0   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
          0    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             12        0.00          0.00
      gc current block 2-way                         14        0.00          0.00
      db file sequential read                         2        0.01          0.01
      row cache lock                                 24        0.00          0.01
      library cache pin                               2        0.00          0.00
      rdbms ipc reply                                 1        0.00          0.00
      gc cr block 2-way                               4        0.00          0.00
      gc current grant busy                           1        0.00          0.00
    ********************************************************************************
    The statement was executed 2 times. I know from slicing up the trc file that:
    exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
    exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
    If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
    If I make the insert into an empty, non-partitioned table, I get :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.08          0        137         53          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.08          0        137         53          25
    and same explain plan - using index range scan on WF_Item_Attributes_PK.
    This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.10         10         27        136          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.10         10         27        136         25
    So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
    I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
    further info on the objects concerned:
    query source table :
    WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
    WF_Item_Attributes tbl : non-partitioned, 160 blocks
    insert destination table:
    WF_Item_Attribute_Values:
    range partitioned on Item_Type, and hash sub-partitioned on Item_Key
    both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
    WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
    Bind values:
    exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
    exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
    The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
    thanks and regards
    Ivan

    hi Sven,
    Thanks for your input.
    1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
    2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
    3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
    ============= From DBA_Part_Tables : Partition Type / Count =============
    PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
    RANGE   HASH                 77 APPS_TS_TX_DATA
    1 row selected.
    ============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
    Partition Name       TS Name         High Value           High Val Len
    WF_ITEM1             APPS_TS_TX_DATA 'A1'                            4
    WF_ITEM2             APPS_TS_TX_DATA 'AM'                            4
    WF_ITEM3             APPS_TS_TX_DATA 'AP'                            4
    WF_ITEM47            APPS_TS_TX_DATA 'OB'                            4
    WF_ITEM48            APPS_TS_TX_DATA 'OE'                            4
    WF_ITEM49            APPS_TS_TX_DATA 'OF'                            4
    WF_ITEM50            APPS_TS_TX_DATA 'OK'                            4
    WF_ITEM75            APPS_TS_TX_DATA 'WI'                            4
    WF_ITEM76            APPS_TS_TX_DATA 'WS'                            4
    WF_ITEM77            APPS_TS_TX_DATA MAXVALUE                        8
    77 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_TYPE                                    1
    1 row selected.
    PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
    ============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
    Partition Name       SUBPARTITION_NAME              TS Name         High Value           High Val Len
    WF_ITEM49            SYS_SUBP3326                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3328                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3332                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3331                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3330                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3329                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3327                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3325                   APPS_TS_TX_DATA                                 0
    8 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_KEY                                     1
    1 row selected.
    from DBA_Segments - just for partition WF_ITEM49  :
    Segment Name                        TSname       Partition Name       Segment Type     BLOCKS     Mbytes    EXTENTS Next Ext(Mb)
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3332         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3331         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3330         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3329         TblSubPart        16112    125.875       1007         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3328         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3327         TblSubPart        16224     126.75       1014         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3326         TblSubPart        16208    126.625       1013         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3325         TblSubPart        16128        126       1008         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3332         IdxSubPart        59424     464.25       3714         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3331         IdxSubPart        59296     463.25       3706         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3330         IdxSubPart        59520        465       3720         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3329         IdxSubPart        59104     461.75       3694         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3328         IdxSubPart        59456      464.5       3716         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3327         IdxSubPart        60016    468.875       3751         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3326         IdxSubPart        59616     465.75       3726         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3325         IdxSubPart        59376    463.875       3711         .125
    sum                                                                                               4726.5
    [the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
    The Tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
    regards
    Ivan

  • Crashing and poor performance during playback of a large project.

    Hi,
    I've been a regular user of iMovie for about 3 years and have edited several 50GB+ projects of DV-quality footage without too many major issues with lag or 'dropped frames'. I currently have an 80GB project that resides on a 95% full 320GB FireWire 400 external drive and that has been getting very slow to open and near impossible to work with.
    Pair the bursting-at-the-seams external drive with an overburdened, 90% full internal drive, and the poor performance wasn't unexpected. So I bought a 1TB FireWire 400 drive to free up some space on my Mac. My large iTunes library (150GB) was the main culprit and it was quickly moved onto the new drive.
    The iMovie project was then moved onto my Mac's movie folder - I figured that the project needs some "room" to work (not that I really understand how Macs use memory) and that having roughly 80GB free with 1.5GB RAM (which is more than used to have) would make everything just that much smoother.
    Wrong.
    The project opened in roughly the same amount of time, but when I tried to play back the timeline, it plays like rubbish, and after 10-15 secs the Mac goes into 'sleep' mode. The screen goes off, the fans die down and the 'heartbeat' light goes on. A click of the mouse 'wakes' the Mac, only for me to find that if I try again, I get the same result.
    I've probably got so many variables going on here that it's probably hard to suggest what the problem might be but all I could think of was repairing permissions (which I did and none needed it).
    Stuck on this. Anyone have any advice?

    I understand completely, having worked with a 100 GB project once. I found that a movie bloated up to that size was just more difficult to work with, with jerky playback.
    I do have a couple of suggestions for you.
    You may need more than that 80GB free space for this movie. Is there any reason you cannot move it to the 1TB drive? If you have only your iTunes on it, you should have about 800 GB free.
    If you still need to have the project on your computer's drive, set your computer to never sleep.
    How close to finishing editing are you with this movie? If you are nearly done except for adding audio clips, you can export (share) it as a QuickTime Full Quality movie. The resultant QuickTime version of your iMovie will be smaller because it will contain only the clips actually used in the movie, not all the saved whole clips that iMovie keeps as part of its nondestructive editing feature. The QuickTime movie will be one continuous clip, incorporating all your edits and added audio. It CAN be further edited, but you cannot change the text of titles already there, change transitions, or remove audio that has already been added.
    I actually do this with nearly every iMovie. I create my movies by first importing videos, then adding still photos, then editing with titles, effects and transitions. I add audio last, and if it becomes too distorted in playback, I export the movie and then continue adding audio clips.
    My 100+ GB movie slimmed down to only 8 GB with this method. (The large size was due to having so many clips. The movie was from VHS footage of my son's little league all-star game, and the video had so many skipped segments that I had to split it into thousands of clips to remove the dropped ones. Very old VHS tape!).
    I haven't upgraded to QT 7.5.5, but I heard that the jerky playback issue is mostly resolved with this new upgrade. I am in mid-project with about 5 iMovies, so I will probably plod along with my work-around method, not wanting to upgrade in the middle of any of them.
    Hope this is helpful to you.

  • Poor performance on reports that were migrated from 6i to 10g

    We are migrating from 6i client/server to 10g Reports Server and getting poor performance on various reports. Reports that run in seconds in 6i are taking much longer to run or even timing out.
    Reports Server:
    Version 10.1.2.0.2
    initEngine = 1
    maxEngine = 20
    minEngine = 1
    engLife = 1
    engLife = 1
    maxIdle = 30
    The reports are being called from 10g forms with the following:
    T_repstr := '../reports/rwservlet?server=rep_aporaapp_frhome1'
    || '&report='|| T_prog_name
    || '&userid='|| T_nds_uid
    || '&destype=cache'
    || '&paramform=yes'
    || '&mode=Default'
    || '&desformat=pdf'
    || '&orientation=Landscape';
    web.show_document(T_repstr,'_blank');

    Using these settings and not hearing much bad about them:
    Init Engine 1
    Max Engine 6
    Min Engine 0
    Eng Life 10
    MaxIdle 30
    Trace Error
    Trace Replace
    I set my Report Server parameters as follows:
    CACHE SIZE = 700
    CACHE DIRECTORY = (you have to decide)
    IDLE TIMEOUT = 120
    MAX CONNECTIONS = 120
    MAX QUEUE SIZE = 4000
    TRACE OPTIONS = trace_err
    TRACE MODE = trace_replace

  • ITunes 5 poor performance in mini player view

    When I switch the iTunes 5 interface to the "mini player" view (green plus button), iTunes consumes ~65% CPU time continuously. This isn't the case when iTunes is minimized, and when the entire interface is visible it only takes ~15% CPU time continuously. This wasn't the case with iTunes 4.x, in which the mini player mode didn't appear to consume any more CPU time than any other view mode. It's terribly annoying, since it heats the computer significantly and forces my laptop's fan to spin nonstop.
    I can only attribute this behavior to the new "skin," but is anyone else seeing poor performance from iTunes 5 in the mini player view?

    No, I just downloaded iTunes 5 a few days ago and I don't have any add-ons. I should clarify that this is on my 867MHz Powerbook G4.
    I've been observing the CPU usage some more, and it seems to take a huge hit with songs/labels that are very long, i.e. ones that scroll sideways across the yellow display on the second line. I've noticed this behavior with podcasts: the podcast name will be short and will not scroll, then its description will roll up and will be long enough to require sideways scrolling. The poor performance comes only when scrolling titles are being displayed, consuming 45-65% CPU.
    Playing music with short artist and album names consumes 10-15% CPU continuously, with spikes to 35% when the artist/album rolls upwards on the yellow display window. If sideways scrolling is displayed, again CPU usage stays high.
    If iTunes 5 is in mini player mode and not playing anything (only the Mac logo in the yellow window), then continuous CPU usage is <1%.
    It seems to me that text animation in the GUI has taken a huge hit with the upgrade to iTunes 5.

  • How can I return maps to portrait mode in iOS7 ?

    While on my way to a meeting today, with iPhone Maps doing its usual poor job of being a map, it somehow switched to landscape mode. How do you switch it back to portrait mode?

    are you trying to play audio/music on both the SBH52 and the loud speaker?

  • BI4 - Poor Performance

    Hi,
    I have a problem updating several reports, which either time out or take ages to respond when making simple changes. It doesn't seem to matter which mode you are in while making a change (whether data or structure mode) nor which tool is used (Webi or Rich Client).
    I have read of users in other forums reporting similar problems, which result in an "unable to get the first page of the report" error.
    To ensure it wasn't an issue inherited from a previous version of BO (as these reports were originally written in BOXI BI 3.1), I recreated the report from scratch, only to hit the same issue when populating the report with various formulas.
    When this occurs (i.e. the "unable to get the first page of the report" error), I am forced to save and then close the Rich Client, and then have to re-open the file each and every time.
    We are currently using Business Objects BI4.0 SP6 Edge. These reports consist of some 600+ variables; however, they never caused this issue in the older version of BO.
    Please can someone suggest a solution to this issue, as it is taking me much longer to make simple updates to these reports when it ought to be straightforward.
    Cheers
    Ian

    Hi Farzana,
    Thanks for your response. Yes, I had read this on a variety of forums and due to the poor performance wrote the report from scratch again.
    Firstly, I got the structure of the report and saved it as a template. No issues. Then I added in the queries and variables. No issues. It was only when I had populated the report with the formulas / calculations (after about the halfway point) that I started to detect performance issues again.
    This forced my hand and I used RESTful Web Service calls to complete the rest, otherwise it would have been a painful exercise. The report contains some 600+ variables and 750+ cells populated with formula calculations, so it is a busy report.
    I would have thought others with complex reports might have reported similar performance issues, as this worked fine in our old BOXI v3.1 environment.

  • Adobe Reader XI Shows Poor Performance.

    Hi there!
    I've been using Acrobat for a few years now, but Adobe Reader XI is showing poor performance rendering a big PDF file of about 2.8MB (it's an MS Project exported document). I have tested it with other PDF viewers and it works fine. Maybe an AR bug? FYI, it completely freezes the computer. (I'm using a Core i3, Windows 7 Pro x64, 4GB RAM, so the PC is not the problem.) I have tested on 2 PCs and got the same results.

    I just posted the text below in another discussion, but read it and give it a try.
    I am running Windows 7 64-bit with Adobe Reader 11.0.3 and am experiencing the same issue. Opening PDFs takes a long time; there is a delay, it freezes for a period of time, and then it unfreezes and becomes available.
    What I have found to be the issue in my case is that Protected Mode is enabled. If you open Adobe Reader, click Edit and then Preferences, and go to the Security (Enhanced) section, you can turn "Enable Protected Mode at Startup" off by unchecking the check box.
    After that give it a try and see if that doesn't clear up the issue.
    I hope Adobe provides an update or new release to address this issue, as it seems to be a problem for quite a lot of folks. Not sure what is causing it, but it shouldn't be that way. Our users are chomping at the bit to turn it off, but I am telling them not to for now, as I hope there is an update soon to fix and address the problem.
    And don't tell me to turn it off and keep it off either as that is NOT a solution.
    Anyway - hope it helps.

  • IPhone 5 stuck in portrait mode after upgrade to iOS 8

    iPhone 5 stuck in portrait mode after upgrade to iOS 8.
    Phone was fine before the upgrade, iOS 8.1 did not fix the issue.
    Have done a factory reset at the Genius Bar; they say it's a hardware issue, and I have gotten the response below:
    "Now, when you perform an update to any iOS device many system checks are run to verify that all components are working as they should. This is part of the firmware update process. When these checks are run sometimes issues are detected, even if all seemed to be running well previously. When this happens one of two things can occur. Either the hardware will be deactivated completely, or it will be locked in a state of use so that the device can still be used. In the case of the gyroscope for screen rotation, the firmware locks in place so that while not perfect, the device can still remain on some functioning level. This is what appears to have happened when you updated. The software itself did not cause any failure, but it detected a currently upcoming issue and prevented it from becoming worse."
    All I want is a replacement but they want me to pay £199 since it's out of warranty.
    Anyone have the same issue?

    If the device is out of warranty you must pay the out of warranty replacement fee to have your iPhone replaced. If the phone was still under warranty the phone would have been replaced at no charge.

  • Portrait mode format for photos in iPhoto books?

    Attention Apple Execs (Jonathan Ive in particular) - How about creating at least ONE book format that incorporates portrait mode (vertical) photos well? This question was put out to Apple in discussions in 2010 and archived. In 2014 - still no such format!
    Where is Apple's "user experience" crew when we need them?
    You guys, with iPhones and iPads, have moved the world to a much more portrait mode still and video orientation. 
    Can we please, please, please .... have an accommodating book format in iPhoto? I know you are really proud of the face recognition and other sophisticated features in recent editions of iPhoto, but we poor users are still without a basic book capability.
    Ideally, all the existing formats could be expanded with a few pages to accommodate vertical images better.
    Wow us....with "one more thing!"

    Sorry to inform you, but Jonathan Ive is not here - no Apple execs are here. In fact, Apple is not here. As you are aware, this is totally a user-to-user assistance forum manned only by volunteer users.
    If you want to contact Apple use the contact us link at the bottom of every page of these forums - or the iPhoto menu ==> Provide iPhoto feedback
    LN

  • MSI GE70 2OE-015NL very poor performance on battery power

    Hey guys,
    First of all I want to say that I'm really happy with my GE70 notebook. Everything works perfectly fine except for this one thing... When I play games on the notebook while on battery power, the performance is very poor. For example when I normally on AC power get around 250 FPS, it drops down all the way to around 40-60 FPS on battery power. In real demanding games this can be very annoying, since I have to play on AC power constantly for games like Battlefield 4. Although this isn't really a problem for me most of the time, I just don't think that's the concept of a notebook... I monitored some statistics using MSI Afterburner and I noticed that the GPU's core clock is clocked all the way down to a constant 135MHz whilst gaming and the memory is clocked all the way down to around 400MHz. I'm using a GTX 765M. I've already tried changing settings in the Windows power settings, the NVIDIA control panel and I even looked into the BIOS, but I couldn't find a solution to this problem. Apparently the GTX 700M series GPUs support Optimus technology which should give a decent battery life using the full power of your GPU, but for me it doesn't do so. The battery gets drained in around 45 minutes while gaming on very poor performance. I hope you guys can help me out!

    Oyo,
    Gaming notebooks are not designed for playing on battery power; battery life is short, and the power provided by the battery is far below what the power adapter delivers. This means the notebook gets less power and your CPU and GPU work in a lower-power mode = lower results.
    This is normal because it is linked to the battery: the power adapter provides 120W, the battery 49W - you can see the gap between them ^^
    So to play games, use the power adapter and the battery together; this is the best way.

  • Is there a way to create and edit a video in portrait mode / 9:16 in Premiere Elements 12 or 13? Playback is on an iPhone held in portrait position.

    I'm editing an iPhone app video for posting as the first screen in the Apple App Store. The required size for the video is 750x1334, 9:16. Is there a way to set up the initial video in portrait mode? There are many posts on how to rotate a specific clip, but I need the whole video in portrait orientation. How do I accomplish this? I'm using Windows 7 and Premiere Elements 12, and now 13.

    LaurieFrick
    Thanks for giving this portrait Edit area monitor shape a look.
    I have Windows 7 64 bit and Windows 8.1 64 bit and have done the work using Premiere Elements 12/12.1 on Windows 7 64 bit.
    Here is a step by step
    Find the DSLR 1080p30@ 29.97.sqpreset file
    Local Disk C\Program Files\Adobe\Adobe Premiere Elements 12\Settings\SequencePresets\NTSC\DSLR\1080p\
    And, in that 1080p Folder is the DSLR 1080p30@ 29.97.sqpreset file that you seek.
    1. Copy the DSLR 1080p30@ 29.97.sqpreset file and paste it to the computer desktop in a newly created folder named 1920p.
    2. Open the 1920p folder and edit the .sqpreset file in Notepad.
    To do that, right-click the .sqpreset file and, from the pop-up, select Open With and then Notepad.
    3. In the Notepad document, you are going to edit in only 2 places for now, switching 1920 and 1080 to 1080 and 1920 in the top and bottom sections of the Notepad document.
    Then go specifically to File Menu/Save of the Notepad document and hit Save.
    4. At this point, you have the edited .sqpreset file in the 1920p folder on the computer desktop.
    (Change the name of the edited .sqpreset file so that it = DSLR [email protected])
    Move the 1920p Folder from there to add it to the 1080p Folder, the location where you found the original .sqpreset file that you edited.
    Close out of there.
    5. Back in Premiere Elements 12/12.1, manually set the project preset to the new project preset.
    6. When you import your 1080 x 1920 9:16 video file into the project and drag it into the Timeline, you should see that there is no orange line over that Timeline content, indicating that you are seeing the best possible preview of the image.
    7. As I wrote before, here is the hang-up: if I render the Timeline content to get the best possible preview at the Edit level, the video in the Edit area monitor squeezes in, resulting in black borders to the left and right of the video image on screen.
    But otherwise I found I had no problems editing or exporting; I just had to keep away from Timeline rendering for previewing after an edit. I need to find the missing ingredient for 100%.
    I will be writing this up in my blog in a more organized fashion including how I got the project preset description in Change Settings to agree with the changes. Probably I will do that in the morning.
    ATR
