Poor performance when using KDE desktop effects

Hey,
I'm having trouble when using KDE desktop effects (System Settings -> Desktop -> Desktop Effects).
I have a dual-core E5200 at 3 GHz, 2 GB of PC8500 memory, and an HD4850 using the fglrx driver, but I get incredibly bad performance when desktop effects are enabled and I'm watching video. I can barely watch an 800x600 video in full-screen mode without stuttering, with X climbing to 40% CPU usage.
It's really as if my graphics card isn't handling the rendering, yet 3D acceleration is working: I can play 3D games without problems so far (as long as desktop effects aren't enabled, since the CPU has a hard time handling both for recent games).
So I guess it's some problem with 2D acceleration or something like that. I've read that some people have had this issue, but I haven't figured out a way to fix it.
Here is my xorg.conf, in case something is wrong with it:
Section "ServerLayout"
    Identifier "X.org Configured"
    Screen 0 "aticonfig-Screen[0]-0" 0 0
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection
Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    FontPath "/usr/share/fonts/misc"
    FontPath "/usr/share/fonts/100dpi:unscaled"
    FontPath "/usr/share/fonts/75dpi:unscaled"
    FontPath "/usr/share/fonts/TTF"
    FontPath "/usr/share/fonts/Type1"
EndSection
Section "Module"
    Load "dri2"
    Load "extmod"
    Load "dbe"
    Load "record"
    Load "glx"
    Load "dri"
EndSection
Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
EndSection
Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mice"
    Option "ZAxisMapping" "4 5 6 7"
EndSection
Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
EndSection
Section "Monitor"
    Identifier "aticonfig-Monitor[0]-0"
    Option "VendorName" "ATI Proprietary Driver"
    Option "ModelName" "Generic Autodetecting Monitor"
    Option "DPMS" "true"
EndSection
Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "DefaultRefresh" # [<bool>]
    #Option "ModeSetClearScreen" # [<bool>]
    Identifier "Card0"
    Driver "vesa"
    VendorName "ATI Technologies Inc"
    BoardName "RV770 [Radeon HD 4850]"
    BusID "PCI:8:0:0"
EndSection
Section "Device"
    Identifier "aticonfig-Device[0]-0"
    Driver "fglrx"
    BusID "PCI:8:0:0"
EndSection
Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    SubSection "Display"
        Viewport 0 0
        Depth 1
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 4
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 8
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 15
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 16
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection
Section "Screen"
    Identifier "aticonfig-Screen[0]-0"
    Device "aticonfig-Device[0]-0"
    Monitor "aticonfig-Monitor[0]-0"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection
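For what it's worth, here is how I checked that 3D acceleration is active (a sketch; glxinfo and xvinfo are standard X utilities, assuming they are installed):

```shell
# "direct rendering: Yes" means the 3D (DRI) path is active.
glxinfo 2>/dev/null | grep -i "direct rendering" || echo "glxinfo unavailable (not installed or no X display)"

# XVideo provides hardware-assisted 2D video scaling; players fall back to
# CPU scaling (and high X load) when no adaptor is present.
xvinfo 2>/dev/null | grep -i "adaptors" || echo "xvinfo unavailable (not installed or no X display)"
```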
Thank you for any help.

Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "DefaultRefresh" # [<bool>]
    #Option "ModeSetClearScreen" # [<bool>]
    Identifier "Card0"
    Driver "vesa"
    VendorName "ATI Technologies Inc"
    BoardName "RV770 [Radeon HD 4850]"
    BusID "PCI:8:0:0"
EndSection
and
Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
EndSection
I see no reason for those to be there.
Make a backup of your xorg.conf, then remove or comment out those sections.
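A minimal sketch of the backup-then-edit step, demonstrated on a scratch copy so nothing real is touched (in practice the file is /etc/X11/xorg.conf and you would edit it as root):

```shell
# Demo on a scratch copy; substitute /etc/X11/xorg.conf in practice.
dir=$(mktemp -d)
printf '%s\n' 'Section "Device"' '    Driver "vesa"' 'EndSection' > "$dir/xorg.conf"

# 1) Back up before touching anything.
cp "$dir/xorg.conf" "$dir/xorg.conf.bak"

# 2) Comment out the unwanted lines (the demo file holds only the vesa section).
sed -i 's/^/#/' "$dir/xorg.conf"

grep -c '^#' "$dir/xorg.conf"   # prints 3: every line is now commented out
```

Restart X afterwards so the change takes effect; if anything breaks, copying the .bak file back restores the old configuration.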

Similar Messages

  • Is anyone able to explain really poor performance when using 'If Exists'?

    Hello all. We've recently had a performance spike when using the 'if exists' construct, which we use throughout much of our code. The problem appears illogical, since it can be avoided via a tiny modification that does not change the core code.
    I can demonstrate:
    This is the (simplified) form of the original code. Its purpose is to identify when a column value has changed, comparing a main table against a complex view:
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    This does a table scan; however, the table only has 17000 rows and the view only 7000. The SQL executes in approximately 3 seconds.
    However if we add the 'If Exists' construct around the original query, like such:
    if exists (
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    )
    print 1
    The SQL now takes over 2 minutes to run. Note that the core SQL is unchanged; all I have done is wrap it with 'If Exists'.
    I can't fathom why the 'if exists' construct takes so much longer, especially since the core code is unchanged. More importantly, I would like to understand why, since we commonly use this type of syntax.
    Any advice would be greatly appreciated.

    OK, that's interesting. Adding the 'top 1' clause greatly degrades performance.
    The original query (as below) still runs in a few seconds.
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    The 'Top 1' query (as below) takes almost 2 minutes however.  It's exactly the same query, but with 'top 1' added to it.
    select top 1 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    I suspect that 'top 1' performs a very similar operation to 'exists', in that it is supposed to exit as soon as it finds a single row that satisfies the criteria.
    That still doesn't really bring me any closer to understanding what is causing the issue, however.

  • Poor Performance when using Question Pooling

    I'm wondering if anyone else out there is experiencing Captivate running very slowly when using question pooling. We have about 195 questions, some using screenshots in JPEG format.
    Windows Task Manager shows CP using anywhere between 130 and 160 K of memory. What is going on here? It's hammering the system pretty hard. It takes a large effort just to reposition a screenshot or even move a distractor.
    I'm running this on a 3.20GHz machine with 3GB of RAM.
    Any Captivate Gurus out there care to tackle this one?
    Help.

    MtnBiker1966,
    I have noticed the same problem. I only have 60 slides with
    43 questions and the Question Pool appears to be a big drain on
    performance. Changing the buttons from Continue to Go to next slide
    helped a little, but performance still drags compared to not using
    a question pool. I even tried reducing the number of question
    pools, but that did not affect the performance any. The search
    continues.
    Darin

  • BIBean poor performance when using Query.setSuppressRows()

    Does anyone have experience suppressing N/A cell values using BIBean? I was experimenting with Query.setSuppressRows(DataDirector.NA_SUPPRESSION). It does hide the rows that contain N/A values in a crosstab.
    The problem is that performance degrades significantly when I start drilling down the hierarchy. Without calling the method, I was able to drill into the pre-aggregated hierarchy in a few seconds; with setSuppressRows(), the same drill took almost 15 minutes.
    Just for comparison, I used DML to report on the same data I wanted to drill into. With 'zerorow' set to either yes or no, the data was fetched in less than a second.
    Thanks for any help.
    - Wei

    At the moment we are hoping this will be fixed in a 10g database patch which is due early 2005. However, if you are using an Analytic Workspace then you could use OLAP DML to filter the zero and NA rows before they are returned to the query. I think this involves modifying the OLAP views that return the AW objects via SQL commands.
    Hope this helps
    Business Intelligence Beans Product Management Team
    Oracle Corporation

  • Poor performance when using skip-level hierarchies

    Hi there,
    Currently we have big performance issues when drilling in a skip-level hierarchy (each drill takes around 10 seconds).
    OBIEE produces 4 physical SQL statements when drilling into, e.g., the 4th level (one SQL statement per level). The statements run in parallel and are pretty fast (the database doesn't need more than 0.5 seconds to produce the result), but - and here we probably have a problem somewhere - putting the 4 results together in OBIEE takes another 8 seconds.
    These are not big datasets the database is returning - around 5-20 records per select statement.
    The question is: why does it take so long to put the data together on the server? Do we have to reconfigure some parameters to make it faster?
    Please guide.
    Regards,
    Rafael

    If you really and exclusively want "OBIEE can handle such queries" - i.e. not touch the database - then you had best put a clever caching strategy in place.
    The first angle of attack should be the database itself, though. Best sit down with a data architect and/or your DBA to find the best physical setup possible, and once you've optimized that (with regard to the kind of queries emitted against it) you can move up to the OBIS. Always try to fix the issue as close to the source as possible.

  • Poor performance when using drop down box on web report

    We are using dropdown box functionality in web reporting to allow easy selection of characteristic values. We have 4 dropdown boxes
    that represent Region, Area, Country and Division.
    We need to use booked_values = 'Q' to show only relevant values for selection in the dropdown. However, it takes a long time for results to appear
    on the template. Reading from the fact table is quick, but the process of deriving the dropdown values is very slow.
    <object>
             <param name="OWNER" value="SAP_BW"/>
             <param name="CMD" value="GET_ITEM"/>
             <param name="NAME" value="DROPDOWNBOX_1"/>
             <param name="ITEM_CLASS" value="CL_RSR_WWW_ITEM_FILTER_DDOWN"/>
             <param name="DATA_PROVIDER" value="DATAPROVIDER_1"/>
             <param name="BORDER_STYLE" value="BORDER"/>
             <param name="GENERATE_CAPTION" value=""/>
             <param name="IOBJNM" value="ZPC_ORG14"/>
             <param name="BOOKED_VALUES" value="Q"/>
             <param name="TARGET_DATA_PROVIDER_1" value="DATAPROVIDER_1"/>
             ITEM:            DROPDOWNBOX_1
    </object>
    Do you have any suggestions on how to improve performance on drop down?
    Thanks, Jay

    Dear Jayant Dixit,
    1) If the values in the dropdown box are NOT dependent on user selection, then the webserver can send them along with the first page and cache them on the client machine for multiple client/server dialogs.
    2) If the values change based on what the user selected in a previous input, then you need to research how the SAP webserver can optimize the network traffic. For example: a) the Apache webserver has a module that compresses content on the webserver side before sending it to the client; b) a client browser plugin decompresses the received packet and displays it appropriately.
    3) We may need to research the latest SAP webserver capabilities.
    Good luck, BB
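For reference, the Apache 2.x module alluded to in point 2a is mod_deflate; below is a sketch of the stock httpd.conf directives (whether the SAP webserver honours an equivalent setting is an assumption):

```apache
# Load the compression filter and apply it to textual responses only;
# images and other already-compressed content gain nothing from DEFLATE.
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
```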

  • Poor performance when using bind variable in report

    I have a report that takes 1 second to run if I hardcode a particular value into the where clause. However, if I replace the hardcoded value with a bind variable and set the bind variable's default value to the (previously) hardcoded value, the report now takes 50 seconds to run instead of 1!
    Has anyone else seen this behaviour - any suggestions for a workaround will be gratefully received.

    More info
    SELECT patch_no, count(*) frequency
    FROM users_requests
    WHERE patchset IN (SELECT arps2.patchset_name
    FROM aru_bugfix_relationships abr, aru_bugfixes ab, aru_status_codes ac,
    aru_patchsets arps, aru_patchsets arps2
    WHERE arps.patchset_name = '11i.FIN_PF.E'
    AND abr.bugfix_id = ab.bugfix_id
    AND arps.bugfix_id = ab.bugfix_id
    AND abr.relation_type = ac.status_id
    AND arps2.bugfix_id = abr.related_bugfix_id
    AND abr.relation_type IN (601, 602))
    AND included ='Y'
    GROUP BY patch_no
    order by frequency desc, patch_no
    Runs in under 1 second from SQL Navigator and from Portal (if I hardcode the value for fampack).
    Takes ~50 seconds if I replace it with :fampack and set the default value to 11i.FIN_PF.D.

  • [Solved] KDE Desktop Effects no longer working

    Hello,
    I just did an update (I didn't think it was a major one; I don't recall seeing a graphics update, though I'm not sure how to confirm - is there a way to see recently updated packages in yaourt?). After a brief issue with powertop that was solved by a restart (https://bbs.archlinux.org/viewtopic.php?id=179789 - it may be unrelated though), I found that when I restarted, KWin crashed, and since then my desktop effects are not working.
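    On the side question about seeing recent updates: pacman records every upgrade in its log, and yaourt drives pacman underneath, so something like this lists them (the log path is the stock one):

```shell
# List the 20 most recent package upgrades recorded by pacman.
if [ -f /var/log/pacman.log ]; then
    grep -i ' upgraded ' /var/log/pacman.log | tail -n 20
else
    echo "no pacman log on this machine"
fi
```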
    I have gone to the KDE Desktop Effects Settings Menu, and under the advanced menu, set the compositing type back to OpenGL 3.1, clicked apply, and confirmed that I wanted to try it even though it had evidently crashed Kwin. I then received a message stating "21 desktop effects could not be loaded". When I clicked details, a popup window said:
    "For technical reasons it is not possible to determine all possible error causes.
    Desktop effect system is not running."
    For the record, I am on a Thinkpad X201 with intel graphics.
    Any ideas?
    Last edited by TheGuyWithTheFace (2014-04-19 07:35:43)

    Some Extra information:
    I just tried a restart, that didn't work.
    I then went back into the effect settings and saw that "enable effects at startup" was not checked. I also saw that under the advanced settings, the Qt graphics system was set to Raster. Since I'm not sure which it was originally set to, but the current configuration wasn't working, I set it to Native and clicked apply. KWin then crashed; here is the output:
    Application: KWin (kwin), signal: Aborted
    Using host libthread_db library "/usr/lib/libthread_db.so.1".
    To enable execution of this file add
    add-auto-load-safe-path /usr/lib/libstdc++.so.6.0.19-gdb.py
    line to your configuration file "/home/perry/.gdbinit".
    To completely disable this security protection add
    set auto-load safe-path /
    line to your configuration file "/home/perry/.gdbinit".
    For more information about this security protection see the
    "Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
    info "(gdb)Auto-loading safe path"
    [Current thread is 1 (Thread 0x7fafaa788800 (LWP 796))]
    Thread 4 (Thread 0x7faf8beb3700 (LWP 797)):
    #0 0x00007fafa3f7e3f8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
    #1 0x00007fafa420a034 in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib/libQtCore.so.4
    #2 0x00007fafa41fd745 in ?? () from /usr/lib/libQtCore.so.4
    #3 0x00007fafa4209b7f in ?? () from /usr/lib/libQtCore.so.4
    #4 0x00007fafa3f7a0a2 in start_thread () from /usr/lib/libpthread.so.0
    #5 0x00007fafa9fa9d1d in clone () from /usr/lib/libc.so.6
    Thread 3 (Thread 0x7faf8b27a700 (LWP 798)):
    #0 0x00007fafa3f7e3f8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
    #1 0x00007fafa420a034 in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib/libQtCore.so.4
    #2 0x00007fafa41fd745 in ?? () from /usr/lib/libQtCore.so.4
    #3 0x00007fafa4209b7f in ?? () from /usr/lib/libQtCore.so.4
    #4 0x00007fafa3f7a0a2 in start_thread () from /usr/lib/libpthread.so.0
    #5 0x00007fafa9fa9d1d in clone () from /usr/lib/libc.so.6
    Thread 2 (Thread 0x7faf8a878700 (LWP 799)):
    #0 0x00007fafa9fa2fd3 in select () from /usr/lib/libc.so.6
    #1 0x00007fafa42e6c13 in ?? () from /usr/lib/libQtCore.so.4
    #2 0x00007fafa4209b7f in ?? () from /usr/lib/libQtCore.so.4
    #3 0x00007fafa3f7a0a2 in start_thread () from /usr/lib/libpthread.so.0
    #4 0x00007fafa9fa9d1d in clone () from /usr/lib/libc.so.6
    Thread 1 (Thread 0x7fafaa788800 (LWP 796)):
    [KCrash Handler]
    #5 0x00007fafa9ef9389 in raise () from /usr/lib/libc.so.6
    #6 0x00007fafa9efa788 in abort () from /usr/lib/libc.so.6
    #7 0x00007fafa9ef24a6 in __assert_fail_base () from /usr/lib/libc.so.6
    #8 0x00007fafa9ef2552 in __assert_fail () from /usr/lib/libc.so.6
    #9 0x00007fafa7d6f8d9 in ?? () from /usr/lib/libX11.so.6
    #10 0x00007fafa7d6f96e in ?? () from /usr/lib/libX11.so.6
    #11 0x00007fafa7d6fc4d in _XEventsQueued () from /usr/lib/libX11.so.6
    #12 0x00007fafa7d72905 in _XGetRequest () from /usr/lib/libX11.so.6
    #13 0x00007fafa4dfe587 in ?? () from /usr/lib/libGL.so.1
    #14 0x00007fafa4dfbf6b in ?? () from /usr/lib/libGL.so.1
    #15 0x00007faf89bef6db in ?? () from /usr/lib/xorg/modules/dri/i965_dri.so
    #16 0x00007faf89befa33 in ?? () from /usr/lib/xorg/modules/dri/i965_dri.so
    #17 0x00007faf89befb8b in ?? () from /usr/lib/xorg/modules/dri/i965_dri.so
    #18 0x00007faf89b89ef6 in ?? () from /usr/lib/xorg/modules/dri/i965_dri.so
    #19 0x00007fafa4dfd898 in ?? () from /usr/lib/libGL.so.1
    #20 0x00007fafa4dd804c in glXMakeCurrentReadSGI () from /usr/lib/libGL.so.1
    #21 0x00007fafaa34e479 in ?? () from /usr/lib/libkdeinit4_kwin.so
    #22 0x00007fafaa350029 in ?? () from /usr/lib/libkdeinit4_kwin.so
    #23 0x00007fafaa34aefc in ?? () from /usr/lib/libkdeinit4_kwin.so
    #24 0x00007fafaa32f4b5 in ?? () from /usr/lib/libkdeinit4_kwin.so
    #25 0x00007fafaa2b6c75 in ?? () from /usr/lib/libkdeinit4_kwin.so
    #26 0x00007fafa431d6ea in QMetaObject::activate(QObject*, QMetaObject const*, int, void**) () from /usr/lib/libQtCore.so.4
    #27 0x00007fafa41fa958 in QFutureWatcherBase::event(QEvent*) () from /usr/lib/libQtCore.so.4
    #28 0x00007fafa3491f0c in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
    #29 0x00007fafa34984d0 in QApplication::notify(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
    #30 0x00007fafa8bfa88a in KApplication::notify(QObject*, QEvent*) () from /usr/lib/libkdeui.so.5
    #31 0x00007fafa4309a6d in QCoreApplication::notifyInternal(QObject*, QEvent*) () from /usr/lib/libQtCore.so.4
    #32 0x00007fafa430caad in QCoreApplicationPrivate::sendPostedEvents(QObject*, int, QThreadData*) () from /usr/lib/libQtCore.so.4
    #33 0x00007fafa352f71c in ?? () from /usr/lib/libQtGui.so.4
    #34 0x00007fafa43086cf in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQtCore.so.4
    #35 0x00007fafa43089c5 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQtCore.so.4
    #36 0x00007fafa430dae9 in QCoreApplication::exec() () from /usr/lib/libQtCore.so.4
    #37 0x00007fafaa2e7896 in kdemain () from /usr/lib/libkdeinit4_kwin.so
    #38 0x00007fafa9ee5b05 in __libc_start_main () from /usr/lib/libc.so.6
    #39 0x00000000004006fe in _start ()

  • XBMC/mplayer and KDE Desktop effects

    Hi,
    I have set up an Arch-based HTPC with XBMC on top of KDE, but there is one thing I was not able to resolve: the XBMC GUI and video playback get laggy as soon as I DISABLE the KDE desktop effects. My system is i3/Sandy Bridge based with integrated Intel graphics, vaapi installed.
    1. tried different playback settings within XBMC, same results
    2. as soon as I disable the KDE desktop effects, XBMC becomes slow and laggy, even if I don't restart it; if I enable desktop effects it's smooth again
    3. mplayer reacts the same way when used with -vo gl or -vo gl2, so it could somehow be OpenGL related. With -vo xv it runs smoothly in both configurations.
    4. CPU load is constantly <20%
    5. tried as root user, same results
    6. glxinfo reports direct rendering is enabled
    So, any ideas how to resolve this problem? Or how to find out what's going on?
    You may ask why I want to disable desktop effects: with desktop effects on, videos run almost perfectly, but sometimes there are small lags that, I guess, can be related to the desktop effects eating resources, so I want to try without them.
    Here is the custom part of my xorg config; TearFree was needed to get vsync-enabled video playback:
    #20-intel.conf
    #20-intel.conf
    Section "Device"
        Identifier "Intel Graphics"
        Driver "intel"
        Option "AccelMethod" "sna"
        Option "TearFree" "true"
        #Option "AccelMethod" "uxa"
        #Option "AccelMethod" "xaa"
    EndSection
    Section "dri"
        Mode 0666
    EndSection

    kichawa wrote: @ChemBro root access? Where is more info about that?
    http://nvidia.custhelp.com/app/answers/detail/a_id/3109
    Security vulnerability CVE-2012-0946 in the NVIDIA UNIX driver was disclosed to NVIDIA on March 20th, 2012. The vulnerability makes it possible for an attacker who has read and write access to the GPU device nodes to reconfigure GPUs to gain access to arbitrary system memory. NVIDIA is not aware of any reports of this vulnerability, outside of the disclosure which was made privately to NVIDIA.

  • DOI - I_OI_SPREADSHEET, poor performance when reading more than 9999 record

    Hi,
    Please read this message in the ABAP Performance and Tuning section and see if you have any advice.
    Best Regards,
    Marjo

    Hi,
    I met this issue when I tried to write values to a massive number of fields in an Excel range.
    I solved it by using CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_EXPORT.
    So, I think you may be able to fix it in the same way:
    1. Select the range via I_OI_SPREADSHEET
    2. Call method I_OI_DOCUMENT_PROXY->COPY_SELECTION
    3. Call method CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_IMPORT
    Cheers,

  • Printing to a Printer Connected to TIme Machine when using Remote Desktop

    Hi,
    I am having trouble printing to a printer connected to our wireless network via Time Capsule when I am on my remote desktop connection. Printing from all other programs works. I downloaded Bonjour on our Windows-based server, and it did not detect any printers. When connected directly to the printer, I can print while using remote desktop. Any suggestions are much appreciated.
    Thanks!
    Mac OS X (10.5.4)

    AirPort Extreme/Express do emulate JetDirect/port 9100 printing, but that isn't what they use for printing from OS X. OS X and the Bonjour/AirPort software use USB port redirection, a unique and separate method.
    If the HP driver supports any protocol over Ethernet, it would be AppleTalk. Does that router/server use AppleTalk?

  • RMBP reaches 104C when using remote desktop.  Is this normal?

    I have a new rMBP that is reaching temps of 104C when using remote desktop or other moderately intensive applications.  Is this normal?

    No, that's way too hot. It's only a few degrees below automatic thermal shutdown.

  • Software shuts down when using the lightning effects filter. Help!

    Software shuts down when using the lightning effects filter. Help!

    Have you done a Forum search yet?
    Is your GPU driver up to date?
    What are your version of Photoshop, OS and GPU anyway?

  • Poor OEL5-3 performance when using over 1Gb of RAM

    Hi, I have a Linux distribution (Oracle Enterprise Linux 5.3, i.e. Red Hat) that I have installed. It works fine when I use 2*512 MB DIMMs or replace them with a single 1 GB DIMM. However, when I try to go above 1 GB, bootup and general performance deteriorate badly. The BIOS picks up the memory changes OK, and I am using the same type of memory sticks. It makes no difference whether I load a single memory channel with the 2 sticks or balance the sticks over the 2 channels - I still get poor performance. When I return to 1 GB of RAM, performance is great.
    In terms of install, I just followed the Oracle release notes and readme on the DVD itself. To be honest, they are incredibly basic. There does not appear to be any specific OEL 5.3 installation manual on Technet. I simply followed the prompts on screen and chose a default install.
    I have seen quite a few hits concerning performance problems over 1 GB on Linux, such as having to enable Highmem support in the kernel - however, I have not seen any instructions on how to check or do this - any ideas?

    jimthompson wrote:
    [...]
    I still might have a concern about your hardware:
    Your two 512 MB sticks are undoubtedly well matched and live comfortably together. Hopefully they did actually use dual-channel mode.
    Your 1 GB stick lives well on its own as well.
    However, for greater than 1 GB, i.e. 1 GB + 0.5 GB, you may be using 2 sticks that do not live well together and are causing issues with delayed response. (At worst, only 0.5 GB is being recognised - use grep MemTotal /proc/meminfo if necessary to check.) In general, such issues seem better with DDR2 than with earlier memory. I suppose timing with memtest86 might be an interesting metric. I must admit I'd prefer to use 1G+1G or even 2G+2G (I know that on 32-bit I might not see it all) rather than 1G+2G or 1G+0.5G.
    Anyway, what I am trying to say is that hardware should not be eliminated as a possible cause at this stage.
    Rgds - bigdelboy
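    The Highmem question from the original post can be answered from the running system; a sketch (the /boot/config-* path is distro-dependent, so treat it as an assumption):

```shell
# How much memory the kernel actually sees (compare with installed RAM):
grep MemTotal /proc/meminfo

# Whether the kernel was built with Highmem support; a 32-bit kernel needs
# CONFIG_HIGHMEM4G/64G to address much beyond ~896 MB of RAM.
grep -i CONFIG_HIGHMEM "/boot/config-$(uname -r)" 2>/dev/null \
    || echo "no HIGHMEM option found (64-bit kernel, or config not in /boot)"
```

    If the larger configuration still shows the full amount in MemTotal, the memory is at least being recognised, and suspicion shifts back to the DIMM pairing.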

  • Poor performance when dragging item within a list loaded with images - Flex 4

    Hi,
    I have a custom-built List component using a TileLayout. I use a custom itemRenderer to load images into the list from our server (in this test, 44 images are loaded). I have enabled dragEnabled and dragMove so that I can move items around within the list. The problem comes when I start dragging an item: the dragging operation is very slow and clunky.
    When I move the mouse to drag an item, the dropIndicator does not get refreshed for a few seconds and the movement lags badly. I've also noticed that during this time my CPU usage spikes to around 25-40%. The funny part is I can scroll the list just fine without any lag at all, but the minute I drag, the application starts to lag really badly. I do have some custom dragOver code that overrides the dragOverHandler of the list control, but the problem persists even if I take that code out. I've tried useVirtualLayout set to both true and false, and neither setting made a difference.
    Any ideas as to what could be causing the poor performance and/or how I can go about fixing it?
    Thanks a lot in advance!

    Ahh, good call about the Performance profiler. I'm pretty new to profiling with Flex (I hadn't used Builder Pro before the Flex 4 beta), so please forgive me.
    I found some interesting things running the performance profiler, but I'm not sure I understand what to make of it all. I cleared the Performance Profile data right before I loaded the images into the list. I then moved some images around and captured the profiling data (if I understand Adobe correctly, this is the correct way to capture performance information for a set of actions).
    What I found is there is a [mouseEvent] item that took 3101ms with 1 "Calls" (!!!!). When I drill down into that item to see the Method Statistics, I actually see three different Callees and no callers. The sum of the time it took the Callees to execute does not even come close to adding up to the 3101 ms (about 40ms). I'm not sure what I can make of those numbers, or if they are even meaningful. Any insight into these?
    The only other items that stand out to me are [pre-render] which has 863ms (Cumulative Time) / 639ms (Self Time), [enterFrameEvent] which has 746ms / 6ms (?!), and [tincan] (what the heck is tincan?) which has times of 521ms for both Cumulative and Self.
    Can anyone offer some insight into these numbers and maybe make some more suggestions for me? I apologize for my ignorance on these numbers - as I said, I'm new to the whole Flex profiling thing.
    Many thanks in advance!
    Edit: I just did another check, this time profiling only from the start of my drag to the end of my drop, and I still see [mouseEvent] taking almost 1000ms of Cumulative Time. However, when I double click that item to see the Method Statistics, no Callers or Callees are listed. What's causing this [mouseEvent] and how come it's taking so long?
