Poor Elite3D OpenGL performance in Solaris 10

I'm running an old Ultra10 with an Elite3D-m3. Since I installed Solaris 10, I've had a very slow desktop and awful performance in 3D applications. This tells me that there's something wrong between the video card and OpenGL, like it's in software graphics mode. How can I tell if I'm actually running in software OpenGL or if it's the proper performance for the card? Do I need special drivers to run it since the card is so old?

I found out about Update 4 very shortly after posting this, and I have now upgraded. Unfortunately, the problem still persists.
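
One way to check whether you are stuck on the software renderer is to create a GLX context and print the renderer string: a string naming a software rasterizer (rather than the Elite3D/AFB pipeline) means the card is not being used for OpenGL at all. Below is a minimal sketch; it assumes the Sun OpenGL headers and libraries are in the usual /usr/openwin locations, so adjust the paths if your install differs. If it is present, the Sun demo utility often found at /usr/openwin/demo/GL/ogl_install_check reports much the same information.

    /* glcheck.c - print the OpenGL vendor/renderer/version strings for the
     * default display, to tell hardware from software rendering.
     * Build (typical Sun OpenGL paths on Solaris; adjust as needed):
     *   cc glcheck.c -I/usr/openwin/include -L/usr/openwin/lib -lGL -lX11 -o glcheck
     */
    #include <stdio.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy;
        XVisualInfo *vi;
        GLXContext ctx;
        Colormap cmap;
        XSetWindowAttributes swa;
        Window win;
        int attrs[] = { GLX_RGBA, GLX_RED_SIZE, 1, GLX_GREEN_SIZE, 1,
                        GLX_BLUE_SIZE, 1, GLX_DOUBLEBUFFER, None };

        dpy = XOpenDisplay(NULL);
        if (dpy == NULL) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        vi = glXChooseVisual(dpy, DefaultScreen(dpy), attrs);
        if (vi == NULL) {
            fprintf(stderr, "no suitable GLX visual\n");
            return 1;
        }

        /* The strings are only available once a context is current, so make a
         * tiny window whose visual matches the GLX visual we were given. */
        ctx = glXCreateContext(dpy, vi, NULL, True);
        cmap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                               vi->visual, AllocNone);
        swa.colormap = cmap;
        swa.border_pixel = 0;
        win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 1, 1, 0,
                            vi->depth, InputOutput, vi->visual,
                            CWColormap | CWBorderPixel, &swa);
        glXMakeCurrent(dpy, win, ctx);

        printf("GL_VENDOR  : %s\n", (const char *)glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
        /* Indirect rendering is not by itself proof of a software path on Sun's
         * stack, but a renderer string that mentions software is conclusive. */
        printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "yes" : "no");

        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, ctx);
        XCloseDisplay(dpy);
        return 0;
    }

If the renderer string does point at a software rasterizer, the usual suspects are a Sun OpenGL installation that is missing or does not match the AFB (Elite3D) device driver, or an X server configuration that is not exporting the accelerated visuals.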

Similar Messages

  • Poor I/O Performance on Solaris - v1.4.1_01

    Does anyone have any comments on the following? It's an I/O analysis done to determine which Java methods might be used to replace an existing C++ platform-specific file re-compression subsystem. The system has to handle up to 200,000 files per day, every day.
    Java I/O test results for converting ZERO_ONE compressed files to standard compressed files. Java 1.4.1, 12-04-2002.
    The input dataset contains 623,230,991 bytes in 1391 files. The input files are in ZERO_ONE compression format.
    For all tests:
    1) An input data file was opened in buffered mode.
    2) The data was read from the input and expanded byte by byte.
    3) The expanded data was written to a compressing output stream as it was created.
    4) Steps 1 through 3 were repeated for each file.
    64K buffers were used for all input and output streams.
    Note: Items marked with "**" hang at random on Solaris (2.7 & 2.8) when processing a large number of small files. They always hang on BufferedInputStream.read(). There may be a deadlock with the 'process reaper' because we're calling exec() and waitFor() in quick succession. The elapsed times for those items are estimates based on the volume of data processed up to the point where the process hung. This bug has been reported to Sun.
    -- elapsed time --
    NT     Solaris 2.7  Method
    n/a    18 min       Current C++ code:
                          fopen(r) -> system(compress)
    19 min 19 min **    BufferedInputStream -> exec(compress)
    29 min 21 min       1) BufferedInputStream -> file
                        2) exec(compress file)
    24 min 42 min **    BufferedInputStream -> exec(gzip)
    77 min 136 min      BufferedInputStream -> GZIPOutputStream
    77 min --           BufferedInputStream -> ZipOutputStream
    The performance of GZIPOutputStream and ZipOutputStream makes them useless for any production system. The 2x performance degradation on Solaris (vs. NT) for these two streams is surprising. Does this imply that the libz on Solaris is flawed? Notice that whenever libz is involved in the process stream (exec(gzip), GZIPOutputStream, ZipOutputStream) the elapsed time climbs dramatically on Solaris. (A standalone libz timing sketch appears at the end of this thread.)

    Re-submitted Performance Matrix with formatting retained.
    Note: for the "-> system()" and "-> exec()" methods, we write to the
    STDIN of the spawned process.
    -- elapsed time --
    NT     Solaris 2.7 Method
    n/a    18 min      Current Solaris C++ code:
                         fopen(r) -> system("compress -c >aFile")
    19 min 19 min **   BufferedInputStream -> exec("compress -c >aFile")
    29 min 21 min      1) BufferedInputStream -> "aFile"
                       2) exec("compress aFile")
    24 min 42 min **   BufferedInputStream -> exec("gzip -c aFile")
    77 min 136 min     BufferedInputStream -> GZIPOutputStream("aFile")
    77 min --          BufferedInputStream -> ZipOutputStream("aFile")
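
    As a way to separate the "is libz on Solaris flawed?" question from JVM and process-spawning overhead, a small native timing loop that pushes the same data through zlib directly can be run on both NT and Solaris; if native libz shows no 2x gap, the slowdown is more likely in the Java stream layer. A rough sketch, assuming zlib 1.2 or later is available as -lz (the file name on the command line is whatever sample input you choose):

    /* ztime.c - rough timing of one pass of zlib compression over a single
     * input file, so libz speed can be compared across machines without any
     * JVM or exec() overhead in the picture.  Assumes zlib 1.2 or later.
     * Build: cc ztime.c -lz -o ztime      Usage: ./ztime <somefile>
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <zlib.h>

    int main(int argc, char **argv)
    {
        FILE *fp;
        long insize;
        unsigned char *in, *out;
        uLongf outsize;
        clock_t t0, t1;
        int rc;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        /* Slurp the whole file into memory so only deflate time is measured. */
        fp = fopen(argv[1], "rb");
        if (fp == NULL) { perror("fopen"); return 1; }
        fseek(fp, 0, SEEK_END);
        insize = ftell(fp);
        fseek(fp, 0, SEEK_SET);
        in = malloc((size_t)insize);
        if (in == NULL || fread(in, 1, (size_t)insize, fp) != (size_t)insize) {
            fprintf(stderr, "read failed\n");
            return 1;
        }
        fclose(fp);

        outsize = compressBound((uLong)insize);
        out = malloc(outsize);
        if (out == NULL) { fprintf(stderr, "malloc failed\n"); return 1; }

        t0 = clock();
        rc = compress2(out, &outsize, in, (uLong)insize, Z_DEFAULT_COMPRESSION);
        t1 = clock();

        if (rc != Z_OK) { fprintf(stderr, "compress2 failed: %d\n", rc); return 1; }
        printf("%ld bytes -> %lu bytes in %.3f s of CPU time\n",
               insize, (unsigned long)outsize,
               (double)(t1 - t0) / CLOCKS_PER_SEC);

        free(in);
        free(out);
        return 0;
    }

    Timing "gzip -c <file> >/dev/null" from the shell on the same machines gives a second native data point, again with no JNI or process-spawning cost involved.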

  • When will the atrocious OpenGL performance be fixed?

    I have a 15" MBP w/ the 8600m GT and have been suffering with absolutely terrible OpenGL performance ever since I bought it. I am aware this is not a new issue and I am not alone, but I was reminded of just how bad the situation is when I was reading a review of a new Santa Rosa-based MacBook, and it included OpenGL performance tests. Of course the MBP beat the regular MB in OpenGL performance, but the iMac with a Radeon 2600 beat the MBP by about 30%, even though the MBP has vastly superior graphics hardware. This is unacceptable. This has been unacceptable for many many many months. Does anyone from Apple ever read these forums? You've had months and months to fix this issue. If an MBP user just installs Boot Camp he instantly gets double the OpenGL performance in Windows. Have I mentioned this is unacceptable? When will this be fixed?

    In addition to poor OpenGL performance, I've also had problems with any program utilizing antialiasing support. I recently sat down and called Apple and had a discussion with their support line. While I didn't mention the poor OpenGL performance, I did mention the graphical errors I was experiencing with my MBP. I was told these issues are not normal, and I have a replacement on the way. While these issues are somewhat numerous, I've not been able to find much more than a few articles or discussions on the topic. I'm hoping this means it is contained to a few MBPs and not all; we'll see when the replacement comes. If not, however, I'd consider a class action suit. The machine I bought was sold as a portable gaming machine (apple.com/games/hardware), and that's not what I've gotten. I am very disappointed that Apple has not even acknowledged the issue to let us know some fix is in the works. If enough of us make a big enough noise about it, perhaps we can get some sort of reply. I'd recommend sending a note to Apple as mentioned, then taking some time to sit down and give Apple a call. They are very interested in listening to you; don't feel silly about calling about game performance, since they are advertising these machines as such.
    cheers

  • Nokia 5800 - poor wi-fi performance?

    I own both a 5800 and an E51. Both wi-fi chips are set to Tx 100 mW. I notice that the E51 detects more networks and connects to more networks than the 5800. I compared the signal strength between the two phones; the E51's signal is always 20-30% higher than the 5800's. I must say the 5800 is quite a poor wi-fi performer.

    Tough luck, mate: you must have a phone with the well-known poor wi-fi connection fault.
    Easy to fix.
        jje

  • Poor Wi-fi Performance

    Hi all,
    I have just installed Lion on my early 2011 MacBook Pro and I noticed my Wi-Fi seems slower than usual. I looked at the System Information and noticed the Wi-Fi Card Type shows "Third-Party Wireless Card". Shouldn't it show "AirPort Extreme"? Could this be the cause of my poor Wi-Fi performance?
    Thanks

    SOLUTION FOUND for the MacBook Pro's poor Wi-Fi wireless signal.
    First, let's talk about the actual problem: too much signal noise. There is a hidden app in OS X called "Wi-Fi Diagnostics". This will tell you everything you need to know about why your Wi-Fi is not working!
    http://support.apple.com/kb/HT5606
    HINT: It is included in 10.7, just look for it.
    The solution!
    Turn off your wi-fi and run your internet through your home's electrical wires. Power-line adapters are devices that turn a home's electrical wiring into network cables for a computer network.
    http://reviews.cnet.com/2733-3243_7-568-8.html
    I purchased a TP-LINK TL-PA511 and for the first time I'm downloading with ALL my internet speed (30 Mb down / 5 Mb up), tested via www.speedtest.net. I'M SOOOOO HAPPY!!!!
    http://www.amazon.com/TP-LINK-TL-PA511-Powerline-Starter-Kit/dp/B0081FLFQE/ref=sr_1_1?ie=UTF8&qid=1385395374&sr=8-1&keywords=TP-LINK+TL-PA511

  • OpenGL performance degradation after sleep

    On an 8-core Mac Pro (early 2008) with a Radeon HD 5770, I'm seeing OpenGL performance drop severely after I wake the computer from sleep. So far, I've seen it in every OpenGL application I have.
    Filed a bug report to Apple about this (9828111).
    The question is: does anyone else experience similar problems?

    I have an Apple 30" Cinema HD display connected to a Radeon HD 5870 1GB in a MacPro3,1 running Mountain Lion. There are threads here that say changing the screen resolution will fix the problem. Changing the screen resolution did not restore OpenGL performance for me after waking from sleep.
    I figured that if changing resolutions is supposed to fix the problem, then perhaps the thing that fixes it is that the Radeon timings (pixel clock) are altered/recalculated/reset/whatever whenever the resolution changes.
    But all the resolutions for the LCD display are scaled resolutions, which means the Radeon is always outputting 2560x1600. The framebuffer is of different sizes but the output timings are unchanged.
    Therefore, the solution might be to create a custom resolution (using something like SwitchResX) which is not a scaled resolution. The EDID of the Cinema HD display says it supports 1280 x 800 @ 59.910Hz and 2560 x 1600 @ 59.860Hz. I used SwitchResX to add a 1280 x 800 @ 59.910Hz non-scaled resolution and restarted the Mac.
    Now when I wake the Mac from sleep, I can use that 1280 x 800 non-scaled resolution to restore OpenGL performance. It works.
    Note that the Apple graphics drivers will not allow both a scaled and non-scaled version of the same resolution. When I add the non-scaled 1280 x 800, I can no longer use the scaled 1280 x 800. The difference between 1280 x 800 scaled and non-scaled is that for scaled, the graphics card does the scaling, adds filtering to the pixels (making them blurry), and outputs 2560 x 1600. Non-scaled outputs 1280 x 800 and the Cinema HD itself quadruples the pixels to 2560 x 1600 without filtering, making them very sharp and crisp. (A small mode-listing sketch follows below.)
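
    For anyone who wants to see which of the reported modes are scaled without installing anything, CoreGraphics can list them; a mode whose point size differs from its pixel size is a scaled one. This is only an inspection sketch (it assumes OS X 10.8 or later for the pixel-size getters), and you would still need something like SwitchResX to actually add a non-scaled mode:

    /* modes.c - list the display modes CoreGraphics reports for the main display,
     * showing point size vs. actual pixel size (a mismatch means a scaled mode).
     * Build on OS X 10.8 or later: cc modes.c -framework CoreGraphics -o modes
     */
    #include <stdio.h>
    #include <CoreGraphics/CoreGraphics.h>

    int main(void)
    {
        CGDirectDisplayID display = CGMainDisplayID();
        CFArrayRef modes;
        CFIndex i, count;

        /* NULL options: CoreGraphics returns the modes it considers usable for
         * the current configuration, which may be a subset of what the EDID lists. */
        modes = CGDisplayCopyAllDisplayModes(display, NULL);
        if (modes == NULL) {
            fprintf(stderr, "no display modes reported\n");
            return 1;
        }

        count = CFArrayGetCount(modes);
        for (i = 0; i < count; i++) {
            CGDisplayModeRef mode = (CGDisplayModeRef)CFArrayGetValueAtIndex(modes, i);
            size_t w  = CGDisplayModeGetWidth(mode);       /* size in points */
            size_t h  = CGDisplayModeGetHeight(mode);
            size_t pw = CGDisplayModeGetPixelWidth(mode);  /* size in pixels */
            size_t ph = CGDisplayModeGetPixelHeight(mode);
            double hz = CGDisplayModeGetRefreshRate(mode);

            printf("%4zu x %-4zu @ %5.2f Hz  framebuffer %4zu x %-4zu %s\n",
                   w, h, hz, pw, ph, (w == pw && h == ph) ? "" : "(scaled)");
        }

        CFRelease(modes);
        return 0;
    }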

  • How to install OpenGL files on Solaris 10

    I found that the OpenGL development files are missing from the Solaris 10 system when I tried to build gtkglext from source; the 'configure' command fails with:
    bash-3.00$ ./configure
    checking GL/glx.h usability... no
    checking GL/glx.h presence... no
    checking for GL/glx.h... no
    configure: error: Cannot find GLX header
    I remember that for software groups during the Solaris 10 installation, I chose the Entire Solaris Software Group to guarantee the development packages were installed. Could you give some help on how to install the OpenGL packages from the Solaris 10 installation disks? (A minimal header/library compile check is sketched at the end of this thread.)

    It's hard to believe that no one here (the official Sun developer network) can answer my question. Maybe few people develop under Solaris. That must be it.
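
    For what it's worth, the configure test that fails above is only checking whether GL/glx.h can be included and linked against. On Solaris 10 the Sun OpenGL headers and libraries normally live under /usr/openwin (delivered by the SUNWgl* packages, typically SUNWglh for the headers and SUNWglrt for the runtime; "pkginfo | grep -i SUNWgl" shows what is present). A tiny probe program reproduces the check; the paths below are the usual Sun OpenGL locations and may need adjusting:

    /* glxprobe.c - the same check gtkglext's configure performs: can GL/glx.h
     * be included and a GLX call resolved?
     * Build: cc glxprobe.c -I/usr/openwin/include -L/usr/openwin/lib -lGL -lX11 -o glxprobe
     */
    #include <stdio.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int major = 0, minor = 0;

        if (dpy != NULL && glXQueryVersion(dpy, &major, &minor))
            printf("GLX header and library found, GLX version %d.%d\n", major, minor);
        else
            printf("GL/glx.h compiled, but no usable GLX on this display\n");

        if (dpy != NULL)
            XCloseDisplay(dpy);
        return 0;
    }

    If the compile itself fails because GL/glx.h is missing, add the header and runtime packages from the installation media, then point configure at the same paths, e.g. CPPFLAGS=-I/usr/openwin/include LDFLAGS=-L/usr/openwin/lib ./configure.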

  • How to improve the OpenGL performance for AE

    I upgraded my display card from an Nvidia 8600GT to a GTX260+, hoping for better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the 2 cards with the Cinebench software and the results are almost the same for the 2 cards.
    I wonder why the GTX260+ costs about 3 times as much as the 8600GT, yet the OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance, please?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
    No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect, or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects. I just wonder why the OpenGL performance of such an expensive card should be so weak.
    It's not the card, it's what the card does. See my comment above. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will let you manipulate bazillions of polygons; as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when it needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL (it didn't exist when AE was created); rather, OpenGL was bolted on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • ZBook 17 g2 - poor DPC Latency performance when running from z Turbo Drive PCIe SSD

    I'm setting up a new zBook 17 g2 and am getting very poor DPC latency performance (> 6000 us) when running from the PCIe SSD. I've re-installed the OS (Win 7 64-bit) on both the PCIe SSD and a SATA HDD; the DPC latency is fine when running from the HDD (50 - 100 us) but horrible when running from the PCIe SSD (> 6000 us). I've updated the BIOS and tried every combination of driver and component enabling/disabling I can think of. The DPC latency is extremely high from the initial Windows install with no drivers installed, and adding drivers seems to have no effect on it.
    Before purchasing the laptop I found this review: http://www.notebookcheck.net/Review-HP-ZBook-17-E9X11AA-ABA-Workstation.106222.0.html where the DPC latency measurement (middle of the page) looks OK. Of course, this is the prior version of the laptop and I believe it does not have the PCIe SSD. Combining that with the fact that I get fine performance when running from the HDD, I am led to believe that the PCIe SSD is the cause of the problem.
    Has anyone found a solution to this problem? As it stands right now my zBook is not usable for digital audio work when running from the PCIe SSD. But it cost me a lot of money, so I'd sure like to use it...! Thanks, rgames

    Hi mooktank, no solution yet but, as of about six weeks ago, HP at least acknowledged that it's a problem (finally). I reproduced it perfectly on another zBook 17 g2 and another PCIe SSD in the same laptop, and HP was able to reproduce the problem as well. So the problem is clearly in the BIOS or in some driver related to the PCIe SSD. It could also be the firmware in the drive itself, but I can't find any other PCIe drives in the 60 mm form factor, so there's no way to see if a different type of drive would fix the problem.
    My suspicion is that it's related to the PCIe sleep states - those are known to cause exactly these types of problems because the drive takes quick "naps" to save power and there's a delay when it is told to wake back up. That delay causes a delay in the audio buffer that results in pops/crackles/stutters that would never be noticed doing other tasks like video editing or CAD work. So it's a problem specific to folks who need low-latency audio performance (very few apps require low-latency audio; video editing, for example, uses huge buffers with relatively high latency). A lot of desktops offer a BIOS option to disable those sleep states, but no such option exists in HP's BIOS for that laptop. In theory you can do it from within Windows, but it has no effect on my system. That might be one of those options that Windows allows you to change but that actually does nothing.
    One workaround is to disable CPU throttling. That makes the CPU run at full speed all the time and, I believe, also disables the PCIe and other sleep states. When I disable CPU throttling, DPC latency goes back to normal. However, the CPU is then running full speed all the time, so your battery life basically goes to nothing and the laptop gets *very* hot. Clearly that is not necessary, because the laptop runs fine from the SATA SSD. HP needs to fix the latency problem associated with the PCIe drive. The next logical step is a BIOS update that provides a way to disable the PCIe sleep states without disabling CPU throttling, like on many desktop systems.
    The bad news is that HP tech support is not very technical, so it takes forever for them to figure out what I'm talking about. It took a couple of months for them to start using the DPC Latency Checker. Hopefully there will be a fix at some point... in the meantime, I hope HP sends me a check for spending so much time educating their techs on how computers work, and for countless hours lost re-installing different OSes only to show that the performance is exactly the same as shown in the DPC Latency Checker. rgames

  • OpenGL/Elite3D Performance in Solaris 10?

    Hi Folks,
    I've searched but can't seem to find anything on this.
    I have an Ultra-2 with 2x300MHz, 640MB of RAM, and an Elite3d-M6 framebuffer. Life is good, and Solaris 10 is great.
    But - I use a brain modeling application that converts MRI images of the skull to 3d maps of the brain. Under Solaris 9, performance was awesome - fully accelerated, very smooth rotation and scrolling of the 3d models.
    Using the same software under Solaris 10, its graphics performance is very poor. I've also noticed that the OpenGL plugin for XMMS (which ran great under Solaris 9) performs terribly in Solaris 10. The problem is the same in both CDE and JDS.
    To me, this says 'OpenGL problem'. But I've done a full Solaris 10 install (and several re-installs, for other reasons), and performance is equally poor every time. OpenGL really seems to be installed and working properly, but performance is really bad. Does anyone have any ideas about this one?
    Thanks,
    tim

    I found out about Update 4 very shortly after posting this, and I have now upgraded. Unfortunately, the problem still persists.

  • HD 3870 poor OpenGL performance

    For some reason, XBench is giving me some really low OpenGL scores. Any ideas what the problem might be? I'm getting scores of around 86 when others with the same card are getting scores in the 200s.

    It is influenced by the CPU, I suspect, and XBench is, well, not a real benchmark tool.
    Games, video performance, frame rates, and tools like Cinebench 10 are also heavily influenced by other factors.

  • Poor OpenGL performance with my laptop

    Hello.
    I have installed Arch Linux on my Nvidia desktop computer and I can play OpenGL games smoothly. But the problem is with my laptop. I remember, a long time ago, OpenGL ran pretty well on Linux compared to Windows. For the past month I've had both Windows and Arch on my laptop, and when I tried to play OpenArena, it ran much better on Windows than on Linux. What has happened to the Intel driver? Now I only have Linux and want to continue playing, and it's impossible with this performance.
    I've read many posts and I've failed in my purpose; I'm missing something...
    $ lspci | grep -i vga
    00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
    Thanks in advance.

    cocotero wrote: Yes, KMS is enabled. I've read that the better solution is to downgrade some xorg-related packages. Is it impossible to use the Intel driver on laptops that are 3 or more years old? I mean, with good performance.
    Unfortunately yes. In the five years I've used Linux, I have seen that once an Intel video card is no longer supported there is no going back to its optimum state. The problem with downgrading packages is also that at some point you would no longer be able to update other software without issues.

  • Help! Poor network (LAN) performance with T2000, Solaris 10, GB

    Hi-
    I'm having a problem that I'm hoping someone can help me with.
    We've recently purchased a few of the T2000 boxes, with the ipge (GB) cards in them.
    The problem we're having is that we can't seem to get more than 500 Mbit/s out of them, at least the way we're measuring them.
    So, I have 2 questions... (a) how can we validate that we are in fact getting GB performance for file downloads, etc. and (b) is there something that we have to do in order to accomplish this if our readings are correct?
    Background:
    What we've done so far is as follows:
    Validate that the connection is indeed GB, full duplex.
    root@conn1 # dladm show-dev
    ipge0 link: unknown speed: 1000 Mbps duplex: full
    ipge1 link: unknown speed: 0 Mbps duplex: unknown
    ipge2 link: unknown speed: 0 Mbps duplex: unknown
    ipge3 link: unknown speed: 0 Mbps duplex: unknown
    root@conn1 #
    We've run a crossover cable between 2 of them to eliminate any "switch" issues from the equation.
    We then ran iperf, as a server on one of them, as a client on the other.
    Server Output:
    root@conn2 # iperf -s
    Server listening on TCP port 5001
    TCP window size: 48.0 KByte (default)
    [  4] local 192.168.1.215 port 5001 connected with 192.168.1.71 port 32795
    [ ID] Interval Transfer Bandwidth
    [  4] 0.0-10.0 sec 598 MBytes 502 Mbits/sec
    Client output:
    root@conn1 # iperf -c 192.168.1.215 -i 1
    Client connecting to 192.168.1.215, TCP port 5001
    TCP window size: 48.0 KByte (default)
    [  3] local 192.168.1.71 port 32795 connected with 192.168.1.215 port 5001
    [ ID] Interval Transfer Bandwidth
    [  3] 0.0- 1.0 sec 57.8 MBytes 485 Mbits/sec
    [  3] 1.0- 2.0 sec 59.9 MBytes 502 Mbits/sec
    [  3] 2.0- 3.0 sec 60.1 MBytes 504 Mbits/sec
    [  3] 3.0- 4.0 sec 59.7 MBytes 501 Mbits/sec
    [  3] 4.0- 5.0 sec 59.9 MBytes 502 Mbits/sec
    [  3] 5.0- 6.0 sec 60.7 MBytes 509 Mbits/sec
    [  3] 6.0- 7.0 sec 60.2 MBytes 505 Mbits/sec
    [  3] 7.0- 8.0 sec 59.9 MBytes 503 Mbits/sec
    [  3] 8.0- 9.0 sec 59.9 MBytes 503 Mbits/sec
    [  3] 9.0-10.0 sec 59.5 MBytes 499 Mbits/sec
    [  3] 0.0-10.0 sec 598 MBytes 501 Mbits/sec
    root@conn1 #
    Thoughts? Help!?
    ...jeff

    Hi,
    We are having issues with our T2000s as well. Same setup: T2000s back to back with a crossover connection. The best we are getting with a single iperf test is:
    # iperf -c 192.168.1.2 -i 1
    Client connecting to 192.168.1.2, TCP port 5001
    TCP window size: 2.00 MByte (default)
    [  3] local 192.168.1.1 port 52643 connected with 192.168.1.2 port 5001
    [ ID] Interval Transfer Bandwidth
    [  3] 0.0- 1.0 sec 27.7 MBytes 232 Mbits/sec
    [  3] 1.0- 2.0 sec 28.8 MBytes 242 Mbits/sec
    [  3] 2.0- 3.0 sec 30.3 MBytes 254 Mbits/sec
    [  3] 3.0- 4.0 sec 26.5 MBytes 222 Mbits/sec
    [  3] 4.0- 5.0 sec 24.3 MBytes 204 Mbits/sec
    [  3] 5.0- 6.0 sec 28.9 MBytes 242 Mbits/sec
    [  3] 6.0- 7.0 sec 28.7 MBytes 241 Mbits/sec
    [  3] 7.0- 8.0 sec 30.6 MBytes 257 Mbits/sec
    [  3] 8.0- 9.0 sec 30.8 MBytes 259 Mbits/sec
    [  3] 9.0-10.0 sec 26.8 MBytes 225 Mbits/sec
    [  3] 0.0-10.0 sec 283 MBytes 237 Mbits/sec
    [  4] local 192.168.1.2 port 5001 connected with 192.168.1.1 port 52643
    [ ID] Interval Transfer Bandwidth
    [  4] 0.0-10.0 sec 283 MBytes 238 Mbits/sec
    Did you ever get an answer to your questions below? Would you be willing to share what you found?
    What settings do you have to get the 500 Mbits/sec? Our servers are fairly well loaded, so we think it's just fighting for CPU with everything else. If you run many streams, performance improves a great deal; we've gotten ~800 Mbits/sec running 5 or more iperf runs at the same time. (A socket-buffer sizing sketch appears at the end of this thread.)
    If you run "intrstat" while you do the single or multiple iperf runs, what does your CPU peg out at? We get about 11% CPU usage from a single iperf run and 35% across 2 CPUs when we run many iperf runs at the same time.
    Thanks,
    -Matt
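
    One detail visible in both sets of iperf output above is the 48.0 KByte default TCP window. At gigabit speed a single stream is easily limited by small socket buffers plus the per-packet cost on a single Niagara thread, which also matches the observation that many parallel streams get much closer to line rate. iperf can request larger buffers with -w (for example "iperf -s -w 256k" and "iperf -c 192.168.1.215 -w 256k"), and an application can do the same thing itself; the sketch below shows only the socket-buffer part (the address and port are taken from this thread, and the 256 KB figure is illustrative). On Solaris 10 the achievable size is also capped by the ndd tunables on /dev/tcp (tcp_max_buf, tcp_xmit_hiwat, tcp_recv_hiwat), so raising the application's request only helps up to that limit.

    /* sockbuf.c - open a TCP connection with enlarged socket buffers, which is
     * essentially what iperf's -w option does.  The 192.168.1.215 address and
     * port 5001 come from this thread; the 256 KB figure is illustrative.
     * Build on Solaris: cc sockbuf.c -lsocket -lnsl -o sockbuf
     * (the -lsocket -lnsl pair is Solaris-specific; omit it on Linux/BSD)
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd, bufsize, actual;
        socklen_t len;
        struct sockaddr_in addr;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Ask for larger send/receive buffers *before* connect(), so window
         * scaling is negotiated accordingly during the TCP handshake. */
        bufsize = 256 * 1024;
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (char *)&bufsize, sizeof(bufsize)) < 0)
            perror("SO_SNDBUF");
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, (char *)&bufsize, sizeof(bufsize)) < 0)
            perror("SO_RCVBUF");

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5001);                 /* iperf's default port */
        addr.sin_addr.s_addr = inet_addr("192.168.1.215");

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        actual = 0;
        len = sizeof(actual);
        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, (char *)&actual, &len);
        printf("connected, send buffer is %d bytes\n", actual);

        close(fd);
        return 0;
    }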

  • Performance on Solaris 10 - Operating system paging

    Has anyone experienced performance issues after an upgrade from Solaris 9 to Solaris 10 on SAP systems with limited memory?
    We have many systems on servers with 4 GB of memory that ran well on Solaris 9. After an upgrade to Solaris 10 we are experiencing very high OS paging rates. The response times of the SAP systems are very poor when this occurs. It seems to take very little load to cause this.
    I realize that more memory, or decreases in the Oracle or SAP memory parameters, would solve this, but I am wondering if there is anything at the Solaris OS level that could resolve it?
    Thanks,
    Dan

    DISM can be used, but in the global zone only (according to the Sun document "Best Practices for Running Databases in Solaris Containers", the proc_lock_memory privilege, which is required to run the ora_dism_ process, is not available in non-global zones).
    The doc I have is from 2005, so I don't know if the Sun recommendations have been updated since then.
    In order to activate DISM (if you are in a global zone), sga_max_size should be set larger than the sum of the SGA components (db_cache_size, shared_pool_size, ...).
    Also look for the Sun Blueprint "Dynamic Reconfiguration and Oracle 9i Dynamic Resizable SGA" on http://www.sun.com/blueprints
    If you use ISM (because you are in a non-global zone), you can use the Oracle parameter lock_sga to ensure the SGA is loaded into RAM and useism_for_pga = true to ensure the PGA is loaded into RAM.
    Make sure you have enough RAM to hold the filesystem cache (OS memory), Oracle memory, and application memory.
    Make sure your PGA and SGA are correctly sized, since you won't be able to dynamically change the ISM allocation (see v$shared_pool_advice, v$db_cache_advice, v$pga_target_advice, ...).
    Take the usual precautions:
    - have a successfull backup first
    - do the change on a test machine
    - and/or ask your vendor before proceeding
    Other Doc to read ...
    Note 697483 - "Oracle Dynamic SGA on Solaris" (recommends reading Sun doc no. 230653)
    Note 724713 - parameter settings for Solaris 10; here is an extract:
    Only one parameter from SAP note 395438 should remain in the file /etc/system:
    set rlim_fd_cur=8192
    As described in SunSolve document 215536, the "Large Page Out Of the Box" (LPOOB) feature of the Solaris 10 memory management, first implemented in Solaris 10 1/06 (Update 1), can lead to performance problems when the system runs out of large chunks of free contiguous memory. This is because the Solaris 10 kernel by default will attempt to relocate memory pages to free up space for creating larger blocks of contiguous memory. Known symptoms are high %system CPU time in vmstat, high number of cross calls in mpstat, and Oracle calling mmap(2) to /dev/zero for getting more memory.
    Memory page relocation for satisfying large page allocation requests can be disabled by setting the following Solaris kernel parameter in /etc/system
    set pg_contig_disable=1
    This will not switch off the LPOOB feature. Large memory pages will still be used when enough free contiguous memory is available, so the benefits of this feature remain.
    Note 870652 - Installation of SAP in a Solaris Zone
    Note 1246022 - Support for SAP applications in Solaris Zones

  • My MacBook Pro Retina has suffered from poor (temperamental) Safari performance since purchase.

    Because I often use my iPad or iPhone at home, the Safari issue hasn't really bothered me until recently, and now, rather than just being temperamental, performance is just downright poor. It's taken well over 5 minutes to load ordinary BBC web pages on a 100 Mbit broadband connection. This has become typical, and now my Kaspersky for Mac security software can no longer update its database. I'm really annoyed at the prospect of having to get a new machine, as I ordered the 2.7 GHz model that took over 6 weeks to ship. Still within 6 months of my initial AppleCare.
    Any help would be appreciated.
    Thanks
    E

    Hello Poikkeus,
    Thanks for taking an interest and offering to help. Unfortunately, my Mac doesn't seem to be able to download any of the versions of iStat, citing that Safari can't find the server; no luck using Chrome either. Safari performance does improve after restarting the machine; however, I really don't want to restart my machine every time I use it. I had hoped that I could wake my machine from sleep after using it the previous day without any issues. Is this a reasonable assumption? Occasionally I experience poor performance on my work PC if I hibernate or log off rather than shut down; however, I expected better from my Mac.
    Furthermore, Kaspersky have just informed me that the error messages I receive when my security database fails to update indicate that there is a problem with my broadband connection. However, as I mentioned earlier, I have 100 Mbit broadband and experience no issues when using my work PC, iPad or iPhone.
    I'm guessing I'll have to follow your initial piece of advice and visit the Genius Bar to have them resolve the issue...
    Many Thanks
