Nvidia 5200, how is the performance?

Currently I have a Radeon 9500 Pro video card, and no matter which driver I use (Catalyst or open source), it freezes my computer dead. In Ubuntu this really only happened when I did something with OpenGL (even glxgears), but since I've switched to Arch it has happened even while loading KDE. Right now I don't even have an xorg.conf file, because that's the only way I can get my computer to load a GUI (I'm guessing it's defaulting to the vesa driver?).
The point is that my friend is bringing me an Nvidia 5200 video card and I'm going to use that, but I'm wondering how the performance is. I'm pretty sure it's going to render things faster than my vesa'd 9500 Pro, but I know my own card would actually be much faster if I could ever get it to work properly. Does the 5200 support compositing, for example, without being slow? Is having a slow video card going to hurt me in some way? (I do OpenGL programming, but only small things.) Also, my current card is hooked up to my LCD via DVI, but the 5200 only has VGA outputs; how much of a degradation in quality will I see on my LCD?
Thanks for the help.

A lot of people on the forums seem to be having trouble with ATI cards recently, so I'm glad I bought an Nvidia card to replace an old ATI Rage card a while ago. Apparently Nvidia and Linux are a tad more friendly with each other; I certainly had no problems setting up my card. The 5200 is a little older than my 6200, and I'm not sure what effect the VGA output is going to have, but I'm sure you'll be happy with the Nvidia card.
After a recent Nvidia driver upgrade I had to switch back to the nvidia-96xx driver for games to work properly, so I'd say you most likely will not be able to run the latest Nvidia driver.

Similar Messages

  • Gnome-shell with nouveau - how is the performance

    I have an old 8400GS and find gnome-shell's performance under nouveau pretty bad. Is it my card or the driver? If you have a more powerful card, how is the performance? I'm no gamer, but is there a more powerful (passively cooled) Nvidia card that would do better?

    Looks okay... I guess my 8400GS is just underpowered for this driver. I ordered a GTS 450, so we'll see.
    $ grep flip /var/log/Xorg.0.log
    [ 23641.419] (==) NOUVEAU(0): Page flipping enabled
    $ grep drm /var/log/Xorg.0.log
    [ 23641.418] drmOpenDevice: node name is /dev/dri/card0
    [ 23641.418] drmOpenDevice: open result is 7, (OK)
    [ 23641.418] drmOpenByBusid: Searching for BusID pci:0000:01:00.0
    [ 23641.418] drmOpenDevice: node name is /dev/dri/card0
    [ 23641.418] drmOpenDevice: open result is 7, (OK)
    [ 23641.419] drmOpenByBusid: drmOpenMinor returns 7
    [ 23641.419] drmOpenByBusid: drmGetBusid reports pci:0000:01:00.0
    [ 23641.419] (II) [drm] nouveau interface version: 0.0.16
    [ 23641.419] drmOpenDevice: node name is /dev/dri/card0
    [ 23641.419] drmOpenDevice: open result is 8, (OK)
    [ 23641.419] drmOpenDevice: node name is /dev/dri/card0
    [ 23641.419] drmOpenDevice: open result is 8, (OK)
    [ 23641.419] drmOpenByBusid: Searching for BusID pci:0000:01:00.0
    [ 23641.419] drmOpenDevice: node name is /dev/dri/card0
    [ 23641.419] drmOpenDevice: open result is 8, (OK)
    [ 23641.419] drmOpenByBusid: drmOpenMinor returns 8
    [ 23641.419] drmOpenByBusid: drmGetBusid reports pci:0000:01:00.0
    [ 23641.419] (II) [drm] DRM interface version 1.4
    [ 23641.419] (II) [drm] DRM open master succeeded.
    $ dmesg | grep drm
    [ 0.651085] [drm] Initialized drm 1.1.0 20060810
    [ 0.661177] [drm] nouveau 0000:01:00.0: Detected an NV50 generation card (0x298200a2)
    [ 0.664916] [drm] nouveau 0000:01:00.0: Attempting to load BIOS image from PRAMIN
    [ 0.710278] [drm] nouveau 0000:01:00.0: ... appears to be valid
    [ 0.710280] [drm] nouveau 0000:01:00.0: BIT BIOS found
    [ 0.710281] [drm] nouveau 0000:01:00.0: Bios version 62.98.47.00
    [ 0.710283] [drm] nouveau 0000:01:00.0: TMDS table version 2.0
    [ 0.710284] [drm] nouveau 0000:01:00.0: Found Display Configuration Block version 4.0
    [ 0.710286] [drm] nouveau 0000:01:00.0: Raw DCB entry 0: 02000300 00000028
    [ 0.710287] [drm] nouveau 0000:01:00.0: Raw DCB entry 1: 01000302 00020030
    [ 0.710289] [drm] nouveau 0000:01:00.0: Raw DCB entry 2: 04011310 00000028
    [ 0.710290] [drm] nouveau 0000:01:00.0: Raw DCB entry 3: 010223f1 00c0c080
    [ 0.710292] [drm] nouveau 0000:01:00.0: DCB connector table: VHER 0x40 5 16 4
    [ 0.710294] [drm] nouveau 0000:01:00.0: 0: 0x00001030: type 0x30 idx 0 tag 0x07
    [ 0.710296] [drm] nouveau 0000:01:00.0: 1: 0x00000200: type 0x00 idx 1 tag 0xff
    [ 0.710297] [drm] nouveau 0000:01:00.0: 2: 0x00000110: type 0x10 idx 2 tag 0xff
    [ 0.710299] [drm] nouveau 0000:01:00.0: 3: 0x00000111: type 0x11 idx 3 tag 0xff
    [ 0.710300] [drm] nouveau 0000:01:00.0: 4: 0x00000113: type 0x13 idx 4 tag 0xff
    [ 0.710303] [drm] nouveau 0000:01:00.0: Parsing VBIOS init table 0 at offset 0xD710
    [ 0.735573] [drm] nouveau 0000:01:00.0: Parsing VBIOS init table 1 at offset 0xDAB5
    [ 0.741906] [drm] nouveau 0000:01:00.0: Parsing VBIOS init table 2 at offset 0xE364
    [ 0.741911] [drm] nouveau 0000:01:00.0: Parsing VBIOS init table 3 at offset 0xE456
    [ 0.742976] [drm] nouveau 0000:01:00.0: Parsing VBIOS init table 4 at offset 0xE655
    [ 0.742977] [drm] nouveau 0000:01:00.0: Parsing VBIOS init table at offset 0xE6BA
    [ 0.762981] [drm] nouveau 0000:01:00.0: 0xE6BA: Condition still not met after 20ms, skipping following opcodes
    [ 0.762989] [drm] nouveau 0000:01:00.0: timingset 255 does not exist
    [ 1.160045] [drm] nouveau 0000:01:00.0: 1 available performance level(s)
    [ 1.160047] [drm] nouveau 0000:01:00.0: 3: memory 400MHz core 567MHz shader 1400MHz fanspeed 100%
    [ 1.160055] [drm] nouveau 0000:01:00.0: Register 0x00004030 not found in PLL limits table
    [ 1.160090] [drm] nouveau 0000:01:00.0: c: memory 399MHz core 566MHz shader 1400MHz
    [ 1.160209] [drm] nouveau 0000:01:00.0: Detected 512MiB VRAM
    [ 1.161277] [drm] nouveau 0000:01:00.0: 512 MiB GART (aperture)
    [ 1.173455] [drm] nouveau 0000:01:00.0: DCB encoder 1 unknown
    [ 1.173457] [drm] nouveau 0000:01:00.0: TV-1 has no encoders, removing
    [ 1.174154] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
    [ 1.174155] [drm] No driver support for vblank timestamp query.
    [ 1.428713] [drm] nouveau 0000:01:00.0: allocated 1920x1080 fb: 0x40000000, bo ffff880221feec00
    [ 1.483825] drm: registered panic notifier
    [ 1.483828] [drm] Initialized nouveau 0.0.16 20090420 for 0000:01:00.0 on minor 0

  • ADF 10g on JBoss, how is the performance & stability?

    Hi All,
    We are considering using JBoss for our ADF 10g application. Is anybody here using it? How is the performance and stability?
    Thank you,
    xtanto

  • How is the performance of a Maxtor 300GB 16MB SATA drive on the K8N Neo2?

    How is the performance of a Maxtor 300GB 16MB SATA drive on the K8N Neo2? What BIOS should I use for it, and do SATA ports 3 & 4 work with an overclock above 240?
    Do I have to update to SP2 to detect all the space?

    I think it's this one: "New beta bios for s939 K8N Neo2 Platinum added (7025NMS.151)" - see syar's comment in the next post. If there are any more recent betas than that, try them; the link to syar's download page is in the thread.
    Don't know - check the threads.
    Don't think so - check the threads.
    Good luck.

  • How is the performance of a Mac Pro if I use it as a host for Windows and Linux virtual machines?

    How is the performance of a Mac Pro if I use it as a host for Windows and Linux virtual machines?
    I am planning to buy a high-performance PC to run my Windows and Linux servers as virtual machines for testing purposes.
    Initially I planned to build my own computer with the recommended configuration, but considering space constraints and cooling factors I think a Mac Pro could be a choice. I need some input on whether a Mac Pro (Intel Xeon E5, 12 GB RAM) is good for running virtual machines.

    You could even run Windows natively and still run your VM servers.
    I have seen reports on MacRumors and elsewhere of people doing exactly that - running Windows natively as well as VMs (you can also do testing and run Mavericks in a VM under Mavericks).
    The fast internal PCIe SSD plus 6 or 8 cores and 32-64GB of RAM helps. Of course, for the roughly $5,000 that an 8-core with some Thunderbolt storage and 32/64GB of RAM costs, you can buy some serious hardware.

  • How is the performance for HashMap.get()

    Hi all,
    I just want to ask you guys how quickly HashMap.get() runs for a given key if there are thousands of entries in the HashMap. Thanks.

    If you want to know how something is going to perform, write a tiny program and test it. Either put it on a profiler, or just put time markers around a big loop. Here's something that somebody (UJ, I think) came up with recently for comparing casting vs. toString after extracting from a Map. It should be easy to adapt to just the Map.get() timing; in fact, I think I added a chunk to do just that, as a control. Knock yourself out:
    import java.util.*;

    // Run with e.g.: java MapTiming 100000 42
    public class MapTiming {
        Random rnd = new Random(13L);

        public static void main(String[] args) {
            int numEntries = Integer.parseInt(args[0]);
            Integer key = Integer.valueOf(args[1]);
            new MapTiming().zillion2(numEntries, key);
        }

        void zillion2(int numEntries, Integer key) {
            // Raw HashMap on purpose, so get() returns Object and the cast vs. toString comparison makes sense.
            HashMap map = new HashMap();
            int ix;
            for (ix = 0; ix < numEntries; ix++) {
                int entry = rnd.nextInt();
                map.put(new Integer(entry), String.valueOf(ix));
            }
            map.put(key, String.valueOf(ix));

            final int LOOPS = 10000000;
            final int TIMES = 5;
            long time;

            // get() followed by toString()
            for (int j = 0; j < TIMES; j++) {
                time = System.currentTimeMillis();
                for (int i = 0; i < LOOPS; i++) {
                    String item = map.get(key).toString();
                }
                System.out.println("toString= " + (System.currentTimeMillis() - time));
            }
            // get() followed by a cast
            for (int j = 0; j < TIMES; j++) {
                time = System.currentTimeMillis();
                for (int i = 0; i < LOOPS; i++) {
                    String item = (String) map.get(key);
                }
                System.out.println("cast= " + (System.currentTimeMillis() - time));
            }
            // plain get() as a control
            for (int j = 0; j < TIMES; j++) {
                time = System.currentTimeMillis();
                for (int i = 0; i < LOOPS; i++) {
                    Object obj = map.get(key);
                }
                System.out.println("get= " + (System.currentTimeMillis() - time));
            }
            // the same three timing loops repeated (the second pass runs with the JIT warmed up)
            for (int j = 0; j < TIMES; j++) {
                time = System.currentTimeMillis();
                for (int i = 0; i < LOOPS; i++) {
                    String item = map.get(key).toString();
                }
                System.out.println("toString= " + (System.currentTimeMillis() - time));
            }
            for (int j = 0; j < TIMES; j++) {
                time = System.currentTimeMillis();
                for (int i = 0; i < LOOPS; i++) {
                    String item = (String) map.get(key);
                }
                System.out.println("cast= " + (System.currentTimeMillis() - time));
            }
            for (int j = 0; j < TIMES; j++) {
                time = System.currentTimeMillis();
                for (int i = 0; i < LOOPS; i++) {
                    Object obj = map.get(key);
                }
                System.out.println("get= " + (System.currentTimeMillis() - time));
            }
        }
    }

  • In SAP BW on HANA, how exactly is the performance improved?

    By replacing the Oracle DB with HANA, are we installing new processors as well? Or is the performance gain purely because of columnar storage?

    Hi Mohd,
    This is a good question. Just replacing the DB, or the columnar store alone, doesn't help.
    If HANA were just an in-memory database, then anyone could simply add more memory to their existing DB; and as for the columnar store, most databases already offer one (like IBM DB2 with BLU Acceleration, MS SQL Server columnstore, etc.).
    It is a combination of factors: the HANA DB is optimised for both read and write operations, together with columnar storage and data compression. With HANA, SAP is trying to push most of the processing down to the database.
    All new BW modeling techniques (ADSO or the new Composite Provider) will work only with HANA DB.
    General points on why BW on HANA is more efficient:
    1) No DIM IDs in cubes
    2) Change log tables can be converted into views (optional)
    3) DSO activation pushed down to the DB
    4) Transformations pushed down to the DB if there are no ABAP routines
    5) No need for indexes on cubes
    6) Aggregates are no longer required
    Thanks,
    Shakthi Raj Natarajan.

  • How about the performance of AIR 2.6 in iOS/Android?

    Has anyone tested the performance improvement of AIR 2.6 on iOS/Android? Could you share the results? It would be great if you could also release your test code. Updating my AIR 2.5 setup in Flash CS5 is giving me a headache.

    Today I set up the AIR 2.6 environment to package my SWF with Flash CS5. It gave me a big surprise: the performance became worse. Oh my god, who can save me?
    Command
    "C:\Program Files (x86)\Java\jre6\bin\java" -jar "I:\Software\Adobe\AdobeAIRSDK2.6\lib\adt.jar" -package -target ipa-ad-hoc -storetype pkcs12 -keystore "F:\Work\Projects\Flash\packages\iOS_Certification\p12.p12" -storepass 1234 -provisioning-profile "F:\Work\Projects\Flash\packages\iOS_Certification\mobileprovision.mobileprovision" A.ipa A-app.xml A.swf icon29.png icon57.png icon512.png icon48_iPad.png icon72_iPad.png
    A-app.xml
      <initialWindow>
        <content>A.swf</content>
        <systemChrome>standard</systemChrome>
        <transparent>false</transparent>
        <visible>true</visible>
        <fullScreen>false</fullScreen>
        <aspectRatio>landscape</aspectRatio>
        <renderMode>gpu</renderMode>
        <autoOrients>false</autoOrients>
      </initialWindow>
      <icon>
    ...  </icon>
      <customUpdateUI>false</customUpdateUI>
      <allowBrowserInvocation>false</allowBrowserInvocation>
      <iPhone>
        <InfoAdditions>
          <![CDATA[<key>UIDeviceFamily</key><array><string>2</string></array>]]>
        </InfoAdditions>
      </iPhone>

  • How is the performance of Oracle Teradata Gateway?

    Hi,
    We need to speed up the Teradata extraction and load into Oracle tables on HP-UX. The total volume tops out at about 200GB. Will using the Oracle Teradata Gateway help data transfer and load performance? Currently we download from Teradata first and then load using SQL*Loader.
    Thanks.
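
    For reference, with the gateway the two-step download-then-SQL*Loader flow would collapse into a single insert-as-select over a database link; the link and table names below are placeholders only, not a tested setup:
    -- Hypothetical database link "tera_link" defined through the Oracle Database Gateway for Teradata
    INSERT /*+ APPEND */ INTO stage_orders
    SELECT * FROM orders@tera_link;
    Whether that actually beats the SQL*Loader approach for 200GB depends heavily on the network and gateway configuration, so it would have to be benchmarked.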

  • Does anyone know how well the Intel Iris Pro installed on new 15" MacBook Pros performs using Photoshop and Lightroom?

    Does anyone know how well the Intel Iris Pro installed on new 15" MacBook Pros performs using Photoshop and Lightroom?  I have seen some differing opinions out there, and I would rather not shell out the extra cash for the Nvidia if I don't have to. I mostly do photo editing for business and personal use. I have not used the 3D function in Photoshop, but I would like to know that I could.

    You could download a trial and see how well it works before committing to a subscription. You get 30 days to decide.
    Photo editor | Download free Adobe Photoshop CC trial
    Photo editor app | Download free Adobe Photoshop Lightroom 5 trial
    Gene

  • Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube

    Hi BW Gurus,
    I have an unresolved issue and our team is still working on it.
    I have already posted several questions on this, but it is still not clear how to reduce the time of the Rollup of Aggregates process.
    I have requested an OSS note and am searching myself, but still could not find one.
    Finally, I executed one of the cubes in RSRV with the database check
    "Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the error and executed it once again, but I still get the warnings. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
    I don't know how to move further on this. Can anyone tell me how to tackle this problem and increase the performance of the Rollup of Aggregates (PCA InfoCubes)?
    I regularly create indexes and statistics to improve the performance; it works for a couple of days, and then the performance of the rollup of aggregates gradually comes down again.
    Thanks and Regards,
    Venkat

    Hi,
    Check in a SQL client the SQL created by BI against the query you run directly on your physical layer.
    The difference between these two should be 2-3 seconds at most, otherwise you have a problem (those seconds are for the scripts that BI needs).
    If you use "like" in your SQL, then forget about indexes...
    For more information about indexes, check Google or your DBA.
    Lastly, I mentioned that the materialized view is not perfect, although it helps a lot... so why not try to split it into smaller ones?
    For example, with logical dimensions
    year - half - day
    company - department
    and a fact measure
    quantity
    instead of making one, make three:
    year - department - quantity
    half - department - quantity
    day - department - quantity
    and add them as data sources and assign them the appropriate logical level in the business layer in the Administration Tool (a rough SQL sketch of the idea follows).
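    A minimal sketch of what those three summary objects could look like; all table and column names are made up for illustration, assuming a fact table fact_sales(day_id, department_id, quantity) and a time dimension dim_time(day_id, half_id, year_id):
    CREATE MATERIALIZED VIEW mv_qty_year_dept AS
      SELECT t.year_id, f.department_id, SUM(f.quantity) AS quantity
      FROM   fact_sales f JOIN dim_time t ON t.day_id = f.day_id
      GROUP  BY t.year_id, f.department_id;

    CREATE MATERIALIZED VIEW mv_qty_half_dept AS
      SELECT t.half_id, f.department_id, SUM(f.quantity) AS quantity
      FROM   fact_sales f JOIN dim_time t ON t.day_id = f.day_id
      GROUP  BY t.half_id, f.department_id;

    CREATE MATERIALIZED VIEW mv_qty_day_dept AS
      SELECT f.day_id, f.department_id, SUM(f.quantity) AS quantity
      FROM   fact_sales f
      GROUP  BY f.day_id, f.department_id;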
    Do you use the partitioning functionality?
    I hope I helped...
    http://greekoraclebi.blogspot.com/
    ///////////////////////////////////////

  • How to Improve the Performance of SQL Server and/or the hardware it resides on?

    There's a particular stored procedure I call from my ASP.NET 4.0 Web Forms app that generates the data for a report.  Using SQL Server Management Studio, I did some benchmarking today and found some interesting results:
    FYI SQL Server Express 2014 and the same DB reside on both computers involved with the test:
    My laptop is a 3 year old i7 computer with 8GB of RAM.  It's fine but one would no longer consider it a "speed demon" compared to what's available today.  The query consistently took 30 - 33 seconds.
    My client's server has an Intel Xeon 5670 Processor and 12GB of RAM.  That seems like pretty good specs.  However, the query consistently took between 120 - 135 seconds to complete ... about 4 times what my laptop did!
    I was very surprised by how slow the server was.  Considering that it's also set to host IIS to run my web app, this is a major concern for me.   
    If you were in my shoes, what would be the top 3 - 5 things you'd recommend looking at on the server and/or SQL Server to try to boost its performance?
    Robert

    What else runs on the server besides IIS and SQL Server? Is it used for anything other than the database and IIS?
    Is IIS causing a lot of I/O or CPU usage?
    Is there a max memory limit set for SQL Server? There SHOULD be, and since you're running IIS too you need to keep more memory free for that as well.
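    (If no limit is set, capping it is a single sp_configure call; the 8192 MB below is only an example - size it to leave headroom for IIS and the OS:)
    EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;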
    How is the memory pressure? Check the PLE counter and post the results:
    SELECT [cntr_value] FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
    Check the error log and the Event Viewer; maybe there is something bad there.
    Check the indexes for fragmentation and see if the statistics are up to date (and enable trace flag 2371 if you have large tables, > 1 million rows).
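    (Something like this can spot the worst offenders - the 30% / 1000-page thresholds are only rules of thumb:)
    SELECT OBJECT_NAME(ips.object_id) AS table_name, i.name AS index_name,
           ips.avg_fragmentation_in_percent, ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30 AND ips.page_count > 1000
    ORDER BY ips.avg_fragmentation_in_percent DESC;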
    Is there an antivirus present on the server? Do you have the SQL Server processes/services/directories set as exceptions?
    There are a lot of unknowns; you should at least run Profiler and post the results to see what goes on while you're getting the slow responses.
    "If there's nothing wrong with me, maybe there's something wrong with the universe!"

  • How to improve the performance of adobe forms

    Hi,
    Please give me some suggestions on how to improve the performance of an Adobe form.
    Right now, when I am triggering user events, it works fine for the first 6 or 7 events; from the next one onward it hangs.
    I read about the wizard form design approach - how can I use it here?
    Thanks,
    Aravind

    Hi Otto,
    The form is created using HCM Forms and Processes, and I am triggering user events in the form.
    A user event does a round trip in which the form data is sent to the backend SAP system, processing happens on the ABAP side, and the result appears on the form. The first 6 or 7 user events work correctly and the result appears on the form; around the 8th or 9th one, the wait symbol appears and the form is not re-rendered. The form is 6 pages long; the issue does not occur with a 1-page form.
    I was reading about ways to improve performance during re-rendering, given below:
    http://www.adobe.com/devnet/livecycle/articles/DynamicInteractiveFormPerformance.pdf
    It talks about the wizard form design approach, but in the SFP transaction I am not seeing any kind of wizard.
    Let me know if you need further details.
    Thanks,
    Aravind

  • How to measure the performance of a SQL query?

    Hi Experts,
    How do I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I tell whether I am writing an optimal query?
    I am using Oracle 9i...
    It will be useful for me to be able to write efficient queries...
    Thanks & Regards

    psram wrote:
    Hi Experts,
    How do I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I tell whether I am writing an optimal query?
    I am using Oracle 9i...
    You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
    This gives you an indication of the effectiveness of your statement, so you can check how many logical I/Os (and physical reads) had to be performed.
    Note, however, that there are more things to consider, as you've already mentioned: the CPU part is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is also only credited in a very limited way (number of sorts); for example, it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk.
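    In SQL*Plus that boils down to something like this (just a sketch; the statistics part needs the PLUSTRACE role or access to the underlying V$ views):
    SET AUTOTRACE TRACEONLY
    -- run your query here; rows are fetched but not displayed
    SET AUTOTRACE OFF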
    You can use the following approach to get a deeper understanding of the operations performed by each row source:
    alter session set statistics_level=all;
    alter session set timed_statistics = true;
    select /* findme */ ... <your query here>
    SELECT
             SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
             OBJECT_NAME,
             CARDINALITY,
             LAST_OUTPUT_ROWS,
             LAST_CR_BUFFER_GETS,
             LAST_DISK_READS,
             LAST_DISK_WRITES
    FROM     V$SQL_PLAN_STATISTICS_ALL P,
             (SELECT *
              FROM   (SELECT   *
                      FROM     V$SQL
                      WHERE    SQL_TEXT LIKE '%findme%'
                               AND SQL_TEXT NOT LIKE '%V$SQL%'
                               AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
                      ORDER BY LAST_LOAD_TIME DESC)
              WHERE  ROWNUM < 2) S
    WHERE    S.HASH_VALUE = P.HASH_VALUE
             AND S.CHILD_NUMBER = P.CHILD_NUMBER
    ORDER BY ID
    /
    Check the V$SQL_PLAN_STATISTICS_ALL view for more statistics available. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
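    (For reference, on 10g that single call would look roughly like this; it is not available on 9i:)
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));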
    Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
    http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
    http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How can I fine-tune the performance of my IMS 5.1 mail server?

    I installed IMS 5.1 on Solaris 8 with default parameters, without IDA. It is used as a mail relay. It seems to keep about 700 msgs in the tcp_local channel but none in the process channel. It uses the CPU very heavily, in my opinion too much (100% is no exception). It uses only 30% of the swap file. How can I tune the performance of my system? Don't laugh: the "server" is only a Sun Ultra 5 workstation.

    I've been working with this MTA since '95. Unfortunately there is no easy answer. The number of msgs in queue is not an indication of performance, it can be, but it can also be that the hosts your system is trying to reach are not available. You can use tools like imsimta qtop to see top subjects or top domains. Poke around and see just why you have 700 msgs in your queues.
    Channels like process or, say, the conversion channel are internal, while channels like tcp_local deal with external systems. If you had mail backing up in the conversion channel then you'd have a good sign of local performance problems. Mail backing up in tcp_local is not necessarily a sign of performance problems on your end.
    I don't see a problem with the software using all available CPU. What is wrong with that?
    If you've made any changes to the configuration, it could be that you have introduced something that is causing, say, a mapping process to loop and thus eat more CPU than would otherwise be normal.
    What process is using all of this CPU? Knowing this would help to determine what part of the MTA might be using lots of CPU.
