Opteron vs Athlon 64 in Raw Power terms

Hi, guys.
I've just had my first experience dealing with an Opteron.
I bought an Opteron 246, a server mobo (not MSI, it's a Tyan; I'm not sure if the MSI Master series is available here), ECC Registered RAM... phew, the price is very different from desktop PC components...
But that's not the problem (it's for my client anyway, not for my daily use), and there's actually no problem at all; I just want to ask some questions.
The questions are (this is about CPU raw power, not desktop vs. server mobo features, multi-CPU support, ECC Registered vs. standard RAM, etc.):
1. With the same core (and same cache, etc.), does the Opteron have an advantage over the Athlon 64?
2. Is the Opteron faster, and/or does it have some additional structure/code optimized for server tasks?
3. If I build on Socket 939 (there are some Socket 939 Opterons on the market), does the Opteron have an advantage over the Athlon 64 (same core, same cache, etc.)?
4. Which will perform faster: dual CPUs or a dual-core CPU? (Don't say dual dual-core CPUs...) Err... for AMD; I don't care about the Xeon.
Thanks!

Thanks. Emm, yes, it's going to be a pure server, running SQL Server 2000 SP4 on Windows Server 2003.
My friend reported (from his client) that the Opteron needs patches for some applications, and that a Xeon setup is easier in terms of software compatibility.
Well, I doubt it ("easier for a Xeon setup"?!)... but if it's true, it could lead to problems in the future (there's no admin to handle this server; I set it up, and they won't touch it).
Anyway, for now it's only using one CPU (a 246). If I upgrade to dual, does it need to be from the same batch (I know it needs another 246)? My friend said Xeons have some issues with mixed batches.
Thanks, guys.
nb: It would be wonderful to have a dual dual-core Opteron 280 for my workstation... I wonder if the price is below $1000... but it would be overkill anyway.

Similar Messages

  • Why doesn't Preview always show power terms (m^2 etc.) in a PDF?

    There is a clear gap between the letters, but the number is missing.
    Like this one
    Between m and K there should be a 2, but it has vanished for some reason. If I look at the same document on a PC, the power term is in its right place.
    But sometimes it works. Like in this one:
    This is from the same document but a different page.
    And if a power term is missing from a formula, the answer will be wrong.
    I cannot use this kind of unreliable software. How can I fix this?
    (The whole document is here http://www.ym.fi/download/noname/%7B8C5C3B41-E127-4889-95B0-285E9223DEE6%7D/4046 8 )
    I'm using an iMac with OS X Yosemite 10.10.3.
    Has anyone noticed the same problem?

    Thank you for your reply.
    Indeed, I tried saving a PDF that had the "default appearance", and it shows exactly the same. So it looks like it "can't" show the image.
    However, of course, I do want the image to be shown. How can I achieve this? How can I tell Adobe Reader (and other PDF viewers) that I really want to see that image there? I believe I'm not doing anything really exotic, right? There must have been tons of other people who wanted to do the same thing.
    var annotation = new PDFLinkAnnotation
    {
        Action = new PDFUriAction(this.GetCodeUrl(code)),
        Contents = AnnotationIdentifier,
        Appearance = new PDFImageAppearance(bitmapImage),
        DisplayRectangle = new DisplayRectangle(
            barcodeLocationInfo.X,
            barcodeLocationInfo.Y,
            barcodeLocationInfo.Tsw,
            barcodeLocationInfo.Tsh),
        Flags = Flags
    };
    pdfPage.Annotations.Add(annotation);

  • Raw Search Terms for "Spotlight Comment" and "Locked" Status

    Is it possible to do a raw search for a specific Spotlight Comment? I know the option exists in the Other… dialog box, but I need to find files that do not contain comments, which is not a choice there.
    Also, is there any way to find only files that are unlocked?

    If you go to Others and select Raw Query, you can use it to find items with a specific comment, and items without any comment. For example, if you have the comment "testme" in the Spotlight field you would paste this into the Raw Query field:
    kMDItemFinderComment == "testme"
    and all files with that comment would be found. To find files in a folder that contain NO comment at all, you would use this:
    kMDItemFinderComment != "*"
    Be warned though: when I tried this on a folder with only 5 items in it, 4 with comments and one without, it worked, returning the lone uncommented file, but it took an amazingly long time to do so (perhaps a minute, while the search for "testme" was instantaneous).
    The "File readable" and "File writeable" functions shown in Others, which presumably are what one would use to find locked files, simply do not work. This is not surprising since the information about whether a file is locked or not does not in fact appear in a file's metadata. I guess that will be a future addition. You can use find in Terminal to search for files that are locked/unlocked. Let me know if you really want to do this.
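    The same raw queries can also be scripted via the `mdfind` command-line tool, and the Finder lock can be read from a file's BSD flags. A minimal Python sketch of both ideas (the helper names are mine, and `st_flags` from `os.stat` is only populated on macOS):

```python
import stat
import subprocess

def mdfind_command(query, folder=None):
    """Build an mdfind invocation for a raw Spotlight query."""
    cmd = ["mdfind", query]
    if folder is not None:
        cmd[1:1] = ["-onlyin", folder]  # restrict the search to one folder
    return cmd

def is_locked(st_flags):
    """True if the BSD 'user immutable' flag (the Finder Lock) is set."""
    return bool(st_flags & stat.UF_IMMUTABLE)

def find_uncommented(folder):
    """Return paths in `folder` that have no Spotlight comment at all."""
    cmd = mdfind_command('kMDItemFinderComment != "*"', folder)
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.splitlines()
```

    Running `mdfind -onlyin ~/Documents 'kMDItemFinderComment == "testme"'` in Terminal does the same thing as the Raw Query field; the lock check mirrors the `uchg` flag that `ls -lO` displays.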
    Francine Schwieder

  • How much Raw Power do I need?

    I'm not sure when I'm going to buy a Mac Pro but I hope its by the end of the year. I am not too familiar with computer specs and I usually go by the numbers.
    So I'm not sure what Mac Pro configurations I need. The lab in my school has Mac Pros (3.0 GHz) and that's what we'll use. I'm not sure how much RAM it has.
    My major is 3D graphics and animation. We use Maya, Photoshop, Painter IX, Flash, and a few other software. But I'm not sure how much power I really need. Same with the graphics card although that can be upgraded. And I think 2GB of RAM will be good.
    So yeah, help me out please. Thanks in advance.

    Given that the end of the year is over 11 months away, you might want to ask this closer to the time you actually want to buy. The reason is simple: both hardware and software will be different, so the answers you get now may not apply when you buy.

  • Is RAM More Critical Than Raw Power

    Hello! At the moment i'm running a 2.26 GHz MBP with 8 GB DDR3 RAM at 1067.
    A basic (no titles or fancy stuff) three-minute video takes 15 minutes to render. I plan to do more extensive stuff in the near future: 15-minute news items, mini documentaries, half-hour shows, that type of thing.
    I am looking at the new i7 Quad core MBPs with 16 GB RAM, as against a 12-core Mac Pro with 12 GB. We're not talking cost, here, but productivity.
    How do you think these machines, in "bare-bones" state, will stack up against each other right out of the box? Will more RAM win out over speed?
    Thanx!

    VRAM is the most important thing i.e. your graphics card, followed by CPU and then followed by amount of RAM. Basically you want the most powerful system you can afford.

  • Choosing the Right Power Supply

    First things first. If you've got a poor-quality and/or faulty power supply, nothing else you do will solve your problems. Stick to the basics before you go further... The short answer is to buy a high-powered, brand-name supply, like the new ENERMAX line (430 W or higher) or the ANTEC True550. Almost nothing else will do with today's computers. In over 30 years of electronic/computer service, I have found that 85% or more of problems were power-related.
    If you want to know more, read on...
    Choosing The Right Power Supply
    If you’re reading this, there’s a good chance that one of my colleagues or I believe that you could be experiencing problems with your power supply, based upon the symptoms you mentioned in your post, and provided you with this link. Relax, you’re not alone. In 30 years of electronic and computer troubleshooting, I’d say that the majority of the electronic, mainframe, mini, and microcomputer problems I’ve diagnosed and repaired were with the basic power the problematic device was receiving. The symptoms often included random reboots, crashing, the BSOD, lockups, etc.
    (As the national support technician for a few major computer service companies, working US Defense contracts, I was often the person who had to fly in and correct the problem, or "walk through" the on-site technician as he closely followed my instructions. I achieved success in my career by carefully reading the manuals, knowing where to go for information that was otherwise unavailable to me, and/or systematically troubleshooting until the problems were discovered and repaired. I never had the option of giving up.)
    The most overlooked component when building or upgrading a PC is the power supply unit (PSU). Some people use their old case and PSU when they upgrade. Some use the PSU that came with their new case. Some people even buy a new PSU. And most inexperienced builders all make the same mistake: The PSU that they’re trying to use is simply inadequate for the job.
    Suppose you’re upgrading to a new motherboard, CPU, ram, and video card, but still using the old case and PSU. It’s most likely that you’re upgrading in order to build a machine that is more powerful, faster, has a more colorful display, can number-crunch more quickly, play the latest games, etc. These gains in performance all have one thing in common: They require more raw power. However, have you thought about where that power comes from?
    Suppose you’re building a new system with a new case and PSU. Has it occurred to you that the company that you bought the case/PSU from might make more money if they skimp on the supply, even if the supply has a large wattage rating? Most bulk power supply manufacturers don’t make good PSU’s. They use older, cheaper technology, and slap on labels that represent the PSU’s peak outputs, and not their continuous output rating. These companies are intentionally misleading you in order to sell you an inferior product. Brands I avoid when building/repairing my friends’ and family’s computers: Allied, Q-Tec, Chieftech, and many others.
    For those of you who bought a power supply separately, did you know that you’re only supposed to run a power supply continuously at 30-70% (with 50% being optimal) of its continuous rating for maximum efficiency (which means less heat to you)? Most inexperienced builders either buy PSU’s that are matched to their equipment’s continuous power usage, or ones that are even less powerful than they need. Why? Because they’re trying to save money.
    I mean, what’s the fun in a power supply? You don’t get any games with it, there’s no more storage, hardly ever any more bells and whistles, etc. A power supply is boring, and it’s supposed to be, because it’s supposed to provide a stable, reliable platform from which the rest of the equipment can easily draw the amount of power it needs, when it’s needed. In almost EVERY review of power supplies, the same point is stressed: better safe than sorry.
    But what does safe vs sorry mean? It can mean that you don’t have to waste money on the wrong PSU in the first place, but it can also mean that you don’t have to replace your expensive ram, CPU, video card, etc. NEEDLESSLY, or because your cheap PSU destroyed them. What? A cheap power supply can wreck your computer? YES IT CAN. A cheap power supply can cause thermal damage, not only from the heat it produces, but also the heat it can create in your components as well. RAM is especially sensitive to heat, and there’s RAM in your CPU, your video cards, and, well, your RAM too. A cheap switching power supply, run at its maximum, or peak, continuously can also destroy components by creating RF (Radio Frequency) signals on your power rails, signals which the components on your peripheral devices were not equipped to handle in the first place.
    So this begs the question, how does one choose the right power supply? I’ll illustrate this using my own PC as the example. This is my setup that I use for video processing:
    K7N2G-ILSR
    Athlon 2500+ Barton @ 2125Mhz
    AMD Retail Heatsink/Fan
    2 - 512MB DDR333 w/Thermaltake Spreaders (slot 1&3)
    MSI TV@nywhere Video Capture
    ATI Radeon 9600
    120GB Maxtor DiamondMax Plus 9 SATA
    30GB Quantum IDE
    TEAC DV-W50E DVD/CD-R/W
    BTC DVD-ROM Drive
    Artec CD-R/W
    Using this Power Supply Calculator link:
    http://www.jscustompcs.com/power_supply/
    I plug in all my equipment values, but some of this can be a little tricky. For example, since I often run the CPU like an XP 3000, I choose the 3000 as my processor; it’s the same chip run at the faster rate. I also choose the ATI Radeon video card, and I select the RAM wattage for 2 sticks of DDR. I also choose every card I have, like my video capture card, but I also select the boxes for the separate cards that correspond to the functions that my ILSR provides as well (and that I use), like sound, USB, Firewire, NIC, etc.  Although I use the onboard SATA controller, I don’t select the SCSI PCI card, because, in truth, I’ve probably made up for it by selecting all the other corresponding devices, including cards that the motherboard replaces. I check the boxes for the fans and drives I use, and I’m done, right?
    Not yet.
    I just remembered that I plan to upgrade soon, so I go back and change the values to reflect my impending changes. I mean, I want to make sure that I have enough power to begin with so that I don’t have to replace the power supply again, right?
    Ok. Done. I look at the bottom and see that it tells me that I need a 468 watt PSU. So a 480 watt supply will do, right? Wrong.
    Remember that, for efficiency, long life, and less heat, you want your actual power consumption to fall between 30-70% of the PSU’s rating, so add 30% (minimum) to the 468, and you get 468 + (468 × 0.30) ≈ 608 Watts! Holy Cow!
    However, I’d only need a 608-Watt supply if I were using all the devices at once, and I don’t. But, in truth, with video and audio processing, I often get close when I process, burn, and monitor at the same time. (Hardcore gamers also get close a lot, as they blast the sound and push that video to its limits.) So, let’s take off 10% (maximum) of the 608, for a total of roughly 548 Watts.
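    The sizing rule above can be sketched in a few lines. This is just the post's arithmetic wrapped in a function (the function name and defaults are mine; the 468 W figure comes from the calculator):

```python
def recommended_psu_watts(calculated_load_w, headroom=0.30, duty_discount=0.10):
    """Size a PSU so the calculated load sits within 30-70% of its rating.

    headroom:      extra capacity added on top of the calculated load
    duty_discount: taken back off when you won't run every device at once
    """
    with_headroom = calculated_load_w * (1 + headroom)  # 468 -> ~608
    return with_headroom * (1 - duty_discount)          # ~608 -> ~548

print(round(recommended_psu_watts(468)))  # -> 548, i.e. buy a 550 W unit
```

    The next size up from the result (550 W here) is what you shop for.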
    I need a 550 Watt supply, but not just ANY 550 watt PSU. I need a supply that can give me enough power on the critical 3.3, 5, and 12V rails combined. I also want a supply from a trusted, name-brand manufacturer, so I start hitting the many online reviews. Here are just two from Tom’s Hardware:
    http://www6.tomshardware.com/howto/20030609/index.html
    http://www6.tomshardware.com/howto/20021021/index.html
    Read these in their entirety. I didn’t post them because they’re pretty links.
    In the end, I chose Antec, because they’ve got the reputation, the recommendation, and because the Antec True550 has better specs than the rest of the 550 Watt competition. I also bought it from a reputable company I found on Pricegrabber.com, for the lowest price I could find, $95.00 shipped to my door. (In truth, I wanted two mini-redundant supplies, like the hospitals and military use, but they were too expensive.)
    The result? Not only are the random reboots, crashing, the BSOD, lockups, etc., gone like magic, but I also now have “peace of mind” in that whatever might happen to my equipment in the future, I know almost for certain that the PSU is NOT the problem. I also bought an UPS, because the East Coast Blackout proved to me that even the Antec True550 isn’t going to provide me any power for emergency shutdown if it doesn’t get its power from somewhere.
    Even if your problem doesn’t lie in the PSU completely, it gives you a GREAT platform for troubleshooting further. If you’re not reasonably certain that the supply is the cause, borrow one, or buy one that you can return once you’ve solved the problem. But, above all else, BUY THE RIGHT SUPPLY before you do anything else! Otherwise, you could be plugging and unplugging components, buying and blowing up expensive memory, and causing even further damage, until you give up or die.
    I mean, I assume you built your own system to enjoy “more bang for your buck,” right? What’s the fun of a random reboot in the middle of Unreal Tournament 2003?
    William Hopkins
    Former Staff Sergeant, USAF
    B.A., B.S., with Honors
    The University of California, San Diego
    [email protected]
    P.S. It should be noted that while Enermax, ThermalTake, Zalman, Fortron, and others make great PSU’s, and I compared and considered them, the Antec still won out overall in my critical evaluation, as it did in so many others’ reviews. You’d probably be ok if you went with another reputable manufacturer as listed above, but pick a supply that gives you at least 230 watts on the 3.3 and 5V lines combined, and still meets the 30% criterion as stated above. Remember, if the manufacturers don’t give you maximum combined specs up front, they’re untrustworthy right off the bat. With power supplies, you definitely end up getting what you pay for. Don’t say nobody warned you.
    P.P.S. Update! After recent developments, it looks like Enermax is the leader, but only the latest line of PSU's.

    Ok, as an electrical engineer... I have to step in here! LOL
    First, these amp ratings are for two +12V rails. That is why you see protection at around 15-18A on each +12V rail. That means each rail is allowed up to, say, 18A for the new Enermax 1.2 version like the one I have.
    Now, let's say 18A at 12V... well, as you know, the Abit NF7-S uses the 12V line for powering the CPU.
    Let's say you have a Barton like me and you want it stable at around 2.4-2.5 GHz. You will have to put around 2V to the CPU to get it stable at that kind of speed, especially if you have a high FSB like I do. So 12V × 18A = 216W... well, the step-down converter on the NF7-S is really inefficient; its conversion loss is probably around 25%, and on top of that comes the PSU's loss because it's not running at 25°C (another 15%)... you will actually only get around 100-120W for the CPU.
    Now, if you go into Sandra and see how much a Barton eats up at 2.4 GHz, you will see it's around 110 Watts.
    So, if you want to push more, don't even think about it! Prime95's stress test fails and your +12V rail will drop as low as 11.60 volts.
    Now, let's say you got yourself an AMD 64-bit chip and you want to overclock it... I bet it will need more than 110 Watts.
    So what I'm saying is, don't buy anything less than a 500 Watt PSU!
    You really need around 20-22A on the main +12V rail, along with really good cooling on the case and PSU so it can actually deliver its full rating.
    http://forums.amdmb.com/showindex.php?s=&threadid=287828
    I found this quite interesting, especially the bit about the power loss when turning 12V into the 1.6V or whatever the CPU needs.
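    Using the loss figures quoted above (25% converter loss, 15% PSU thermal derating), the usable CPU power from one 18A rail works out roughly as follows. This is a back-of-envelope sketch with my own function name, not a measured figure:

```python
def usable_cpu_watts(rail_volts, rail_amps, converter_loss=0.25, psu_derate=0.15):
    """Power left for the CPU after step-down converter and thermal losses."""
    raw = rail_volts * rail_amps  # 12V * 18A = 216W available at the rail
    return raw * (1 - converter_loss) * (1 - psu_derate)

print(round(usable_cpu_watts(12, 18), 1))  # -> 137.7
```

    That 137.7 W is a best case; with extra cabling and regulation losses, the 100-120 W the post cites is plausible, which is why a ~110 W Barton leaves no headroom.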

  • New quad mac pro - RAW import time, 1:1 preview rendering speed

    Hi
    Does anyone own a new quad core (purchased new in 2009) with 4 or 8 GB of RAM who can share
    processing times in Lightroom 2.3? How many seconds does it take to render a 5D Mark II raw
    or sRAW1 file as a 1:1 preview? How many seconds for a raw to import?
    Trying to determine if a new quad-core machine with 8 GB of RAM is sufficient to turn around a workflow of
    processing 1000-2000 raws per project.
    Thank you,
    Ken

    I can't answer your question with regard to the 5D2 as I don't have one, but generally you will find that the current crop of MacPros is as good as it gets in power terms.
    More RAM is always advisable, if you are thinking of 8GB then you will probably find a use for it!
    Lightroom is multi-core aware and quite happy to use them, it likes lots of RAM especially for 20+ MPx files - maybe think of more than 8 if you can afford it.
    The disk speed is important as well - as I/O access is an important part of LR operation. So big fast drives help - with the MacPro you can get SAS drives - 10,000 rpm but limited (I think) to 300GB.

  • AMD Athlon XP 2800 (Barton Core) will not post at FSB 166 MHz.

    Per the "stickies" at the top of the board, I have read a great number of the posts here and done my homework to provide as much technical info as possible to help you help me. This should explain some of its lengthy nature.
    Aside from memory, HDD, and video card, everything else has been unplugged from the board and power, or stripped out of the slots, to isolate the problem.
    Presently running BIOS: A6712 V. 1B
    *As seen on face of MoBo: KT4V MS-6712   Ver:10A
    *The Original Box bar code label: KT4V-L (601-6712-050)
    K7, KT400. 5.1 chnl. S/W Audio, LAN, D-Bracket 2.
    CPU: AMD Athlon AXDL 2800 DLV4D 
    Q334763K40034  AQYHA 0447APMW
    Chip label:
    Athlon XP (Barton) Low Power Processor (Model 10) 2083 MHz
    OPGA Organic Pin Grid Array 1.50 Volt max. Temp. 85°C
    512 KB L2 Cache 333 MHz System bus (166 MHz FSB) x 12.5
    Memory: x3- PC 2700 512 mb DDR 333 Super Talent
      And it shows at POST as running 333 MHz. MemTested to 1,000% coverage, first with 1 stick, then with all 3 sticks, on the present BIOS A6712vms.1B0 at FSB 100 MHz (approx. 24 hr), with zero errors. I even installed and ran other AMD utilities while MemTest was running to "stress" the system memory. During this period I checked Windows Task Manager > Performance: CPU usage at 100%, PF usage 1.35 GB. No problems.
    Video card: Removed: MSI AGP-8X Ti4200- VTD8X 128 MB DDR 533MHz
    Replace with: PCI card (to rule out AGP card prob.) Gilmont, nVidia 64mb
    (also tried a Matrox AGP 32 MB card, with no different results than the above cards)
    Power Supply: L&C Model: LC-B350 ATX = 350 watt
          115/130v      B/4a      60/50 hz
          +3.3v/+5v   1+12v   -12/-5v    +5vsb
          28a   35a    16a    0.8a    0.3a   2a
       +3.3v combined load 200w
    Cooler: Slim Volcano 10+, P/N A1671, rated for up to an AMD Athlon XP 3400+, with fresh heat-sink compound; freshly re-seated.
    All PCI slots empty, as I attempt to run bare bones to eliminate problems (except now running the video card in one slot).
    Floppy Drive: ribbon unplugged from board
    HDD-0: Seagate 8 gb master, (HDD tested w/no errors)
    O/S: Win. XP w/sp2
    HDD-1: slave/  unplugged from ribbon and power
    Sony CD-Rom: Unplugged from PCI slot and power
    SATA/RAID-usb 2.0 card and other HDD’s, removed and not included till Sys. runs at full potential.
    Exterior HDD and Burner beyond the scope here and not attached.
    O/S Win. XP-sp2
    BIOS I have tried: The original; Ver:10A, ran fine on Athlon XP 1600 (1400 MP) 3 years.
    •   Then, with MSI Live Update 3, I installed v. 1.C, which couldn't run FSB 166 MHz. (MSI: "Fixes system reports incorrect message for AMD 3200+ CPU. - Supports Sempron L2 512K 3000+ CPU.")
    However, v. 1.C did not work and would not POST at all with the newly installed Barton AMD XP 2800. [Frankly, this attempt was months ago and I do not remember whether it would POST at the slower 100 and 133 FSB speeds. To muddle my memory further, I was also upgrading an A7N8X-D with a Barton XP 3000 and having similar problems on the ASUS... which is now running fine at full potential. ;-p]
    •   Then I selected and flashed A6712 V 1.7 122602; it boots and runs Win XP SP2 at FSB 100 MHz and 133 MHz. (MSI: "Supports AMD Barton XP2800+ (FSB333) CPU - Fixes system sometimes will have IDE CRC error - Fixes D-Bracket2 + USB keyboard, cannot use keyboard in DOS mode.") It still couldn't run FSB 166 MHz.
    I checked with the tech at the local shop where I purchased the KT4V MoBo. He suggested I try a different BIOS but didn't specify which one. He is a personal friend and admits to some bad luck and dead boards from flashing BIOSes, so he's reluctant to mess with it.
    •   Now running: A6712vms.1B0. MSI hid this BIOS version; it's found here:
    •   http://www.msi.com.tw/program/support/bios/bos/spt_bos_detail.php?UID=362&kind=1
    •    On a different page than the two versions mentioned above: http://www.msicomputer.com/support/bios_result.asp
    •    No wonder people are frustrated and confused about the different MoBo versions, their ID #s, and the BIOS versions themselves, as seen on this board. MSI's fault, no fault of anyone here trying to help.
    A6712vms.1B0
    Runs fine at FSB 100 (shows as XP 1250+) and FSB 133 (shows as XP 2200+), booting to Win XP SP2 with default CMOS setup other than minor tweaks to accommodate the MSI Ti4200 AGP 8X 128 MB graphics card or the Gilmont PCI card. But it couldn't run FSB 166 MHz.
    No POST at FSB 166 MHz (once it did flash a POST screen stating AMD Athlon XP 2800 for about 2 seconds before going blank; manual power down). Just now I rebooted with FSB 166 and again saw the AMD Athlon XP 2800 at the top of the POST screen as I hit Pause/Break... then the machine crashed to power off. Once it even loaded Win XP SP2 but crashed to power off within 2 minutes.
    I must have booted this beast a dozen times or more, clearing the CMOS jumper each time it went to a blank screen with the machine still running. Start over; set default settings and work up through FSB 100 and 133, each time reading what POST says and seeing if it will boot to the OS.
    Btw: I have looked for clear evidence through the not-so-specific descriptions MSI uses to separate these board model #s, and that may be part of my current problem matching the correct board ID to the correct BIOS ID.
    Found here:
    [VIA] unofficial FAQ for KT4 series motherboards
    https://forum-en.msi.com/index.php?topic=6090.0
    Eg: - KT4 Ultra (MS-6590), 6-channel hardware audio codec
       • KT4 Ultra (Pure Version)
       • KT4 Ultra-B (Bluetooth ready)
       • KT4 Ultra-BSR (Bluetooth ready + Serial ATA RAID)
       • KT4 Ultra-FISR (Bluetooth ready, Serial ATA RAID, Gigabit LAN, IEEE 1394)
    - KT4V (MS-6712), 6-channel software audio codec
       • KT4V-L, with onboard 10/100 LAN
    - KT4M (MS-6596) micro-ATX board
       • KT4M-L, with onboard 10/100 LAN
    - KT4A Ultra, (MS-6590) NEW!! based on KT400A chipset
       • KT4A Ultra (Standard Version)
       • KT4A Ultra-SR (Serial ATA RAID)
       • KT4A Ultra-FISR (Serial ATA RAID, Gigabit LAN, IEEE 1394)
    And a very similar version of this is on the official MSI site. Both seem to try to distinguish between models with onboard LAN and Bluetooth-ready models.
    Take a look at my original box sticker:
    *As seen on face of MoBo: KT4V MS-6712   Ver:10A
    *The Original Box bar code label: KT4V-L (601-6712-050)
    K7, KT400. 5.1 chnl. S/W Audio, LAN, D-Bracket 2.
    Yup, it says all this on the sticker and in the manual, and it's verified on the MoBo. So I can only guess at the parts of the puzzle as they were given to me, which led me to the aforementioned BIOS versions I ran, or am running. The sticker matches nearly half of the boards above except for the MS-6712 and the Ver:10A, which is what I based my BIOS choices on. If someone knows differently and can lead me to official info with a better answer, I'm all ears.
    Just saw this: Latest Official BIOS : KT4 MS-6590 (1.4), KT4V  MS-6712 (1.8 ) , KT4M MS-6596 (1.1)>>Here: [VIA] unofficial FAQ for KT4 series motherboards
    « on: 24 October 02, 05:14:04 »  But these links, and all but one link in this 'sticky', are broken.
    So if (1.8) is the latest official BIOS for the KT4V MS-6712, then what is this at the official MSI site?
    KT4V official BIOS history (all AMI® BIOS):
       • Version 1.C (485KB) — hold on, no date listed — Supports Sempron L2 512K 3000+ CPU.
       • Version 1.B (2004-8-9) — Supports Sempron 2200+/2300+/2400+/2500+/2600+/2800+ CPUs.
       • Version 1.A (485KB, 2004-5-13) — Fixes system instability running 3DMark03 with an ATI 9600 PRO; adds "PS/2 Keyboard Detection" function; supports AMD 2400+ CPU.
       • Version 1.9 (514KB, 2003-7-29) — Fixes Adaptec 2100S SCSI card cannot be used; modifies CPU temperature detection; supports boot from onboard LAN.
       • Version 1.8 (514KB, 2003-2-12) — Supports AMD Barton XP3000+ (FSB333) CPU; adds "CPU Halt Command Detection" item in the BIOS Setup.
     Please also keep in mind I’m not trying to scuff anyone’s shoes…I just want to get what’s right for my board so that it and the CPU can live up to their full, "as advertised" potential and run with the other 3 big dogs under my desk.
    This seems descriptive enough and rather lengthy. Just doing my homework and reporting my findings so I won’t include actual CMOS settings unless requested. I have notes on all settings I currently use to boot with this problem.
    Other Dogs:
      ASUS A7N8X-D running an Athlon XP 3000 Barton core (runs FSB 400... who knew?), Mem: 2.5 GB Super Talent
       ASUS A7M266-D running Athlon XP 2800 (mp), Mem: 3gb Super Talent
                                                   Athlon XP 2800 (mp)
      I don’t want to bore you with the details.
    Yes, and the above have been upgraded w/new chips GFX cards and BIOS,,, NP.
      Cue the soundtrack: "Down on the BIOS again"... wasn't that Creedence Clearwater Revival?
    Any suggestions or insights are much appreciated. 
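    For reference, the clocks POST reports at each bus speed above are just FSB × the Barton's locked 12.5 multiplier, which can be sanity-checked in a couple of lines (the function name is mine):

```python
def cpu_clock_mhz(fsb_mhz, multiplier=12.5):
    """Athlon XP core clock = front-side bus * locked multiplier."""
    return fsb_mhz * multiplier

for fsb in (100, 133, 166):
    print(fsb, cpu_clock_mhz(fsb))  # 1250.0, 1662.5, 2075.0
```

    The chip label's 2083 MHz corresponds to the exact 166.67 MHz bus (500/3 MHz), so a board that only POSTs at FSB 100 or 133 is leaving roughly 800 MHz on the table.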

    Welcome, and thank you for a thorough write-up! I hope I don't miss something you already said!
    Quick version: try setting the processor voltage to 1.5 V.
    Long version:
    Failing to POST at the rated FSB can come down to two things:
    1. The processor isn't what you thought it was, but I think you eliminated that.
    2. The system cannot run the processor at the correct speed. Mainly this has to do with overheating, and/or with memory and processor failing to cooperate. This must be it, since, as you noticed, the processor is correctly recognised.
    You have little connected, right? Then the best thing would be to pull the power cable out, take the motherboard out, and redo the seating of the processor; this is much easier with the motherboard out of the case.
    At the same time, I suggest you clear the CMOS. This should always be done when changing processors, and I've noticed on the Internet there are a few 1.5V XP2800s around.
    http://64.233.183.104/search?q=cache:Q4eIa6OhudcJ:forums.amd.com/lofiversion/index.php/t47487.html+AXDL2800DLV4D+problem&hl=sv&gl=se&ct=clnk&cd=1

  • RAW conversion with Aperture

    Has anyone compared the quality of RAW conversion of Aperture vs. Nikon Capture as well as other converters?
    I really like the quality of Nikon Capture and would not want to purchase Aperture unless the conversion was at least equivalent.
    Thanks for any input.
    mark
    G4 17" Laptop   Mac OS X (10.4.3)  

    I've compared Aperture's conversion side by side with Adobe Camera Raw's. My method was to do some conversions with Camera Raw and save the result along with the RAW file. Then, in the Apple Store, I performed the conversions using Aperture.
    The results from Aperture are not good. They look okay at reduced size, but if you look more closely, the de-mosaicing Aperture performs is quite bad. On some images it is only "somewhat" worse than Camera Raw; on others it is so bad as to be unusable. Shadow detail suffers the most, but highlights are not immune. Some images showed color fringing that was not present in the Camera Raw conversion, even with all chromatic aberration adjustments set to zero in Camera Raw.
    I ignored differences in color and tonal rendering because I did not have enough time with Aperture to learn to get the best results out of it in terms of color. It takes a while to figure out how to get good color out of a RAW converter.
    In no case was Aperture as good as Adobe Camera Raw in terms of image quality. The difference was immediately obvious at 100% magnification.
    I would not use Aperture for RAW conversion.
    EDIT: I forgot to mention, in case it matters, my camera is a Nikon D2X.
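
    For readers unfamiliar with the term, "de-mosaicing" is the interpolation step where a converter rebuilds full RGB from the sensor's one-colour-per-pixel Bayer mosaic. The sketch below (illustrative only, using an invented flat test image) shows the crudest possible approach, bilinear interpolation of the green plane; the quality gaps between converters come from how much more cleverly they do this around edges.

```python
import numpy as np

def make_bayer_rggb(rgb):
    """Sample an H x W x 3 image down to a single-channel RGGB mosaic:
    each pixel keeps only the one channel its Bayer site records."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return mosaic

def bilinear_green(mosaic):
    """Recover the green plane by averaging the four green neighbours
    at every R or B site (the simplest possible de-mosaic)."""
    padded = np.pad(mosaic, 1, mode="edge")
    green = mosaic.copy()
    h, w = mosaic.shape
    for y in range(h):
        for x in range(w):
            if (y % 2) == (x % 2):  # an R or B site: green is missing here
                green[y, x] = (padded[y, x + 1] + padded[y + 2, x + 1] +
                               padded[y + 1, x] + padded[y + 1, x + 2]) / 4
    return green

# On a flat grey patch even bilinear interpolation is exact; real images
# with edges and fine detail are where converters diverge (fringing,
# lost shadow detail, and so on).
flat = np.full((8, 8, 3), 0.5)
recovered = bilinear_green(make_bayer_rggb(flat))
assert np.allclose(recovered, 0.5)
```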

  • Import RAW files as lossy compressed DNG

    In LR4.2, I would like to be able to import my NEF files as lossy compressed DNGs. That option is not available in preferences, so I have to do the import to DNG, then invoke the "Convert to DNG" in the library menu where I can select lossy compression. Why not make that option available on import so I don't have to perform the second step?

    JimHess wrote:
    Do you really want to import compressed DNG files? They are not really raw files anymore, and they are reduced to 8 bits. That doesn't seem to be a good choice for master images. But the choice is yours, of course.
    To be clear, the default settings used for 'Copy as DNG' *do* compress raw data, but without loss (i.e. lossless, not lossy).
    I know you knew this Jim, but maybe another reader is not so clear...
    PS - I really like the new lossy compressed DNG technology - files behave like raw in terms of editing (white-balance, camera-profile, h/s recovery...), but are much smaller. As long as one realizes that the data will suffer loss, and be pared down to 8-bits (which isn't as bad as it sounds, since it does NOT use the same linear encoding scheme as raw data), then it can be a great option during import, if you don't plan on making big prints...
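
    The point that 8 bits "isn't as bad as it sounds" because the encoding is nonlinear can be illustrated numerically. The sketch below uses a plain 1/2.2 gamma curve purely as a stand-in (the actual lossy-DNG curve is not specified in this thread); the effect is the same: a nonlinear encoding spends more of its 256 code values on dark tones, so the round-trip error in the shadows is far smaller than with linear 8-bit.

```python
import numpy as np

def quantize_linear(x):
    """Quantize a [0, 1] linear value straight to 8 bits and back."""
    return np.round(x * 255) / 255

def quantize_gamma(x, g=2.2):
    """Gamma-encode, quantize to 8 bits, then decode back to linear.
    The 1/2.2 curve is an illustrative assumption, not the DNG curve."""
    encoded = np.round(x ** (1 / g) * 255) / 255
    return encoded ** g

# A deep-shadow tone: 1% of full scale.
shadow = 0.01
err_linear = abs(quantize_linear(shadow) - shadow)
err_gamma = abs(quantize_gamma(shadow) - shadow)

# The nonlinear path's round-trip error in the shadows is much smaller.
print(err_linear, err_gamma)
assert err_gamma < err_linear
```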

  • ASM on RAW or OCFS2

    We have a 2-node RAC cluster using ASM that has a couple diskgroups (DATA and FRA) on RAW devices. With our current backup methodology, we use RMAN to backup simultaneously to FRA and /u02/backup (cooked filesystem on node 1 for backups) from where netbackup picks it up and tapes them. The team is a bit concerned with the learning curve involved with RAW and also the maintenance complexities involved in db cloning etc (eg. recently we were asked to clone this RAC database to a non-RAC database on a different host).
    One thought inside the team is to do away with RAW and put ASM on an OCFS2 filesystem (in which case we won't have to maintain a separate /u02/backup at all, plus no learning curve to manage RAW). However, we do acknowledge that by doing so, we won't be able to reap the benefits of RAW long-term (when the usage of our RAC instances goes up). Also, I believe Oracle suggests ASM on RAW (I could be wrong, but that is what I generally see people talking about).
    Any suggestions/advice for or against having ASM created on OCFS2 (or even NFS, etc.)?
    In case that helps, the servers are Dell PE with RHEL4 and Oracle 10.2.0.3. Our duties are well defined between the storage group, Linux group and DBAs.
    Thank you,
    - Ravi

    Dan,
    There are some things about ASM that make it easier than a FS, but there are others that are more difficult; there is definitely a tradeoff. For the DBA who is coming from a background that is light on hardware, the things that ASM does best are "black box" tasks that a sysadmin or an EMC junkie normally does. The "simple" things a normal DBA would do (copy files, list files, check sizes) are now taken through another layer (whether you go asmcmd, a query against the ASM instance, or RMAN). Kirk McGowan briefly talked about how the job role of the DBA has changed with the new technology:
    http://blogs.oracle.com/kmcgowan/2007/06/27#a12
    Let's look at two "simple" things I have come across so far that I would like to see improved. First is resolving archivelog gaps:
    Easiest way to fill gap sequence in standby archivelog with RAW ASM
    Yes, we all know dataguard can do this. But this is not a thread about dataguard (I am more than willing to talk about it in another thread or privately). With ASM on Raw (from now on, I will just say ASM and assume Raw), you have to use RMAN. I have no problem saying that all of us should become better at RMAN (truly), but it bothers me that I cannot log in to my primary host and scp a group of logs from the archive destination to the archive destination on my standby host. Unless of course you put your archive destination on a cooked FS. But then we go back to the beginning of this thread.
    Another "simple" task is monitoring space usage. ASM has a gimped version of 'du' that could stand a lot of improvement. Of course, there is sqlplus, and you can just run a nice hierarchy query against one of the v$asm views. But 'du -sk /u0?/oradata/*' is so much simpler than either approach.
    Which leads me to ask myself whether or not we are approaching disk monitoring from a completely wrong angle. What does the 'A' stand for in ASM? grin
    There is a lot that ASM can do. And I have no doubt that, due to my lack of experience with ASM, I am simply "not getting it" in some cases.
    "While it may seem painful in the midst of it, the best way to overcome that learning curve is to diagnose problems in a very hands-on manner." - Kirk McGowan
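
    To make the space-monitoring point concrete, here is a rough sketch (paths and sizes are invented for illustration) of the per-directory rollup that a hierarchy query against the ASM views, or a plain 'du -sk' on a cooked FS, gives you: a flat list of files summed up by top-level location.

```python
from collections import defaultdict

def usage_by_top_dir(files):
    """Sum usage per diskgroup/top-level directory from a flat list of
    (full_path, bytes) pairs -- roughly what the ASM views hand back."""
    totals = defaultdict(int)
    for path, size in files:
        # Keep the diskgroup plus the first directory, e.g. "+DATA/ORCL".
        top = "/".join(path.strip("+").split("/")[:2])
        totals["+" + top] += size
    return dict(totals)

# Invented example inventory, not from a real ASM instance.
asm_files = [
    ("+DATA/ORCL/datafile/system.256.1", 700 * 2**20),
    ("+DATA/ORCL/datafile/sysaux.257.1", 550 * 2**20),
    ("+FRA/ORCL/backupset/full.300.1", 2 * 2**30),
]

print(usage_by_top_dir(asm_files))
```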

  • EXIF Profile Name Not Accurate for Nikon RAW

    I am shooting RAW with a Nikon D90. When I import, the EXIF metadata in Aperture 2 and Finder always says Profile Name = Adobe RGB (1998). The camera is set to sRGB, not Adobe -- Capture NX2 seems to know, so why doesn't Aperture/OS X?
    I'd be more accepting if it was blank, but I don't like that the wrong information is displayed.
    Has anyone else noticed this?

    Internally, Aperture seems to use a unique color space for each different RAW type and always uses the wider-gamut Adobe RGB as a default for RAW in terms of display/previews/etc. This is not an issue. Your RAW file has not been altered in any way; it's exactly as it was when it came out of the camera. This has been discussed here about a million times.
    When you export from Aperture you can choose any color profile that you would like for the exported version. If you are experiencing some sort of issue related to this I am sure that many here would be glad to help.
    RB

  • The use of WS-CAC-6000W power supply

    We are replacing the entire network in a refinery, using  more than 100 Cisco switches.
    I’d appreciate your help on a specific issue
    1-Condition:
    There are two supplies WS-CAC-6000W for the VS-C6513E-SUP2T
    Each supply is connected to two 220VAC power inputs
    The two 220VAC inputs of each power supply are fed from distinct electrical panels
    2-Fail scenario:
    What happens if an electrical panel is off ?
    3-Conclusion
    The data sheet says that if a single input is active, running from 220VAC, the unit is capable of providing 2900W. The same data sheet says that if there are two power supplies, they operate in load-sharing, so the output will still be 6000W
    4-Is that conclusion correct?
    Thanks
    -Jose

    Each 220VAC input of a power supply connects from distinct electrical panels
    I have no idea who implemented this but this is incorrect. 
    Each PS has two input feeds.  This is correct.  Both input feeds MUST go to the same power source.  Let's say you have two power feeds (like anyone else):  One is "raw" power and one is UPS power. 
    So for PS 1, for instance, both input feeds must go to "raw" power.  For PS 2, both feeds must go to UPS.  
    Your current setup means that one input feed from each PS goes to "raw" power and the other input feed goes to UPS. This won't work. If you fail over to UPS, each power supply does/will NOT have enough input to power the line cards in the chassis.  
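
    A toy model of the scenarios above, using only the data-sheet figures quoted in the thread (about 2900W from a single live 220VAC input, 6000W with both; two load-sharing supplies are still rated at 6000W). Panel names and wiring layouts are illustrative, not from any Cisco documentation.

```python
# Data-sheet figures quoted in the thread (approximate).
SINGLE_INPUT_W = 2900
DUAL_INPUT_W = 6000

def supply_output(live_inputs):
    """Output capacity of one WS-CAC-6000W given its live input count."""
    if live_inputs >= 2:
        return DUAL_INPUT_W
    if live_inputs == 1:
        return SINGLE_INPUT_W
    return 0

def usable_power(feeds_per_supply, panel_up):
    """Redundant (load-sharing) mode: per the data-sheet quote, two
    supplies still yield the output of one, so the usable figure is
    the best single supply's output."""
    best = 0
    for inputs in feeds_per_supply.values():
        live = sum(1 for panel in inputs if panel_up[panel])
        best = max(best, supply_output(live))
    return best

# Wiring as described in the question: each supply split across both panels.
split_wiring = {"PS1": ("A", "B"), "PS2": ("A", "B")}
# Wiring as recommended in the reply: each supply entirely on one source.
per_panel_wiring = {"PS1": ("A", "A"), "PS2": ("B", "B")}

both_up = {"A": True, "B": True}
panel_b_down = {"A": True, "B": False}

print(usable_power(split_wiring, both_up))           # 6000: all inputs live
print(usable_power(split_wiring, panel_b_down))      # 2900: every PS down to one input
print(usable_power(per_panel_wiring, panel_b_down))  # 6000: PS1 still fully fed
```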

  • Computer doesn't make use of all his power...!?

    I suspect that my Mac Pro doesn't use all of its computing power. I have a fairly new Mac Pro, 2x2.8GHz Quad-Core Intel Xeon with 2GB of 800MHz memory. Before this I had a dual 2.5GHz G5, but so far I haven't noticed the enhanced computing power.
    Could it be possible that something is blocking the power? I use my Mac Pro for music production with Logic. Sometimes Logic crashes with an overload message, but when I look at the CPU-load window, only one of the eight CPU load indicators is overloaded, and the other seven are not even in use.
    Does anybody know what's going on here?
    regards... alex

    This problem very likely occurs due to the single-threaded architecture of your Logic version, meaning roughly that this app's tasks/operations can only be processed by one CPU. Many apps (especially older ones) aren't optimized (or are badly coded) for multi-core/multi-CPU workstations like our Mac Pros. Therefore they can only utilize 1/8 of the potential raw power these machines could provide. We're all in the same boat, as we all expect and hope our favourite apps will get more out of our machines, aka "go multi-threaded". The newest Logic version should at least take advantage of two cores.

  • Mac Pro RAID 0 setup with bootcamp

    I'm about to take the plunge and buy a Mac Pro as soon as I get the money (or credit) to. There have been things here and there that have turned me off, and I found ways around most concerns, but this one I haven't been able to get past.
    I am a musician and want to utilize the raw power that Macs do so well with. But unfortunately my DAW of choice doesn't support OS X. I hate Vista and never upgraded; I stayed with XP, but I have been willing to change to OS X with XP as well, to get the best of both worlds. Then I found my DAW doesn't support XP 64-bit, so I am going to have to use Vista 64-bit (bummer).
    What I haven't been able to figure out is whether it is possible to run the Windows OS and all its data on a RAID 0. The Mac Pro RAID card doesn't support any Windows OS, so I am wondering if there is any way possible to get the Windows OS in RAID -- any good hardware or software that will accomplish this. Like, if I get the Mac Pro RAID card, is there a good software or hardware RAID that will allow me to install Windows in a RAID 0 configuration?
    If it's possible I would have no reason not to move over, but I really need to have my RAID setup in Windows, since I am using latency-sensitive disk-streaming programs that need to stream files in the GBs
    Also, could I skip the Mac Pro card and get a generic RAID card that supports both Windows and OS X? And if so, any recommendations?
    Message was edited by: jmoss211

    I'm glad you are in the planning stages!
    The question I didn't know to ask is always the one that bites the worst.
    One of the best options might be the RR 4320
    http://www.hptmac.com/US/product.php?_index=50&viewtype=details
    http://www.barefeats.com/hard109.html
    http://eshop.macsales.com/item/Highpoint%20Technologies/RRAID4320/
    One of the first questions to ask: do Mac Pro RAID and Boot Camp work together?
    Gaming, and hence the whole gaming-graphics area, is not high on Apple's list of features, which had been a large reason for people installing Windows on Macs.
    I didn't want to see you invest in SAS and not get performance level you want.
    StorageReview Performance SCSI/SAS vs VelociRaptor
    Here is the card Barefeats felt was the card that Apple has to beat, and points to some of the limitations of SAS on the Mac Pro. http://www.barefeats.com/hard104.html
    MacWorld SanFran in a couple weeks is when Apple traditionally would use to announce some of the new products for the next year (Snow, Mac Pro '09, new iMac, etc).
    Apple is not very upgradeable. It would cost me more to upgrade the CPU than to buy new (though I can get good $$ on my system, in part because it runs the prior version of OS X, 10.4 "Tiger". New systems will only run the current/latest OS; you won't be able to run Leopard if a system comes with Snow. The graphics choices and upgrades can be lame. It is almost a closed box.)
    I've talked to people who were into audio that used Fibre Channel, in part due to low latency and increased queue depth (one area where SATA lags: 32 vs 256 outstanding I/Os).
    +Forget trying to use the Apple search to find threads, it is 'busted' if I can't find the thread mbean and I had with someone only last week on RAID5/6. I turned to Google to find this thread +
    Multi-core compiler optimization is big on Santa Intel's wish list to get into the hands of developers and vendors.
    *3 GHz Nehalem outperforms the latest Opteron by a margin as high as 80% and more.*
    Intel has apparently allowed HP and Fujitsu-Siemens to break the NDA on the Xeon 5570 processor for PR reasons as both companies have published SAP numbers on a Dual Xeon 5570. The Xeon 5570 is based on the same architecture as the Core i7. It is a 2.93 GHz quad-core CPU with 4 times a 256 KB L2-cache and one huge shared 8 MB L3.
    http://anandtech.com/weblog/showpost.aspx?i=532
    And AMD just gave Intel a new run for the money.
    AnandTech’s Johan de Gelas has taken a look at AMD and Intel quad-socket servers. The quad-socket six-core Intel Xeon X7460 at 2.66 GHz is compared to the quad-socket four-core Shanghai-based Opteron 8384 at 2.7 GHz. And what were the results of a test using VMware’s ESX Server 3.5?
    Shanghai’s 16 cores outperformed Dunnington’s 24 cores (or 48 threads with Hyper-Threading) by 6.5% or 8.8%, depending on whether IBM's or Dell's chipset was used. But if we normalize that per core, Shanghai outperforms by 59.8% or 63.2%. That is unbelievable!
    http://www.geek.com/articles/chips/amd-cleans-up-in-high-end-server-virtualization-20081223/
