Supplier Rating

I am probably not the best person in Purchasing to answer this, so I thought I'd check with you.
As part of our OPM weekly meeting here, there was an issue raised by one of the users related to Supplier Rating.
It so happens that when material received from a supplier is found to be defective on one or more occasions, they would like to flag the vendor with remarks. Then, the next time they place an order, they would like this information to be visible so they can check whether there is a choice of vendors.
I remember there used to be something called a supplier analysis report or vendor performance report of some sort to identify this information, but I may be wrong.

Hi,
To rate the performance of your suppliers, you can use specific predefined criteria, which your suppliers must meet to receive a good rating for delivery, quality, etc. For more information, refer to the links below.
http://help.sap.com/saphelp_scm50/helpdata/en/2b/3a104202795b33e10000000a155106/content.htm
http://help.sap.com/printdocu/core/print46c/en/data/pdf/MMISVE/MMISVE.pdf
We can rate supplier performance through various criteria such as price, quality, delivery, and service. For the SAP setup, refer to the wiki link below.
http://wiki.sdn.sap.com/wiki/display/ERPSCM/VendorEvaluationin+MM-PUR
Hope this helps.
BR,
Patil
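
The weighted-criteria idea behind vendor evaluation can be sketched in a few lines. The criteria names, weights, and scores below are illustrative assumptions, not SAP's delivered configuration; MM vendor evaluation computes its overall score from weighted main criteria in a similar spirit.

```python
# Illustrative sketch only: criteria, weights, and scores are made-up
# numbers, not values from any SAP vendor evaluation configuration.
def overall_score(scores, weights):
    """Weighted average of main-criteria scores (each on a 1-100 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

weights = {"price": 25, "quality": 40, "delivery": 25, "service": 10}
scores = {"price": 70, "quality": 85, "delivery": 60, "service": 90}

vendor_rating = overall_score(scores, weights)  # 75.5
```

A vendor flagged for repeated defective receipts would simply see its quality score (and hence the overall rating) drop, which is the visibility the user in the thread is asking for.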

Similar Messages

  • W520 power supply rating vs. performance question

    I have had a W520 with the i2820 processor option and the quadro 1000 video for over 3 weeks now. I bought several spare 170 watt power bricks for it, but in one location I kept the 135 watt brick from my W510.
    When I moved from the W500 to the W510 and used a 90-watt W500 brick, the W510 would boot with a BIOS-level message saying the power supply was not adequate and that the machine would be stepped down to a slower speed. The reduction in performance was dramatic and noticeable. (The W510 had the 45-watt-consumption i920 chip and more power-hungry graphics.)
    When I use or boot the w520 with the 135 watt supply, there is no message. My benchmarks do not change from those registered when on the 170 watt power supply, starting with the Win 7 system benchmarks for processor and video. Further, the 135 watt brick is cooler to the touch than the 170 watt one under sustained usage (3 hours plus).
    The question: given that the w520 in my configuration seems to draw less power than the hot and hungry w510, is the 135 watt supply going to work "as well as" the 170 watt one? As a briefcase lugger, any way to shed weight is valued.
    Other considerations: I have no WWAN card installed, and have the CD/DVD removed and substituted with an SSD. The main drive is also SSD, bringing down somewhat the power requirements.
    David Gleason. Every T from the T20 to the T61p, W500, W510. Current system: W520 4270-CTO i2820 8GBx4 (corsair), FHD(1920x1080), NVIDIA Quadro 1000M and 2 Samsung 830 512gb SATA III SSD.

    I remember another post, that I can't seem to find now, which refers to a software utility that displays wattage being consumed by the system.  This would be quite helpful.
    Couldn't that be used to study what is being used, and how much of a power adapter is needed to operate the system?
    I am trying to come up with a solution for vehicle usage, and I am afraid it will have to be a 12V-to-120V power inverter with a brick plugged into it.
    Another factor will be the amount of power consumed when recharging the internal 9cell battery and the external 9cell slice. Doing this while operating the system with WiFi / WWAN active will increase the demand.
    = = = = = = = = = = = = = = = = = = = = = = = =
    W520 4276-2QU i7-2720QM quad core, 4GBx1, FHD(1920x1080), NVIDIA Quadro 2000M, 500GB-7200, 6205WiFi, 720pCamera, Bluetooth, vPro, 57++9cell, 27++9cellSlice, Win7Pro64
    IBM R50 WinXP for backup.

  • Query related to supplier registration

    Hi,
    During the supplier registration process, can we rate the supplier based on the questionnaire they have completed and submitted? In other words, is it possible to convert the data I receive from the supplier in the questionnaire into numeric format and then decide whether to approve or reject the supplier, treating that number as the supplier rating?
    Please advise.
    Regards
    GGL

    It would be something like this; try this module configuration:
    1 localejbs/AF_Modules/PayloadSwapBean  Local Enterprise Bean  swaptxt
    2 localejbs/AF_Modules/MessageTransformBean  Local Enterprise Bean  trans3
    3 localejbs/AF_Modules/PayloadSwapBean  Local Enterprise Bean  swapxml
    4 localejbs/AF_Modules/MessageTransformBean  Local Enterprise Bean  trans2
    5 localejbs/AF_Modules/PayloadSwapBean  Local Enterprise Bean  swappdf
    6 localejbs/AF_Modules/MessageTransformBean  Local Enterprise Bean  trans1
    7 sap.com/com.sap.aii.adapter.mail.app/XIMailAdapterBean  Local Enterprise Bean  mail
    swapxml -> swap.keyName -> payload-name
    swapxml -> swap.keyValue -> file2
    swappdf -> swap.keyName -> payload-name
    swappdf -> swap.keyValue -> file1
    trans1 -> Transform.ContentDescription -> file1
    trans1 -> Transform.ContentDisposition -> attachment
    trans1 -> Transform.ContentType -> application/pdf;name="file1.pdf"
    trans2 -> Transform.ContentDescription -> file2
    trans2 -> Transform.ContentDisposition -> attachment
    trans2 -> Transform.ContentType -> application/xml;name="file2.xml"
    trans3 -> Transform.ContentDescription -> file3
    trans3 -> Transform.ContentDisposition -> attachment
    trans3 -> Transform.ContentType -> application/txt;name="file3.txt"
    mail -> mime.contenttype -> multipart/mixed
    I have not tried this myself, but it should work.

  • Supplier dependent inspection processing

    Hello all!
    We would like to implement the following scenario:
    depending on the supplier rating, e.g. every 5th goods receipt should be checked with an inspection lot.
    Dynamic modification is not an option, because we would like to set up the rules independently of materials.
    I was thinking of the supplier relationship status as an option. Does anyone have experience or an idea for that?
    Best regards,
    Matthias

    Hi,
    If you activate the supplier relationship in QIR, the first 5 inspection lots will be of inspection type 0101, and then it changes to the next inspection type, 011 or 01, like that.
    Do you want to do the first 5 GR inspections and then have no inspection lot created, or just reduce the frequency?
    You said you don't want DMR.
    In both cases you need the material; without it you can't.
    You would have to develop your own logic, e.g. the system has to search, for a given supplier, whether any inspection lot was created across all materials.
    Say supplier X supplied 10 materials; out of those 10 you received 3 materials with 5 GR inspections that were OK, and then you don't want to inspect the remaining 7 materials...
    I don't think it is a good idea.
    Thanks,
    Sami

  • Overclocking and the nForce2: The Basics

    OK, I've been meaning to write this for a while, and I know I'm going to get some flak over it, which is good, because others' opinions give me more options to choose from. Additionally, nothing is "set in stone."
    First of all, overclocking (at this point in technology) for performance gains is RELATIVELY useless. This is the first generation of hardware that has truly outpaced the abilities of software authors to write code, and by the time they catch up, at least one new generation of hardware will have been produced. There ARE a few exceptions to this, most notably in the fields of video and audio processing, but even then OC'ing has only limited advantages.
    The problem is manifold.
    First of all, most systems are "bottlenecked" at the hard drive. Using physics as the basis: your system turns physical energy into electronic energy, and while electricity runs close to the speed of light, hard drives don't even run at the speed of sound. In today's generation of hardware, an OC'ed system can only pull info from the HD at the same rate as its stock contemporary. Caching alleviates this problem but doesn't come close to solving it, especially because, in an environment of 256MB - 1GB of memory, most programs depend on it anyway, through swapping. (Swapping is the method of creating HD extensions for RAM, allowing users to run programs bigger than their physical RAM capacity, in either single-tasking (e.g. one window) or multitasking environments (e.g. more than one window).)
    Secondly, and most importantly, small changes in OC'ing are rarely, if ever, noticeable under "normal" (application) usage. A 6 to 8 MHz change in a 333 to 400 MHz FSB system results in a performance increase of only about 2%, and (according to modern psychology; I hold a 2001 Bachelor of Science in Psychology from the University of California, cum laude) humans don't really perceive changes in time unless they are above approximately 15%. Even then, the change is barely perceptible, and it takes around a 25% - 33% increase to be appreciable. People CAN OC chips (e.g. Thunderbird or Barton) and achieve these throughputs in an otherwise relatively sluggish system, eliminating true processing-time bottlenecks...but those bottlenecks don't exist in an environment consisting of a fast processor, fast memory, and a fast board (the AVERAGE setup of an nForce2, with the exceptions of OC'ed T-Birds and Bartons...more on this later). We're more likely to NOTICE improvements from better and better drivers, since they affect stability and control or influence data that comes from / goes to the hard drive and how it is processed.
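    The percentage arithmetic above is easy to check; a rough calculation using the FSB figures from the paragraph shows the gain sits around the 2% mark, well under the perception thresholds cited.

```python
# Check the claim: a 6-8 MHz bump on a 333-400 MHz FSB is only ~2%.
def pct_gain(delta_mhz, base_mhz):
    """Percentage increase from a clock bump over a base frequency."""
    return 100.0 * delta_mhz / base_mhz

low_end = pct_gain(6, 333)   # about 1.8%
high_end = pct_gain(8, 400)  # 2.0%
```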
    So for those of you performance-minded users, relax and enjoy the stability of stock settings (after the T-Birders and Bartoners OC their CPU first...we can wait, but before they do, perhaps they should read the rest of this post.)
    For CPU and memory OC'ers (and non-OC'ers as well), it's always a good thing for the brain and memory to operate at their relative peaks, and in synchronization, even if the rest of the body is subject to "real time" constraints. Therefore, a 1:1 FSB-to-memory ratio is the optimal sync. With computers, if this is mismatched, you don't get the benefit of EACH AND EVERY clock cycle. Newbies, don't worry if you don't understand it, just believe it. Seasoned pros already know this. (The only real exception is when one is a multiple of the other, e.g. in theory a 100 MHz (system maximum) FSB with 200 MHz memory; there is always a memory clock cycle in the right time and place for communication with the CPU.)
    Which brings me to the discussion of current and voltage. If you OC, you need to up the voltages. Period. Now, while I know that many OC'ers run stably at stock voltages, there's still reason to up the CPU and memory core voltages. The reason is current. Current (amperage), not voltage, creates heat. Most semiconductor devices are designed to draw a certain amount of POWER (current x voltage). If your settings expect more from them, as OC'ing does, they will draw more POWER. In a voltage-limited environment (like your PC), your devices will attempt to create more total POWER by drawing more current. This leads to unnecessary and damaging heat. By upping your voltage (within the reasonable operating range of your devices), you reduce the current, and the system runs more coolly and efficiently. As long as you stay within safe tolerances, this is a good idea for the stability-minded non-OC'er as well. It's not unusual to see small temperature drops after upping voltage, if nothing else has been changed. For those of you running a 300 to 350W power supply, this could mean the difference between failure and success, because...
    Current also raises an issue which I've written about here before: the power supply. In an nForce system, I recommend nothing less than a power supply rated at 30 amps or more on the 3.3V output. The video and memory simply cannot be denied, and if you're running an nForce2 with a GB of memory @ 2700 or better, you're asking for trouble with 28 amps or less. Besides, it's not good to run a system that's always close to the peak maximums of the PS: bad for the supply, and thus for the stability and reliability of the entire system. Video cards are prone to "surges": more sudden color and action requires more current, and power supplies must be instantly able to keep up with this peak demand. Like a runner near the finish line, a PS cannot deliver that burst if it's already running at or close to its peak maximum. My suggestion is an ANTEC True480 or better (or any similarly-rated, good quality supply) for nForce2. [A popular alternative as of late is the half-size 300W redundant supply pair, capable of delivering 600 watts total. You gain another advantage as well: if one of the supplies ever dies, you can stay up and limp your system to shutdown. (This is my next move, personally, and before anything else. I want rock-solid power because I process video, a time- and power-consuming process.)]
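    The P = V x I reasoning above can be made concrete with a couple of rough numbers; the 100 W and 60 W loads here are illustrative assumptions, not measurements.

```python
# Rough arithmetic for the power/current discussion above.
# The load figures are assumed round numbers for illustration.
def current_draw(power_w, voltage_v):
    """Current (amps) a fixed-power load pulls at a given voltage."""
    return power_w / voltage_v

# A ~100 W combined load on the 3.3 V rail already needs about 30 A,
# which is why a 30 A rating on that rail matters:
rail_amps = current_draw(100, 3.3)    # ~30.3 A

# At fixed power, raising the core voltage lowers the current drawn,
# which is the author's argument for a small voltage bump:
amps_stock = current_draw(60, 1.60)   # 37.5 A
amps_bumped = current_draw(60, 1.70)  # ~35.3 A
```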
    Lastly, synthetic benchmarks (like 3DMark) don't tell the whole story. They cannot tell you your own satisfaction level with your results. While an excellent performance-under-load diagnostic, once run, 3DMark creates its own niche in your mind, not on your PC. Remember, a "tweaked" 0.25% to 10% increase is unnoticeable, but it's good to know if something was really wrong (e.g. you're only getting half the score that others are getting with your same setup). It's comforting to know that your system is running the way it should; otherwise, any other minor gain would not show up in "real world" usage.
    Don't get me wrong. OC'ing in and of itself is a cool hobby, and a good way to learn more about computers (and sometimes a costly one as well). But just as it's not safe to perform on the high wire without a net, it's not safe to attempt too much power usage without the right power. It's also near-worthless to create or exacerbate a stability problem in order to otherwise unnoticeably increase your PC's performance.
    Be happy; be stable. Nothing is more frustrating (except children) than the old BSOD.

    Hi Clarkkent57,
    I was quite impressed by your post - a very interesting read.
    I recently overclocked my XP1800 Thoroughbred B (from 1537 to around 1900), and I was gobsmacked (a North-Eastern England colloquialism) when I noticed no difference at all! (I thought I did a couple of times during a couple of laps of "Need for Speed 2", but in the end I put it down to my imagination.) I was, however, visited by the dreaded BSOD on a couple of occasions, something I'm not used to (not since dumping Windows ME, anyway, LOL).
    I used it like this for around a week, then decided to set it back to defaults. I then thought I might notice the system slow down; again, no noticeable difference.
    I'm now running my machine at "Stock" until I can afford a better MoBo and CPU (not to mention a half-decent graphics card.)
    Somehow, knowing that overclocking my CPU could shorten its life doesn't appeal to me.
    Good on ya, for a fact-laden and well-reasoned post. :D
    Axel

  • G4 MDD Power Consumption?

    Hi there,
    Would anyone have any rough figures about how much power the G4 MDD (1GHz) uses when running in normal/idle operating conditions? That, and if possible, with an Apple 17" Studio CRT Monitor when in sleep mode?
    I ask, because I have been leaving it running for a few days as a server, so not much load on the processor, however with the many fans constantly whirring away, I'm guessing it's quite power-intensive.
    Could anyone shed any light on this please, or offer rough ideas of what it might be? It would save me going out and buying a power meter.
    Thanks,
    :-Joe

    The unit's power supply rating does not accurately reflect actual power draw in use. The G4 minitower has a big PS because the user can install a lot of PCI cards and extra hard drives. That doesn't mean it all gets used.
    You need an in-line tester, something like this:
    http://www.northerntool.com/webapp/wcs/stores/servlet/product6970_200321255200321255
    to measure the actual usage for your workflow.

  • My macbook pro suddenly won't send emails.

    They sit in my outbox. Please can you help.
    This suddenly happened this morning when I tried to send a video. I cancelled that message, but it froze the whole thing, although one message seems to have got through.


  • All should read, HOW TO CORRECT SCP on X-FI Fatal

    OK, I'll start off by saying I've been having problems with my X-Fi sound card: SCP, especially in Battlefield 2 during an artillery barrage, and problems occurred in Rome: Total War when a cavalry charge was happening. I spent many months trying to resolve my problem, but in the end I didn't succeed. I even dished out $2500 on a new PC system. I used to have an AMD FX-57 CPU, a Gigabyte nForce 4 Ultra motherboard, an X-Fi Fatal1ty sound card, a 600-watt power supply, Corsair DDR 400 (2GB), and a GeForce 7900 GTX.
    The system I just purchased is an ASUS P965-chipset motherboard, an Intel Core 2 Extreme processor, a new power supply rated at 600 watts, and DDR2 memory at 800MHz using 4-4-4-2 timings; I used the same video card and sound card (GeForce 7900 GTX and X-Fi Fatal1ty). Even after purchasing a new system, believe it or not, the same SCP problem occurred in the same games. Note: the new Intel P965 chipset, CPU, power supply, and memory DID NOT CORRECT the SCP problem.
    This is how I corrected the problem so far. First, I lowered my speaker volume on my Logitech Z5500 speakers to 40%. Next I went into the Game Console of the X-FI. My adjustments are as follows:
    CMSS-3D is disabled
    24-bit Crystalizer is enabled and set at 00%
    EQ (Equalizer settings from left to right)
    0, 6, 6, 3., .3, 2.8, 4.4, ., -2, -2. (Note: 8k and 6k eq settings may cause the SCP, so set them both to -2)
    Mixer: Both wave and midi synth are at 80%, all others are disabled
    EAX is enabled and set at 2db or 00%
    Speakers are set at 5.
    Bass Boost is disabled
    Master volume is 40%
    A few notes:
    If the volume panel for the X-Fi starts up (the icon near the system clock),
    be sure to right-click the icon and select "Select audio device", then select X-Fi from the menu. I think "Windows Default" is the default value, so change it.
    I think the EQ (Equalizer) settings really helped out.
    Let me know if SCP is cured by setting both 8k and 6k to -2.
    Please keep me informed if this helped anybody.
    Thanks.

    What you just did is lower the SCP sound level by lowering the medium/high end of the audio spectrum. It might be acceptable for you, but I don't think anybody else would accept listening to music, or even playing games, with the sounds over 8kHz attenuated to the maximum. That defeats the purpose of buying the sound card; you would be better off with onboard audio.
    Try cancelling the Crystalizer (which you have pushed down to 00% now, and which is no use in games anyway) and bring the EQ back to the center position (or better, don't use it at all).
    About the "Windows Default" issue: did you disable the onboard sound in the BIOS and in Windows? It is the first thing to do when you add a Creative sound card.

  • Belkin UPS on Windows 7 System

    I have a nice Windows XP/Pro machine configured to meet my needs (quad-core processor, 4GB RAM, two video adapters driving three LCD displays). I'm ready to upgrade from XP/Pro to Win7/Pro. The system is supported by a Belkin F6C1500-TW-RK uninterruptible power supply rated at 1500VA. The UPS is currently monitored by the Belkin Bulldog Plus monitoring software on the XP machine.
    My problem is that the monitor software was never released for Windows 7, and that appears to be my only roadblock to upgrading from XP/Pro to Win7/Pro.
    Belkin's response: "We regret to inform you that due to recent events Belkin has consolidated its operations and has decided to exit from the UPS business. The drivers for the Win 7 operating system is not available and also unfortunately there will be no drivers updated for Win 7 operating system."
    Since Windows 7 has an XP mode (which is supported by my Q6600 processor), can I run the Bulldog monitoring software under it?

    Here's how you do this with Belkin Bulldog 4.0.2. The software runs a script (.bat) file at shutdown. Get an old clunker and load 32-bit XP and the Bulldog software. Mine is remotely accessible via a Radmin server, so I use a laptop to log on and access it: no monitor or keyboard, just an old XP box with Bulldog loaded, a USB cable to the Belkin UPS (mine is the 1200VA that's like a brick), and a network connection. Set up the XP box as a telnet client and the new Win7 box (64- or 32-bit, whatever) as a telnet server.
    When power fails, the XP box goes into shutdown, but before it does, it runs a bat file that calls a VBScript, which performs the telnet login to the Win7 box and sends the commands to shut it down.
    You can also use the same procedure to shut down a NASLite server, as NASLite has a telnet server built into its Linux setup.
    Essentially, with a little work, you can shut down newer-OS machines and NASLite servers over the network using telnet clients, a batch file, and a VBS script for each machine to be shut down. Remember that the network switch, router, etc. need to be on a small UPS so the network is available when power fails.
    This means only one box (the XP machine with the Belkin software loaded) ever needs to monitor power; all the other machines can just be power-connected to their own UPS without talking via network or USB, as the one box sends telnet commands to shut them all down when power fails.
    Hope this helps
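
    For what it's worth, the bat-plus-VBScript chain described above can be approximated in Python as a rough sketch. The host, port, credentials, and the Windows `shutdown /s /t 0` command here are placeholder assumptions, and a real telnet server may require option negotiation that this plain-socket version skips.

```python
# Hedged sketch of the "telnet login, then shut the box down" step.
# Host, port, credentials, and the shutdown command are placeholders.
import socket

def build_shutdown_session(user, password, command="shutdown /s /t 0"):
    """Return the line sequence a telnet client would send to log in
    and shut the remote Windows box down."""
    return [user, password, command, "exit"]

def send_lines(host, port, lines, timeout=10):
    """Open a plain TCP connection and send each line; a telnet server
    that skips option negotiation will accept this as-is."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        for line in lines:
            s.sendall((line + "\r\n").encode("ascii"))

# Example (not run here):
# send_lines("192.168.1.20", 23, build_shutdown_session("admin", "secret"))
```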

  • My Macbook Pro suddenly won't read my hard drive.

    My Macbook Pro suddenly won't read my 750GB WD Elements Hard drive.
    It was working earlier today, but when I went to press play on my TV show in VLC, it said the files couldn't be read. So I reopened them all and they still didn't work. I closed down all running programs that could possibly have been using my hard drive, but it said it was in use, so I force-ejected it, and when I plugged it back in it didn't work.
    It comes up with an error message, "This disk can't be read by this computer", but it still shows up in Disk Utility. In Disk Utility, though, it says it is a 2.2TB hard drive, which confuses me. Also, the buttons for "Verify Disk" and "Repair Disk" are greyed out.
    Is there something that I can do to try and get my hard drive working again?

    krowl13 wrote:
    My Macbook Pro suddenly won't read my 750GB WD Elements Hard drive.
    If I'm not mistaken, this drive is bus-powered. MBPs seem to have frequent problems with bus-powered drives, probably because they don't provide enough power on the USB bus.
    In disk utility though it says it is a 2.2TB hard drive
    It is possible that the drive or USB interface are damaged. However, it is too early to decide. You need to connect the drive to a desktop machine (Mac or PC), or provide more power to the drive (with a power supply, if the drive accepts it, or a USB Y-cable and a USB power supply, rated 1A or more).

  • Will X1650XT 512MB AGP Video Card work with 865PE MS-6728?

    Hi,
    My video card is fried, so I'm thinking of upgrading to an HIS X1650XT AGP 512MB video card. I read somewhere that some motherboards have issues with 512MB video cards. Is this true? I can't find any more info on whether this motherboard will support this video card.
    http://www.hisdigital.com/html/product_ov.php?id=276&view=yes
    Anyone have any input?
    Thanks!

    No need to get hostile, I didn't say you were an idiot.  Though continuing to make uninformed statements might get you there....
    Quote
    The video cards need at least 30A to make the card run properly.  That's pretty damn easy to understand how the last PSU was the problem.
    Actually, it doesn't. The 30A figure cited by the manufacturer is a worst-case scenario for the +12V needs of the entire system plus the graphics card. The X1650XT doesn't even need an auxiliary PCI Express power connector and can be powered entirely through the PCI Express slot, which supplies a maximum of 75W.
    If you need further proof of this: for CrossFire setups the manufacturer cites a figure of 38A on the +12V rail. If one card required 30A, how could two cards require only 8A (96W) more than one? The same requirements are stated for the X1950XT (30A single and 38A CrossFire), so it's a highly generic target that attempts to account for an entire range of power supply rating methods and total system power demands.
    Here are total power consumption measurements of whole systems used in various X1650XT tests:
    Intel Core 2 Duo Extreme X6800 @ 2.93 GHz
    Corsair TWIN2X2048-8500C4
    Western Digital 150GB 10K Raptor
    Sound Blaster Audigy 2 Value
    AMD ATI Radeon X1650 XT
    Total system power under 3D load = 206 Watts MAX
    AMD Athlon 64 X2 4800+
    2GB Crucial PC-4000
    Western Digital 74GB 10K Raptor
    AMD ATI Radeon X1650 XT
    Total system power under 3D load = 192 Watts MAX (only 66W higher for 2 x X1650XT in CrossFire)
    Intel Core 2 Duo Extreme X6800 (2.93GHz/4MB)
    Seagate 7200.7 160GB SATA
    Corsair XMS2 DDR2-800 4-4-4-12 (1GB x 2)
    AMD ATI Radeon X1650 XT
    Total system power under 3D load = 182 Watts MAX
    Those are figures for TOTAL system power needs, not the graphics card itself.  Here are figures just for the X1650XT from X-bit Labs:
    55.2 Watts @ full load
    That's no more than 5A MAX from the +12V rail. Even assuming a 20% VRM loss, that gets us to around 6A MAX.
    As anyone can plainly see, the manufacturer's stated requirement of 30A is a hugely overinflated number: it is a worst-case figure for total system power, not a measurement of what the card itself draws.
    Given that your X1650XT could not possibly require more than 6A of +12V power, and that high-end systems using an overclocked Core 2 Extreme don't even consume more than around 250W TOTAL (all components combined, factoring in VRM loss), the evidence is overwhelming and beyond dispute that your previous power supply was more than adequate and that inadequate power was never the cause of your problem. It could have been defective, but that isn't the same as having inadequate power.
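    Redoing the reply's arithmetic makes the gap between the 30A rating and the card's real draw obvious; the 20% VRM loss is the assumption stated above, and 55.2 W is the X-bit Labs full-load figure quoted in the reply.

```python
# Reproduce the reply's arithmetic: 55.2 W measured at full load for
# the X1650XT, drawn from the +12 V rail; 20% VRM loss is an assumption.
card_power_w = 55.2
rail_voltage = 12.0
vrm_efficiency = 0.80  # assumed 20% conversion loss

amps_at_card = card_power_w / rail_voltage                      # ~4.6 A
amps_from_rail = card_power_w / vrm_efficiency / rail_voltage   # ~5.75 A
```

Either way the result is a small fraction of the quoted 30A, which is the reply's point.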
    Questions?

  • I can't find an ac power adaptor for my AirPort Extreme base station.

    I was given an AirPort Extreme base station, the square model, I just need an ac power adaptor. I can't seem to find one that will definitely fit. Help please?

    It is easier and better to give the model number from the base. The A1143 is the oldest AirPort Extreme with N wireless.
    http://en.wikipedia.org/wiki/AirPort_Extreme
    All the models from Gen 1 to Gen 5 use the same power supply: 12V 1.8A.
    But it can be replaced with any standard 12V supply of sufficient current rating.
    The genuine part is expensive, and it still doesn't include the Apple cord.
    http://www.ebay.com.au/itm/New-Genuine-A1202-12V-1-8A-Power-Supply-AC-ADAPTER-for-Apple-Airport-Base-/290923848575?pt=US_Power_Supplies&hash=item43bc69a37f&_uhb=1
    Buy a power supply rated at 12V 3A, 4A, or 5A; any of those ratings will work well.
    I generally buy 4A or 5A, as they tend to be a bit better, although if you check, they all seem identical to each other.
    Check your local market if you want one fast, or just do an eBay search for "12v 4a power supply".
    http://www.ebay.com/itm/I-MAG-Electronics-IM120EU-400D-Genuine-OEM-12V-4A-AC-Power-Supply-Adapter-/380641968538?pt=Laptop_Adapters_Chargers&hash=item58a007819a
    The power cord is probably a standard IEC computer cable.
    They're cheaper from HK, but if you want reasonable quality, pay around US$10; cheaper ones can be rubbish.
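
    The voltage-must-match, amps-are-headroom rule above is just W = V x A; a quick check (the 4A figure is the example rating from the reply):

```python
# Adapter wattage = volts x amps. Voltage must match the device exactly;
# a higher amp rating only adds headroom, since the device draws only
# the current it needs.
def watts(volts, amps):
    return volts * amps

original_supply = watts(12, 1.8)  # 21.6 W (the stock 12 V 1.8 A rating)
replacement = watts(12, 4.0)      # 48.0 W, comfortable headroom
```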

  • K7N2-L & Corsair XMS2700cas2: no post in dual DDR

    Hi,
    I have a small problem here, I am assembling a rig for a friend, same hardware (CPU 2400+, ram XMS Corsair, vga ATI 9x00Pro, etc) as mine.
    While mine never had any problem, this second rig (512MB of Corsair XMS PC2700 CAS2, 2 sticks of 256MB each) is driving me nuts because of the memory: no way to use both DDR channels.
    I tried using only one stick; it runs fine in any slot: DIMM 1, 2 & 3. I tried BOTH sticks in each DIMM; every time it runs OK.
    Two sticks run OK only when placed on the same channel (DIMM 2 & 3).
    There is NO WAY to have them running in DIMM 1 & 2 or 1 & 3. I swapped the sticks around during the trials; no difference.
    I can change the BIOS setting from SPD to Auto or Manual (and try different timings too); no difference, still no POST with dual channel.
    I tried changing memory: I have two sticks of XMS Corsair PC2400 CAS2, and they work fine even in dual channel; I can place them in ANY combination I want.
    If I try to run the XMS PC2700 CAS2 sticks on another mobo, they work fine.
    I also have a 256MB stick of A-Data PC2700 CAS2.5, and I tried that plus one XMS PC2700 CAS2 in dual DDR: they work fine! Normally they tell you NOT to mix brands to avoid problems; in this case mixing brands worked fine.
    What is even more interesting is that I CAN USE THE XMS PC2700 CAS2 IN DUAL CHANNEL IF I ADD THE THIRD STICK OF A-DATA IN DIMM 2 (XMS PC2700 in DIMM 1 & 3, a combination that NEVER worked, plus A-Data in DIMM 2).
    My friend of course refuses to mix RAM like that or to run the rig with 768MB (apart from the fact that he doesn't want to, we can't: as this PC is a gaming rig he's using Windows 98SE as the OS, hence the 512MB limit...).
    Another funny, strange symptom: in the BIOS I see the rows listed only as single rows; for instance, DIMM 2, which should be listed as populated rows 2 & 3, only shows row 2. The RAM count shows the correct amount, though, both in POST and on the BIOS page.
    Using the original BIOS v3.2, the Vdimm is shown as 2.6V; if I try to overvolt a little to 2.7V, hoping it helps the dual-channel configuration, it can't save the value: after rebooting it is back at 2.6V.
    I know this is a known issue of MSI bios, any workaround, apart from flashing one of the even more buggy newer bioses?
    thanks
    zip

    OK, first off, which board is this: a Delta-L or an original K7N2-L board?
    Second, what is the power supply rated at, and what are the numbers off the side of it? I am running the exact same RAM; mine are in slots 1 and 3 and work just fine. If you can put them in slots 2 and 3 and it boots, then it is in dual-channel mode, as slots 1 and 2 are one channel and slot 3 is the second channel on the board.
    It could be an underpower problem, as these RAM chips do use a lot of voltage.
    And as far as Win98 goes, you can run more than 512MB in it; you just need to make sure you use the 384MB registry tweak so it will use the whole amount of RAM.
    Answer these and it may be easier to help you.

  • Need Help with graphics card

    Today I was told to buy a GeForce 9800GT graphics card for my Compaq SR5000, which in turn meant I needed a bigger power supply. Long story short: I am having a problem. When I install the video card, my computer makes a really loud squeal. Nothing I do fixes it, but I know it is installed the right way. Do I have the wrong graphics card? Please, can anyone help me?

    Unfortunately, Compaq does not post the power supply rating for your unit on their website, but when you open the case, there should be a sticker on the power supply listing the unit's maximum wattage. An Nvidia 9800 GT requires at least a 400-watt power supply, so if yours is anything less than that, you will have problems.
    Other suggestions;
    The card should be inserted in the PCI Express slot.  Be sure it is fully seated in the slot.  If necessary, start the computer and enter the system BIOS settings.  You may need to disable the onboard video chipset before the system will "see" your new video card.
    As for the noise, make sure you haven't nudged any cables so that they are hitting the system case or processor cooling fans.
    Good luck!

  • Big problems help plz

    Since upgrading to an MSI K8N Neo Platinum nForce3 250 (Socket 754) and an A64 3000+ (the rest of my system is in my sig),
    I didn't think it was running that fast, and when I ran a benchmark with 3DMark 01 it scored 17700, and that was with the graphics card overclocked.
    I tried messing about with memory timings and overclocking the rig a bit, but was still far from impressed. Anyway,
    I bought a new heatsink/fan and a Raptor drive to use for my OS. After I had put it all together and switched on the computer, it just didn't boot. All I got was one longish beep, then a pause, followed by another, and so on.
    I tried reseating the CPU and heatsink, tried the memory in different slots, checked the graphics card, etc. Still nothing.
    Anyway, I went back to my nForce2 and Barton 2500@3200 with the same memory and graphics card I used on the A64, and it works fine. I benchmarked it with 3DMark 01 and got a score of about 16700, without the graphics card overclocked.
    So have I got a borked mobo or a borked chip? The new system should be a lot faster than a Barton 2500. Can anyone tell me what the beep code means on this mobo?
    Cheers in advance for any replies, Bram

    You can find BIOS beep codes at: http://bioscentral.com/beepcodes/awardbeep.htm
    It's not clear to me whether any of those apply to you. Are you hearing a repeating long beep, or is it long then short?
    Does your CPU fan appear to be running at its full speed? Did you make sure to clean off the old thermal compound fully? Is the new thermal compound applied properly? (don't use too much!)
    Have you tried clearing the CMOS? Unplug, and remove the battery for a minute or two. Also try to start up with nothing connected but the CPU, graphics card, and one stick of RAM. Before anyone else says it, as it seems to be the answer to end all answers for any problem in this forum: what is your power supply rated at? Not just total watts but each rail. For example, my PSU: 3.3 V 30 A, 5 V 38 A, 12V1 18 A, 12V2 15 A.
    Finally, you're not going to see much of an increase in 3DMark tests just because you're on a faster CPU; the score is almost completely determined by your graphics card. I have a 9800 PRO that tests about 2% faster going from 200 MHz to 220 MHz, for example, on my A64 3200+.
    Just some general ideas to try based on my recent experience: in the last month or so I've built three PCs, two for relatives and one for myself. I think I encountered about every problem you can have, but it was a great learning experience and a lot of fun!
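    To make the rail advice above concrete, here is a minimal sketch that turns per-rail amperage figures into wattage, using the example PSU numbers quoted in the reply (3.3 V @ 30 A, 5 V @ 38 A, 12V1 @ 18 A, 12V2 @ 15 A). The rail values are just that poster's example, not a recommendation:

```python
# Per-rail wattage check: power (W) = voltage (V) * current (A).
def rail_watts(volts: float, amps: float) -> float:
    """Return the rated power of one rail in watts."""
    return volts * amps

# Example rails from the post above (name -> (volts, amps)).
rails = {
    "3.3V": (3.3, 30),
    "5V":   (5.0, 38),
    "12V1": (12.0, 18),
    "12V2": (12.0, 15),
}

for name, (volts, amps) in rails.items():
    print(f"{name}: {rail_watts(volts, amps):.0f} W")

# The combined 12 V capacity (here 18 A + 15 A = 33 A, about 396 W)
# is usually what matters most for CPU and graphics card headroom,
# more so than the single total-wattage figure on the label.
```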
