DMA Limitation?

Hello!
I'm using a PCI-DIO-32HS in synchronous burst mode on a 533 MHz Pentium computer with 128 MB of memory.
I configured the output card so that the actual PCLK period is 100 ns, which means a 10 MHz data transfer. The sample size is 1,000,000 data points at 32 bits = 4 MB.
The result is as follows: every 100 microseconds there is a 1-2 microsecond delay on the ACK line, meaning that the data transfer is not continuous; the time is lost somewhere (DMA?).
The fastest continuous transfer I was able to get was 3 MHz.
How is it possible to reach 10 MHz without delays in the transfer? The PCI-DIO-32HS manual promises 19 MS/s continuous data transfer...
Thanks,
Sami Laitinen
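
A quick back-of-the-envelope check of those numbers (my own arithmetic, not from the manual): at 10 MHz with 32-bit words the card has to move 40 MB/s, and a stall every 100 microseconds means a stall every 1,000 words, i.e. every 4,000 bytes - roughly one 4 KiB memory page, which at least hints that the pauses line up with per-page DMA housekeeping rather than raw bus bandwidth. A minimal C sketch of the calculation:

    /* Back-of-the-envelope check of the figures in the post above; the
       4 KiB page comparison is my own observation, not from the manual. */
    #include <stdio.h>

    int main(void)
    {
        const double word_rate_hz   = 10e6;    /* 100 ns PCLK -> 10 MHz word rate */
        const double bytes_per_word = 4.0;     /* 32-bit samples                  */
        const double stall_period_s = 100e-6;  /* a pause roughly every 100 us    */

        double throughput = word_rate_hz * bytes_per_word;        /* bytes/s */
        double bytes_between_stalls = throughput * stall_period_s;

        printf("required throughput     : %.0f MB/s\n", throughput / 1e6);   /* 40   */
        printf("data moved per interval : %.0f bytes (~one 4 KiB page)\n",
               bytes_between_stalls);                                         /* 4000 */
        return 0;
    }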


Similar Messages

  • Using NI-IMAQ imgSnap (ANSI C) with Windows 7 x64

    Hello,
    I am trying to perform some basic image acquisition using ANSI C calls to NI-IMAQ (as distributed by the November 2009 release of NI Vision). I am developing on Windows 7 x64.
    Setup seems to work fine, but when I make the following call:
       void* theTempBuffer = NULL;
       Int32 theRetVal = imgSnap(mSession, &theTempBuffer);
    I get the following error:
    This 32-bit device is operating on a 64-bit OS with more than 3GB of physical memory. This configuration could allocate 64-bit memory which is unsupported by the device. To solve this problem, reduce the amount of physical memory in the system.
    I see this whether I compile to a 32-bit or 64-bit target. I would have concluded that the PCI-1422 simply isn't supported under Windows 7 x64, but the card (and attached camera) work fine for snapping or grabbing images when I go to the Measurement & Automation Explorer.
    Thanks for your help.

    Hi KKRand,
    Currently you are using a non-supported combination. Due to the hardware limitations of the 1422 board, it is limited to 32-bit DMA. Because IMAQ's C API lets users allocate their own buffers to be acquired into, we have no way to enforce that this memory is below the 32-bit boundary for *all* use cases of our C API.
    Because of this, we officially do not support using the IMAQ C API on a board with this 32-bit DMA limitation on a 64-bit system with more than 3 GB of memory, because we cannot guarantee that a user's application will run without any modifications. These limitations do not apply if you use an API like LabVIEW, where the user does not control the memory allocation directly, or if you have a newer board with 64-bit DMA support.
    Measurement & Automation Explorer can get around this limitation because it limits its use of our API to ensure that memory is allocated below the 32-bit boundary when you have an affected board/system combination. With possibly a few modifications to your app, you should be able to make it work as well, but as I mentioned it is not an officially supported combination.
    The modifications to your application are:
    - Allocate all image buffers that you will use for acquisition from that board by calling imgCreateBuffer(), as opposed to allocating them by other means (new, malloc, on the stack, etc.). Memory allocated in this manner must be released by calling imgDisposeBuffer().
    - Disable enforcement of checking this condition by calling a private function exported from imaq.dll as "int niimaquDisable32bitPhysMemLimitEnforcement(SESSION_ID boardid)"
    The first part is likely already satisfied in your case because the high-level imgSnap() allocates the memory for you and thus ensures that it meets our requirements. You'd have to add the second to your application. Keep in mind this is unsupported: we currently do not check that all buffers in your buffer list were allocated in this manner, so if you allocated a buffer by some other means you would get a run-time configuration failure if any memory ever fell past the 32-bit boundary when you start acquiring.
    This behavior might change in the future, but from a support standpoint it is unlikely that we'd officially support this configuration, since it is highly dependent on how the user's application is written (whether it relies on its own memory allocations). What we wanted to avoid was customers starting with simple usage of our API (like a high-level Snap()), finding it working, but then later allocating their own buffers and seeing a failure.
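    For what it's worth, a rough C sketch of the two modifications described above - treat it as illustrative only: the private export's name and signature are quoted from this post, and resolving it at run time with LoadLibrary/GetProcAddress (rather than linking it directly) is my own assumption:

        /* Unsupported workaround sketch, combining the two modifications above.
           Assumes niimaq.h provides SESSION_ID, Int32 and imgSnap(); the run-time
           lookup of the private export is an assumption, not NI guidance. */
        #include <windows.h>
        #include <stdio.h>
        #include <niimaq.h>

        typedef int (*DisableLimitFn)(SESSION_ID boardid);

        int snap_with_limit_disabled(SESSION_ID session, void **buffer)
        {
            /* Modification 2: disable the >3 GB / 64-bit OS enforcement check. */
            HMODULE imaq = LoadLibraryA("imaq.dll");
            if (imaq != NULL) {
                DisableLimitFn disable = (DisableLimitFn)
                    GetProcAddress(imaq, "niimaquDisable32bitPhysMemLimitEnforcement");
                if (disable != NULL)
                    disable(session);
            }

            /* Modification 1 is already satisfied when imgSnap() allocates the
               buffer for you (pass *buffer == NULL). If you manage buffers
               yourself, create them with imgCreateBuffer() and release them with
               imgDisposeBuffer() instead of new/malloc/stack allocation. */
            Int32 status = imgSnap(session, buffer);
            printf("imgSnap returned %ld\n", (long)status);
            return (int)status;
        }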
    Hope this helps, 
    Eric 

  • Inconsistent arrays with multiple buffered readings

    I am attempting to use a PCI-6601 (device 3) and/or an AT-MIO-16E2 (device 1) to take two measurements: one a buffered position (encoder) reading, the other a buffered event count. Both use the same external trigger in order to capture two data points for each event, indicating the time and location at which the event occurred. The two arrays that I am getting back from these buffered readings are not the same size, with one of the arrays being nearly 100 readings larger than the other.
    Attempting to take both readings on the 6601 card has not worked due to DMA limitations, but a different configuration I have not thought of may circumvent this.
    What is causing the discrepancy? The encoder readings seem to be correct, while the time typically contains extra readings.
    Attachments:
    112601.vi 496 KB

    When doing position measurements, a signal on the selected gate transfers that value to memory upon every pulse that comes into its gate. It looks as though this is the signal connected to PFI 38. When doing event counting, the counter transfers the count value to the internal buffer when a gate signal is received. It looks as though this is the signal connected to PFI 4.
    Are PFI 4 and PFI 38 the same signal? If so, it does appear that the two arrays should be the same size.
    I suggest setting some breakpoints in your program to look at the array after every read buffer.vi to see if the correct values are there. This may give you some more insight into what is causing the error.
    Also, are the extra time readings interleaved with correct ones or just appended to the end?
    Brian

  • FPGA memory limitation

    Hi...
    I have tried to use more than 2 MB of memory (using memory blocks) on the FPGA. During compilation the Xilinx tool chain generated error "AR #21955", which is explained as a memory limitation on the Xilinx website. Can't we create memory blocks of more than 2 MB? Or should the total size of all memory blocks in a VI not exceed 2 MB? I need to create arbitrary waveforms with milli/microsecond precision lasting some minutes, so I need a large number of points to get the required waveform. Is there any way to achieve it? When I transferred each point from an RT VI (not on the FPGA), it took too long (transferring data from the RT VI to the FPGA VI on every iteration).
    Thank you,
    Cheers,
    Ravinder

    The size of the memory blocks you can allocate in the project and use in your FPGA VI is limited by the size of the memory available on your FPGA device. This memory is in the range of 80-192 kB depending on which FPGA device you are using. Please check the data sheet or manual for your device for details.
    http://sine.ni.com/manuals/
    So unfortunately you will not be able to store the complete waveform on the FPGA device.
    Your best option is to stream the waveform from host memory to the FPGA device using DMA and generate the data from the DMA FIFO in the FPGA VI.  
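    To put numbers on that (my own arithmetic; the one-minute duration, microsecond resolution and 4 bytes per point are assumptions based on the question, and 192 kB is the upper end of the range quoted above):

        /* Rough sizing check showing why the waveform cannot be stored on the FPGA. */
        #include <stdio.h>

        int main(void)
        {
            const double duration_s   = 60.0;   /* "some minutes" -> assume 1 minute     */
            const double resolution_s = 1e-6;   /* microsecond precision                 */
            const double bytes_per_pt = 4.0;    /* assume one 32-bit sample per point    */
            const double fpga_ram_b   = 192e3;  /* largest on-chip memory quoted: 192 kB */

            double points = duration_s / resolution_s;   /* 60,000,000 points */
            double bytes  = points * bytes_per_pt;       /* 240 MB            */

            printf("waveform size : %.0f points = %.0f MB\n", points, bytes / 1e6);
            printf("vs. FPGA RAM  : about %.0fx larger\n", bytes / fpga_ram_b);  /* ~1250x */
            return 0;
        }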
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments

  • FPGA DMA Size Allocation

    Hi all,
    My application involves grabbing images from a 3-tap, 16-bit camera using FlexRIO. The PXI controller I am using is Windows-based, and the FlexRIO module I have is a PXI-7954 + NI 1483 adapter. The image I am grabbing is 2560 x 2160, U16, and the clock is 100 MHz. I've been trying for over a week and, up to today, I still am not able to get the image from the camera; I keep getting the DMA Write Timeout error. Right now, the DMA size in the FPGA is set at 130k, but whenever I try to increase this further, I get a compilation error. I've tried to have the host program grab 100k data points from the FPGA DMA every millisecond, but it seems I am capped at about 10-15 ms. Perhaps Windows has its own limitation...
    Attached is the program that I am using, modified from the LabVIEW shipped example. Please advise: how do I move forward from here? Or is it possible to increase the DMA buffer size to 10x higher than the current limit?
    Attachments:
    1-Tap 10-Bit Camera with Frame Trigger.zip 1684 KB
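    As a sanity check on those numbers (my arithmetic, using the frame size and the 100k-elements-per-read / 10-15 ms figures from the question), the host side cannot drain the FIFO anywhere near fast enough at that read rate, which is one plausible explanation for the DMA Write Timeout:

        /* Rate check using the figures quoted in the question above. */
        #include <stdio.h>

        int main(void)
        {
            const double width  = 2560.0, height = 2160.0;     /* U16 pixels per frame        */
            const double elements_per_frame = width * height;  /* 5,529,600 elements          */
            const double elements_per_read  = 100e3;           /* host read size              */
            const double seconds_per_read   = 12.5e-3;         /* observed 10-15 ms, midpoint */

            double reads_per_frame   = elements_per_frame / elements_per_read; /* ~55    */
            double seconds_per_frame = reads_per_frame * seconds_per_read;     /* ~0.7 s */

            printf("host reads per frame      : %.0f\n", reads_per_frame);
            printf("time to drain one frame   : %.2f s\n", seconds_per_frame);
            printf("effective host throughput : %.1f M elements/s\n",
                   elements_per_read / seconds_per_read / 1e6);                /* 8 M/s  */
            return 0;
        }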

    Hi Shazlan
    Apologies for taking so long to reply to you.
    You are correct in saying that the latest driver is IMAQ 4.6.4 and this can be downloaded from our website if you have not done so already.
    If you have already installed the IMAQ 4.6.4 driver, has this managed to resolve your issue?
    Also, have you tried to run the compilation again and obtained a report outlining the problems?
    As a side note - I have been looking into the possibility of downloading some sort of driver for the Samos camera you are using from Andor. While National Instruments have not created a driver for this device, Andor do have a Software Development Kit (SDK) which they say works with LabVIEW. You may find it useful to have this so that you no longer have to write the driver yourself. This may then save resources on the FPGA.
    Keep me updated on your progress and I will continue to look into this issue for you.
    Regards
    Marshall B
    Applications Engineer
    National Instruments UK & Ireland

  • Finite Measure with 2 boards 6602 and 6 DMA Channel Counter Buffered - error 200141

    Hi all,
    First of all, I'm a beginner with LabVIEW and I hope to explain myself well, since I'm Italian.
    I read all the posts about error 200141 and checked the suggested solutions (including the way to ignore the error), but I want to ask about a different one.
    What I'm trying to do is acquire 6 encoders, 3 on the first PCI 6602 board and 3 on the second board, using DMA channels.
    The encoders generate 90000 x 4 = 360000 pulses per revolution and the maximum rotation speed is 4 RPS.
    Because I need to store all the pulses from the encoders in a binary file, I generate a 1.5 MHz trigger to get all the samples at the maximum system speed (360000 * 4 = 1440000 pulses per second).
    I think I have reached the limit and maybe it's not possible to do better. Currently the two PCI 6602 boards work with a 1.0 MHz trigger and the system stores 4,000,000 samples in 6 files during the finite measurement of the angular position on 6 channels.
    The trigger is not yet synchronized between the two PCI boards because I'm waiting for an RTSI cable to put in the PC...
    In your opinion, is it possible to find an alternative way to acquire these encoders?
    Thanks

    I also doubt if you need to capture every single increment from each encoder.  I'll discuss this more below.
    Further, many earlier discussions suggest that counter tasks can sustain data transfers merely in the 100's of kHz, with *maybe* a possibility under special circumstances to slightly exceed 1 MHz.  Your boards have very small hardware buffers (either 1 or 2 samples worth), causing the PCI bus usage to be very frequent and therefore less efficient.
    Now, let's go back to your sample rate.   You've got encoders which suggests that you're dealing with a physical system.  Physical systems have inertia, which limits their useful bandwidth.  In my experience, it's quite unusual to care about motion artifacts beyond the 10's of kHz.  The inertia just doesn't allow anything significant to happen at that rate.
    So, if the physical bandwidth of your motion system is, say, 5 kHz, there's a rule of thumb suggesting to measure at 10x when possible.  So that'd mean 50 kHz sampling.  50 kHz x 6 channels on the PCI bus may be possible.  Multi-MHz sampling won't be.
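    To put rough numbers on that reasoning (the 5 kHz bandwidth is only the illustrative figure from the paragraph above, not a property of your machine):

        /* Worked numbers for the rule of thumb above. */
        #include <stdio.h>

        int main(void)
        {
            const double counts_per_rev = 90000.0 * 4.0;   /* 90000 lines, X4 decoding     */
            const double rev_per_sec    = 4.0;             /* max rotation speed, 4 RPS    */
            const double bandwidth_hz   = 5e3;             /* assumed mechanical bandwidth */
            const double oversample     = 10.0;            /* "measure at 10x" rule        */
            const int    channels       = 6;

            double raw_count_rate = counts_per_rev * rev_per_sec;   /* 1.44 MHz  */
            double per_channel    = bandwidth_hz * oversample;      /* 50 kS/s   */
            double aggregate      = per_channel * channels;         /* 300 kS/s  */

            printf("raw count rate per encoder : %.2f MHz\n", raw_count_rate / 1e6);
            printf("suggested sample rate      : %.0f kS/s per channel\n", per_channel / 1e3);
            printf("aggregate over the PCI bus : %.0f kS/s (vs. multi-MHz requested)\n",
                   aggregate / 1e3);
            return 0;
        }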
    Can you describe the physical system a bit?
    -Kevin P.

  • Kernel ata_piix DMA issues [solved]

    This is the first time such a message has appeared (I think after a kernel update); something is wrong, because there are about 15 such messages (with different "want" values) on each and every boot.
    Feb 3 10:47:23 localhost attempt to access beyond end of device
    Feb 3 10:47:23 localhost sdb: rw=0, want=40017915, limit=39062500
    But that's not too bad, because it doesn't even add a second to the boot time, and the performance hasn't noticeably changed.
    But then, after upgrading to 2.6.20, I got this somewhat related message:
    Feb 6 21:55:19 localhost scsi2 : ata_piix
    Feb 6 21:55:19 localhost ata2.00: ATAPI, max UDMA/33
    Feb 6 21:55:19 localhost ata2.01: ATAPI, max UDMA/33
    Feb 6 21:55:19 localhost ata2.01: device is on DMA blacklist, disabling DMA
    Feb 6 21:55:19 localhost ata2.00: qc timeout (cmd 0xef)
    Feb 6 21:55:19 localhost ata2.00: failed to set xfermode (err_mask=0x4)
    Feb 6 21:55:19 localhost ata2.00: limiting speed to UDMA/25
    Feb 6 21:55:19 localhost ata2: failed to recover some devices, retrying in 5 secs
    Feb 6 21:55:19 localhost ata2.01: device is on DMA blacklist, disabling DMA
    Feb 6 21:55:19 localhost ata2.00: qc timeout (cmd 0xef)
    Feb 6 21:55:19 localhost ata2.00: failed to set xfermode (err_mask=0x4)
    Feb 6 21:55:19 localhost ata2.00: limiting speed to PIO0
    Feb 6 21:55:19 localhost ata2: failed to recover some devices, retrying in 5 secs
    Feb 6 21:55:19 localhost ata2.01: device is on DMA blacklist, disabling DMA
    Feb 6 21:55:19 localhost ata2.00: qc timeout (cmd 0xef)
    Feb 6 21:55:19 localhost ata2.00: failed to set xfermode (err_mask=0x4)
    Feb 6 21:55:19 localhost ata2.00: disabled
    Feb 6 21:55:19 localhost ata2: failed to recover some devices, retrying in 5 secs
    Feb 6 21:55:19 localhost ata2.01: device is on DMA blacklist, disabling DMA
    Feb 6 21:55:19 localhost ata2.01: failed to set xfermode (err_mask=0x40)
    Feb 6 21:55:19 localhost ata2.01: limiting speed to PIO3
    Feb 6 21:55:19 localhost ata2: failed to recover some devices, retrying in 5 secs
    Feb 6 21:55:19 localhost ata2.01: device is on DMA blacklist, disabling DMA
    Feb 6 21:55:19 localhost ata2.01: configured for PIO3
    It clearly adds a lot of time to boot up, so if DMA really isn't supported for my Intel 815 chipset, what do I tell the kernel so that it doesn't waste time checking for DMA support?
    These two symptoms seem to be related somehow, I think.
    This is all from my second hdd, which doesn't get stressed much, so I wouldn't notice a slowdown if there were one.

    Hi guys,
    I haven't updated Arch for a long while because the last time I tried I had similar sorts of problems to the above. This also happened with other distros I use with the new kernel.
    If I put a CD in the CD-ROM drive, the problem went away and it would boot fine (it also worked if I used the ide-legacy option), but forgetting to put a CD in the drive meant a long wait, so I wanted to find a fix for it.
    So here I am, and after a few months' wait I thought there might be some changes and somebody might have fixed it. A simple solution I found was to simply add the word irqpoll to the append line in lilo; the slow boot goes away (well, it did for me anyway) and problem fixed (or at least I don't have to worry about putting a CD in the drive before boot, lol). See the sketch just after this post.
    So now I'm back and using an up-to-date Arch once again
    Hope this helps
    Kane
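    For reference, the change Kane describes would sit in the kernel image section of /etc/lilo.conf and look roughly like this (the image, label and root entries below are placeholders, not taken from the post; only the append="irqpoll" line is the actual fix):

        image=/boot/vmlinuz26
            label=arch
            root=/dev/sda1
            append="irqpoll"
            read-only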

  • X-series DMA -- single chunky link for finite SGL -- possible?

    ok, i guess i stumped you guys on the previous question regarding aout fifo width.  here is another one.
    can i have a ChInCh SGL program that has a single chunky link, with the done flag set?  
    i compose the SGL myself (not using the DDK chunky link classes).   I have LinkChainRing and ReuseLinkRing SGL working fine.  However, if I want to run a finite SGL with one chunky link, I hang in tCHInChDMAChannelController::start waiting for the link ready bit.
    so there appears to be an issue with starting a finite DMA SGL on a chunky link that has the Done bit set -- is that right?  Is there a way around this?
    thanks,
    --spg
    scott gillespie
    applied brain, inc.

    Steven T --
    Ok, thanks for verifying that.
    >> Is there a reason why you must use one page descriptor in the chunky link?
    Not necessarily, however since I am constructing my own SGL's, I do need to know exactly what I can and can't do.  So when I see behavior like this, I first want to understand if I am doing something wrong, then determine a workaround if it is a hardware limitation.
    As I am writing a driver that supports several different clients, I need to provide a generalized interface that can handle any request.  For example, I need to know that if one of my clients requests a single byte transfer, the driver has to fail gracefully (or implement the request without using DMA), and not hang :-)
    Having you verify this limitation (if it is that) is extremely useful to me, since I can now deploy the workaround (add an extra transfer for any single link chunky, use a direct write or FIFO preload for any single byte transfer) and not continue to wonder if I have missed some other essential register setting or flag.
    Thanks again, and if you do find out anything more, let me know.
    cheers,
    spg
    scott gillespie
    applied brain, inc.

  • Number of DMA Channel on System

    I need to log 16 (maybe 32 later) simultaneous channels at the same time.
    Is the limit on DMA channels a Windows system problem or just a card problem?
    I understand that there is a limited number depending on the operating system.
    So I cannot use 4 6601E cards (4 channels each) at the same time on a PC.
    I am told by NI that I could use a PXI rack with a number of 6115 cards and an embedded controller running Windows, but I cannot see how this would resolve the DMA problem unless it was a modified Windows system.
    The other problem I see is that if Windows 2000 or XP configures the cards to use the same channel, Windows will not let you change anything yourself, as all settings are always grayed out.
    Judging by the large number of posts on DMA, perhaps NI should write a white paper detailing all the problems.
    Thanks in advance for your help.
    Colin

    Hello Colin,
    Back with the ISA boards, the limit on DMA channels was due to the computer itself. Now with PCI, DMA is handled by the PCI board itself. There has to be separate DMA controller hardware for each DMA channel. Our controller chip supports up to 3 DMA channels. So I wouldn't say that this is a problem really; it's simply what the card offers, similar to the number of counters on the board.
    There are fewer DMA channels than counters because 1) more DMA controllers would be more expensive, and 2) most applications probably would not require DMA channels for every counter. Interrupts may be sufficient for slower acquisitions, or the counters may be used for pulse generation, which also wouldn't require DMA.
    I guess I don't know enough about your application to comment on the 6115 suggestion. The 6115 is an analog simultaneous-sampling board, but they only have 2 counters each. Perhaps it was suggested that you acquire this data with analog sampling. The advantage to this is that you can use the same DMA channel for all analog channels being scanned, which would probably be a good alternative for your application.
    Russell
    Applications Engineer
    National Instruments
    http://www.ni.com/support

  • DMA transfer rate for PCI-6602 counter/timer

    I'm strongly interested in raising the DMA transfer rate between the PCI-6602 counter and the computer. At the moment, I've got a Pentium 4 2.4 GHz operating under Win98. I have to move an 80-megaword array at ~5 MHz. So far, I've been able to reach just 2 MHz. Would it be possible? What is the bottleneck here - the soft- or hardware?
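
    A quick check of what that request implies (my own arithmetic; treating each counter word as 32 bits is an assumption):

        /* What the requested and achieved rates mean in bytes and total time;
           assumes 32-bit counter words. */
        #include <stdio.h>

        int main(void)
        {
            const double words          = 80e6;   /* 80 megawords                  */
            const double bytes_per_word = 4.0;    /* assumed 32-bit counter values */
            const double wanted_hz      = 5e6;    /* target transfer rate          */
            const double achieved_hz    = 2e6;    /* rate reached so far           */

            printf("bandwidth wanted   : %.0f MB/s\n", wanted_hz * bytes_per_word / 1e6);   /* 20 */
            printf("bandwidth achieved : %.0f MB/s\n", achieved_hz * bytes_per_word / 1e6); /* 8  */
            printf("total transfer time: %.0f s at 5 MHz, %.0f s at 2 MHz\n",
                   words / wanted_hz, words / achieved_hz);                                  /* 16 / 40 */
            return 0;
        }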

    Hello,
    I think the bottleneck you are seeing here is a limitation of the DMA transfer capabilities that is dependent on the bus of your PC and not your 6602 card. Here is a link to a KnowledgeBase article you could try to see if it improves your transfer rates. I still doubt you will be able to achieve approximately 5 MHz.
    http://ae.natinst.com/operations/ae/public.nsf/fca7838c4500dc10862567a100753500/1b64310fae9007c086256a1d006d9bbf?OpenDocument
    Regards,
    Steven B.

  • Axi DMA correct parameters

    I'm making my design with Vivado HLS and Vivado, and I'm doing some somewhat big transfers between DDR and my custom IP block and vice versa.
    Each transfer from DDR to custom IP is of 256x256x4=262144 bytes and it happens 4 times.
    My MM2S (Memory Mapped to Stream) speed is at 350 MB/s and my S2MM is at 200 MB/s.
    I know I can get better speeds, and I guess these slow ones are related to the parameters of the AXI DMA block.
    That's what I came here to ask you: help me understand what the correct parameters should be, since I still can't understand it from reading the LogiCORE product guide.
    Width of buffer length register
    From what I understand, this is the maximum length of the transfer in bytes, as 2^n. So in my case, since 2^18 = 262144, shall I put 18 here?
    Memory Map Data Width
    Data width in bits of the AXI MM2S Memory Map Read data bus. I have no idea here. My words have 32 bits and I defined the input stream of my block to have a width of 32 bits, but what is this?
    Stream Data Width
    I guess here I should put 32 correct?
    Max Burst Size
    Burst partition granularity setting. This setting specifies the maximum size of the burst cycles on the AXI4-Memory Map side of MM2S. Valid values are 2, 4, 8, 16, 32, 64, 128, and 256.
    Again, I have no idea what to put here.
    I could do a trial-and-error approach and change parameters until I find the best ones, but the problem is that each re-synthesis and re-implementation in Vivado takes a lot of time...

    Hi,
    The meaning of those parameters is rather simple; however, it may be difficult to understand from the documentation:
    Width of buffer length register - Length of the internal counter/register in the DMA which stores the length of the DMA operation data. In your case it should be at least 19 bits, as an 18-bit register gives a maximum length of 2^18 - 1 = 262143 bytes, which is lower than the requested length of 262144. This parameter does not directly impact performance; it mainly impacts the maximum achievable frequency and has a slight impact on utilized FPGA resources.
    Memory Map Data Width - This specifies the data width of the AXI4 memory-mapped interface. A data width of 64 bits, when connected to an HP or ACP port, can significantly improve throughput (when connected to a 64-bit AXI Stream). This parameter is independent of the AXI Stream data width. With a 32-bit AXI Stream the improvement will be insignificant in your case. Setting this parameter to 64 has an impact on utilized FPGA resources.
    Stream Data Width - Width of the AXI Stream interface. Set 32 here if your component has a 32-bit AXI Stream interface.
    Max Burst Size - Data on the AXI interface can be transferred in bursts - N words can be transferred in one transaction on AXI. A higher burst size leads to better throughput. This parameter should be set to at least 16. You can set it to 256 (the maximum allowed by AXI4); however, the speedup over 16 will be small. This is caused by the fact that the PS AXI interfaces are AXI3 compliant, which limits the burst size to 16, so the AXI interconnect must split the AXI4 burst into several AXI3 bursts.
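    A small check of the register-width arithmetic above, using the transfer size from the question (262144 bytes):

        /* Minimum "Width of buffer length register" for a given transfer size:
           an n-bit register can describe at most 2^n - 1 bytes per transfer. */
        #include <stdio.h>

        int main(void)
        {
            const unsigned long transfer_bytes = 256UL * 256UL * 4UL;  /* 262144 bytes */

            unsigned width = 0;
            while (((1UL << width) - 1UL) < transfer_bytes)  /* smallest n with 2^n - 1 >= size */
                ++width;

            printf("transfer size         : %lu bytes\n", transfer_bytes);
            printf("18-bit register limit : %lu bytes\n", (1UL << 18) - 1UL);  /* 262143 */
            printf("minimum width needed  : %u bits\n", width);                /* 19     */
            return 0;
        }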
    To improve throughput you can use:
    Higher design clock frequency.
    Scatter-Gather DMA to improve throughput by hiding latencies of direct register mode by utilization of multiple buffer descriptors.
    64 bit AXI Stream interface in your component.
                              VK

  • Ultra DMA CRC Error Count

    I have a SATA 6 drive which is being downgraded from Ultra DMA by Windows 7.  It used to handle about 130MB/s, now it's being limited to 25MB/s.  This is apparently due to the Ultra DMA CRC Error Count going to 9 from a bad connection.  I've
    replaced the connection, but cannot find any way to make Windows 7 retry the drive at the higher data rate.
    With Windows XP there was a registry key (ResetErrorCountersOnSuccess) that you could add to make it re-detect, but this doesn't appear to work with Windows 7.  In fact all the solutions I can find online only seem to work with XP.  Is there some
    way to fix this without re-initializing the drive?  I really don't relish the idea of transferring 4TB of data at 25MB/s (40+ hours!) just to reset the drive.

    Hi,
    Have you tried to manually turn DMA on through device manager?
    Please try the following steps:
    1. Open Device Manager by clicking the Start button, clicking Control Panel, clicking System and Security, and then, under System, clicking Device Manager. If you're prompted for an administrator password or confirmation, type the password or provide confirmation.
    2. Double-click IDE ATA/ATAPI controllers.
    3. Under IDE ATA/ATAPI controllers, for each item that has the word Channel as part of its label, right-click the item, and then click Properties.
    4. Click the Advanced Settings tab. Under Device Properties, select or clear the Enable DMA check box, and then click OK.
    More information, please see:
    Turn Direct Memory Access (DMA) on or off
    Best regards
    Michael Shao
    TechNet Community Support

  • Memory limitation on T61

    I have the following Lenovo notebook :
    Product: ThinkPad T61 8898-55G
    Operating system: All
    Original description: T7100(1.8GHz), 1GB RAM, 120GB 5400rpm HD, 14.1in 1024x768 LCD, Intel X3100, CDRW/DVDRW, Intel 802.11abg wireless, Bluetooth, Modem, 1Gb Ethernet, UltraNav, Secure chip, Fingerprint reader, 4c Li-Ion, WinVista Business 32
    I configured my notebook with a dual boot of Windows XP Pro SP3 and Windows Vista Business SP1 (32-bit).
    I have 2x 512 MB PC2-5300 667 MHz DDR2 memory inside.
    I read on the Lenovo site :
     Memory Compatibility
    (**) Windows Vista supports up to 4GB maximum memory (32-bit versions of Windows Vista cannot support 4GB). Windows XP supports up to 3GB maximum memory
    I would like to upgrade my memory with
    2x 40Y7734 1 GB PC2-5300 667 MHz DDR2
    or
    2x 40Y7735 2 GB PC2-5300 667 MHz DDR2
    I would like to know, if I choose 2x 2 GB, whether:
    - Windows XP SP3 will work just fine, only using 3 GB instead of the 4 GB of installed memory
    - Windows Vista SP1 (32-bit) will work stably with the 4 GB (even though it is not supported)
    Because the prices of 1 GB and 2 GB modules do not differ much nowadays, I prefer to use 2x 2 GB modules, but I would like to know whether it will work stably (even with the memory limitation) on XP or on Vista, or whether it is better/safer to use 2x 1 GB modules.
    Can anybody help, explain, or advise me?
    Electronic

    You're better off with 2x2GB, you may decide to run a 64-bit OS someday...and you will notice a difference between 2GB and 3GB when running a 32-bit OS.
    Cheers,
    George
    In daily use: R60F, R500F, T61, T410
    Collecting dust: T60
    Enjoying retirement: A31p, T42p,
    Non-ThinkPads: Panasonic CF-31 & CF-52, HP 8760W

  • Report for Qty Contract and Value Contract with PO release exceeding limits

    Hi All,
    Is there a standard report in SAP that users can use to view quantity and value contracts that have exceeded their quantity (in the case of quantity contracts) or value (in the case of value contracts)?
    Thanks in advance!

    hi Duke,
    Thinking about it logically, there is no report for this. That may be because you enter the quantity or value limits in the contract document itself, so when you create a PO against the contract and the quantity or value is exceeded, the system automatically issues a message stating that the quantity or value has been exceeded.
    So there is no report for this.
    Hope it helps.
    Regards
    Priyanka.P

  • IPhone limited to 130 apps at a time! 6,200 apps available in app store

    The iPhone is limited to nine app screens. I noticed this when I tried to move on to a tenth screen and it would not let me. Also, when you download an app after all 9 screens are full, the app does not show up, but the App Store says it is installed. Apple needs to expand the number of screens allowed. You then end up having to delete 2 apps and download 1 app to make the invisible app appear (you have to try this out for yourself to get a better idea of what I mean). So it looks like we are limited to 144 apps per iPhone; subtract the 14 apps that come by default on the screen (not including the 4 that are on the bar below) and you are left with 130 apps that can be downloaded and used at a time per iPhone. That *****. There are over 6,200 apps available in the App Store as I type this, and Apple is limiting me to carrying only 130 at a time. There is something wrong with this and I think something should be done!

    No, actually what that means is that most people like myself have to install 30 apps just to get the iPhone to do half the things it should have done out of the box. Sure, some are wants, but most are "needs" in order for it to do the things my old Palm Treo 600 could do, and still there's no copy and paste and no video. On top of that they have a stupid 130 limit. I'd love to hear why that is.
