DDS Compiler (Xilinx Coregen IP) maximum achievable clock rate

Hi,
I am using LabVIEW FPGA 2011 and a FlexRIO 7965R (i.e. Virtex-5 SX95T). I have compiled a sinusoid generator using the built-in Xilinx Coregen IP named 'DDS Compiler'. The output of the DDS Compiler is sent to the host VI using a DMA FIFO. My SCTL runs at 346 MHz, as the largest clock that can be provided is 346.666 MHz. According to the DDS Compiler data sheet (DS558), the core with the configuration settings I have used should run at 450 MHz (column 4 of Table 8 on page 28). But my code gives a maximum achievable clock rate of 350.63 MHz. I have attached my code and images of the compilation results. Can somebody check and tell me why I am not getting the 450 MHz rate? Is there any limitation on the clock rate due to the VI-scoped or DMA FIFOs?
Thanks
Attachments:
DDS Compiler.zip 858 KB
images.zip 53 KB

Hi Sandee, 
What are you planning on doing with this sine wave? Are you using a FlexRIO Adapter Module (FAM)?
The reason you can't get anything higher than 346.666 MHz is that the derived clock is generated from the onboard 40 MHz clock. Once you multiply it up that high, the accuracy of the frequency is no longer reliable.
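To picture that limit: derived clocks are built by multiplying and dividing the 40 MHz onboard clock by integer factors, so only certain frequencies are reachable. Here is a rough Python sketch of that idea (the factor ranges are illustrative assumptions, not NI's specification):

    BASE_HZ = 40e6  # onboard base clock

    def nearby_derived_clocks(target_hz, max_mult=32, max_div=32, tol_hz=1e6):
        """List frequencies of the form 40 MHz * m / d close to a target."""
        hits = set()
        for m in range(2, max_mult + 1):
            for d in range(1, max_div + 1):
                f = BASE_HZ * m / d
                if abs(f - target_hz) <= tol_hz:
                    hits.add(round(f))
        return sorted(hits)

    # 346.666 MHz is reachable as 40 MHz * 26 / 3; within these assumed
    # factor limits there is no ratio that lands on 450 MHz.
    print(nearby_derived_clocks(346.666e6))  # [346666667]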
The beauty of the FlexRIO is that you are able to bring in external clocks with several of our FAMs. These external clocks are piped directly to the FPGA. On top of that, if you're using the 6587 FAM, for example, you're able to generate up to a 500 MHz clock on the FAM and provide that to the FPGA.
To do that, you first add your adapter module to the project.
Right-click on your FPGA target -> select New FPGA Base Clock -> IO Module Clock 0 from the drop-down.
Then specify your frequency to be 450 or 500 MHz, whatever you want it to be.
Then on your block diagram, you'll provide your single-cycle Timed Loop with IO Module Clock 0 as the source.
I'm going to compile it now and I'll update with the results. 
National Instruments
FlexRIO & R-Series Product Support Engineer

Similar Messages

  • Verify my maximum achievable sample rate

    I am using a PXI-6031E and an SCXI-1001 chassis populated with six SCXI-1102C modules. I am set up to multiplex 192 channels to one DAQ channel (0). I am using every channel on every module. The 1102C is capable of 333 kS/s, but using all 32 channels my max should be a little more than 10 kHz per channel. My PXI-6031E is capable of 100 kS/s, but now I am limited by the module speed, so the best I can get is around 10 kHz. Is this correct?

    I have a problem that's kinda similar.
    We are using a PXI 6031E in a PXI 1036 housing connected to a computer with a PXI 8360 MXI card.
    I was trying to measure 17 channels at 10 kHz, but it just didn't work. According to the datasheet, the max sample rate of the 6031E is 100 kHz.
    Then I figured that maybe the sum across all used channels is limited to 100 kHz.
    The maximum sample rate that works is 7262 Hz.
    But 7262 * 17 is about 123,000, which is more than 100,000.
    Could someone explain what's behind that? (See the quick arithmetic sketch below.)
    Attachments:
    highsamplerate.PNG 24 KB
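    For reference, here is the aggregate-rate arithmetic behind both posts as a quick Python sketch; the figures come from the posts above, and note that the measured 7262 Hz actually exceeds the computed per-channel ceiling, which is exactly the open question:

        # Multiplexed DAQ shares one ADC, so the per-channel ceiling is
        # (aggregate rate) / (number of channels). Figures from the posts above.
        def per_channel_max(aggregate_hz, n_channels):
            return aggregate_hz / n_channels

        print(per_channel_max(333_000, 32))  # SCXI-1102C: ~10.4 kHz per channel
        print(per_channel_max(100_000, 17))  # PXI-6031E, 17 channels: ~5.9 kHz
        # The poster measured 7262 Hz * 17 ~ 123 kS/s, above the 100 kS/s
        # aggregate spec -- that discrepancy is the unanswered question here.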

  • FFT Express VI: maximum possible clock speed

    I'm using the FFT Express VI in an FPGA VI running on a PXIe-7962R. The maximum clock speed I can use without getting timing errors on compilation is 125 MHz. Is this about right for this VI? I see that the Xilinx LogiCORE FFT has much faster max clock speeds listed (above 300 MHz for most configurations). Would I be able to achieve a significant speed increase by switching to the Xilinx IP module, or are there other timing considerations specific to LabVIEW FPGA that would slow it down?

    Hi Bob,
    125 MHz does sound about right for the FFT Express VI. You may be able to increase that a bit by adjusting the Clock rate, Throughput, and Length parameters, but you definitely won't be able to get it up to 300 MHz as you mentioned. Based on the Xilinx LogiCORE FFT documentation, it does look like that could be a faster solution. Have you tried compiling with that method? What clock rates were you able to compile at? Also, what rate are you trying to reach?
    Thanks,
    Morgan Sweatt
    Applications Engineer
    National Instruments

  • IP Core Generator DDS compiler

    Hi,
    I am trying to use the DDS Compiler to generate a sinusoid signal. However, I am getting errors after I try to generate the IP core.
    The only parameters I am specifying/changing are the output width (under hardware parameters), which I am setting to 16 bits, and the phase angle increment value, which I am setting to binary 100001 (decimal 33) so that my output frequency will be almost 50 kHz (see the tuning-equation sketch below). I am also unticking the phase out, as I won't need it. Also, I kept the phase width at its default value of 16 bits. After I click on Generate I get the following errors:
    ERROR:sim - Failed to generate 'sinusoidcore'. 
    ERROR:sim - "C:/Users/student/Xilinx
       projects/sinusoid16/ipcore_dir/tmp/./_cg/_dbg/./dds_compiler_v4_0/sin_cos.vhd
       " line 228: Real operand is not supported in this context.
    ERROR:sim -  Process will terminate. For technical support on this issue, please
       open a WebCase with this project attached at http://www.xilinx.com/support. 
    ERROR:sim - Failed executing Tcl generator.
    ERROR:sim - Failed to generate 'sinusoidcore'.  Failed executing Tcl generator.
    I tried opening the .vhd file named in the second error, but it seems to be encrypted (because it is intellectual property, I guess).
    Any thoughts on how to solve this? (I attached the .vhd file that the error pointed to.)
    Thank You,
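    For context, the standard DDS tuning equation maps that phase increment to an output frequency. A minimal Python check, assuming a 100 MHz core clock (the poster's actual clock rate is not stated):

        # DDS tuning equation: f_out = phase_increment * f_clk / 2**phase_width
        f_clk = 100e6               # assumed clock, for illustration only
        phase_width = 16            # phase accumulator width from the post
        phase_increment = 0b100001  # decimal 33, as in the post

        f_out = phase_increment * f_clk / 2**phase_width
        print(f_out)  # ~50354 Hz, i.e. "almost 50 kHz"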

    Unfortunately, this is a known issue for certain specific configurations of this version of the core.
    Your options:
    - Use another version of the core.
    - Generate the core in ISE 13.4 and bring it forward.
    - Use an output width of 15 bits instead of 16.

  • Problem with 'maximum achievable speed'

    Hello,
    I'm hoping you'll be able to offer me some helpful advice.
    I'm an Orange home broadband customer, which I believe is provided through yourselves. When I signed up with Orange they stated I could expect speeds of up to 20 Mb. I live about as close to the exchange as you could get, really. On my online account it states: "The speed estimate at registration or the speed we have measured on your line is 14.0Mb".
    For the first couple of months I was regularly getting speeds of 8-10 Mb. To be honest, I'm perfectly happy with anything over 2 Mb.
    However, recently my speeds have always been around the 1 Mb mark, causing buffering for iPlayer, YouTube etc. Having run the BT Speedtester, I find that my maximum achievable speed seems to have been capped at 1 Mb. What would be the reason for this? I've read my terms and conditions and have not exceeded any bandwidth limits; I don't even download songs, let alone movies.
    How do I go about getting this 'maximum achievable speed' back to how it used to be?
    Thanks!
    BT Speedtester results:
    Download speed achieved during the test was 927 Kbps.
    For your connection, the acceptable range of speeds is 400-1000 Kbps (maximum achievable speed 1000 Kbps).
    Additional information:
    Your DSL Connection Rate: 1148 Kbps (downstream), 1160 Kbps (upstream)
    IP Profile for your line: 1012 Kbps

    This is a BT Retail customer forum, and although there are a few 'techs' on here, you will not get any specific Orange Home assistance. Contact your ISP on 07973 100150, Option 2.

  • What's the maximum achievable speed of data transfer from RT to host PC?

    Hi
    Can anybody tell me what's the maximum achievable speed of data transfer from RT to host PC, in the case of both PXI and CompactRIO?
    Regards
    Visuman

    Hi visuman,
    To be honest, the data rate depends on how you architect the code and on the data communication channels that you use.
    There are many factors that influence the maximum transfer rate, including network topology, type of interface used, OS, ambient network traffic, etc.
    You can control two things: packet size and the amount of sleep time between transmissions.
    By altering the delay between consecutive TCP/IP transmissions and by varying the packet sizes sent from the embedded side to the host side, you can obtain a clear picture of the network characteristics between the two devices. The end result is a report of the optimal TCP/IP configuration, that is, packet size and sleep time.
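    A minimal sketch of that sweep in Python (host address, port, and transfer size are placeholders; a matching receiver must be listening on the host side):

        import socket
        import time

        HOST, PORT = "10.0.0.2", 6340   # hypothetical RT-target/host pair
        TOTAL_BYTES = 10 * 1024 * 1024  # amount to send per trial

        def measure(packet_size, sleep_s):
            """Send TOTAL_BYTES in packet_size chunks; return achieved MB/s."""
            payload = b"\x00" * packet_size
            sent = 0
            with socket.create_connection((HOST, PORT)) as s:
                start = time.perf_counter()
                while sent < TOTAL_BYTES:
                    s.sendall(payload)
                    sent += packet_size
                    if sleep_s:
                        time.sleep(sleep_s)
                elapsed = time.perf_counter() - start
            return sent / elapsed / 1e6

        for size in (1024, 8192, 65536):
            for sleep_s in (0.0, 0.001):
                print(size, sleep_s, round(measure(size, sleep_s), 2), "MB/s")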
    Check out: Developer Zone: Measuring the Maximum Amount of Data Sent Out of a Real-Time Target Device
    Here are some other links that may be useful for you.
    KB 2M9ARPEW : Real-Time VI to Host VI Communication Methods
    Developer Zone : Real-Time FIFO for Deterministic Data Transfer Between VIs 
    Hope this helps!
    Ashish Naik
    Field Sales Engineer
    National Instruments UK

  • Xilinx Coregen does not appear in palette

    I would like to use Xilinx Coregen to generate FIFOs for my LabVIEW FPGA project. The help file says to look on the programming palette inside an FPGA VI, but I don't have one. I have the IP Integration Node and all the FPGA functions, but no Coregen. The application itself is installed on my computer. Is there an installation option that I missed? How can I get this to show up?
    Rgds,
    Nick

    I'm using the Virtex-II (PXI-7811R), so that would explain my problem.
    I tried using the IP Integration Node, but I couldn't get that to work either. When I generated the cores, I set the device as the Virtex family, device="xc2v1000", package=fg456, speed grade=-4. These are the same values I see in the Xilinx log file after generating a bitstream. I added the core (.xco) file in the IP Integration Node and pressed "Generate". I then got the following message:
    INFO:sim:760 - You can use the CORE Generator IP upgrade flow to upgrade the
       selected IP Fifo_Generator v4.4 to a more recent version.
    INFO:sim:760 - You can use the CORE Generator IP upgrade flow to upgrade the
       selected IP Fifo_Generator v4.4 to a more recent version.
    WARNING:encore:175 - Project options (family='virtex2', device='xc2v1000',
       package='fg456', speed grade='-4') are inconsistent, unavailable or
       incorrectly entered.
    Generated IP unsuccessfully. Fix the above error(s) or warning(s) and generate the IP again.
    Any idea what's going on here?

  • How to affect the actual FPGA clock rate?

    Hello,
    I developed an FPGA VI which should run with an 80 MHz clock rate on the PCI-7833R board.
    This was no problem at the beginning, but meanwhile the code has grown (slices: 30%). Now the compiler says that there is an error with the timing constraints. The maximum clock it can reach is 74 MHz.
    Unfortunately I need the 80 MHz to be fast enough.
    What parts of the code influence the actual clock rate?
    How can I make it faster?

    I think you misunderstood what I mean:
    The problem is not how many ticks a While Loop needs for one cycle.
    The problem is the clock rate setting for the FPGA board. It is set to 80 MHz and not the default 40 MHz.
    When I compile my VI, the compiler reports that it can't keep the 80 MHz. That means, for example, that a Single-Cycle Timed Loop needs not 12.5 ns but, for instance, 13.4 ns.
    If I separate my code into two parts and compile each part as a separate project, the compiler can keep the 80 MHz. Only together does it not work.
    So the whole problem has something to do with the size of the project, the depth of the structures, etc.
    So I need to know, for example, what structures slow down the clock the most, or something like that.
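    The numbers reported above are consistent with the usual reciprocal relationship between critical-path delay and achievable clock rate; a quick check in Python:

        # f_max = 1 / t_critical: a 13.4 ns critical path caps the clock near 74 MHz.
        print(1 / 13.4e-9 / 1e6)  # ~74.6 MHz, matching the compiler's 74 MHz report
        print(1 / 80e6 * 1e9)     # 12.5 ns, the budget an 80 MHz SCTL must meet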

  • MyRIO memory, data transfer and clock rate

    Hi
    I am trying to do some computations on a previously acquired file sampled at 100 MS/s, using the myRIO module. I have some doubts regarding this, mainly two: one regarding data transfer and the other regarding clock rate.
    1. Currently, I access my file (size 50 MB) from my development computer's hard drive in the FPGA through a DMA FIFO, taking one block of around 5500 points at a time. I have been running the VI in emulation mode for the time being. I was able to transfer through DMA from the host, but it is very slow (I can see each point being transferred!). The timer connected in the While Loop in the FPGA says 2 ticks for each loop, but the data transfer is taking long. There could be two reasons for this: one being that the serial cable used is the problem (the DMA happens fast but the update as seen by the user is slower), the second being that the timer is not recording the time for the data transfer. Which one could be the reason?
    If I put the file on the myRIO module, I will have to compile it each and every time, but does it behave the same way as before with the dev PC (will the DMA transfer be faster)? And here too, do I need to put the file on the USB stick? MAX says that there is 293 MB of primary disk free space on the module. I am not able to see this space at all. If I put my file in this memory, will the data transfer be faster? That is, can I use any static memory on the board (>50 MB) to put my file in? Or can I use any data transfer method other than a FIFO? This forum thread (http://forums.ni.com/t5/Academic-Hardware-Products-ELVIS/myRIO-Compile-Error/td-p/2709721/highlight/... discusses this issue, but I would like to know the speed of the transfer too.
    2. The data in the file is sampled at 100 MS/s. The filter blocks inside the FPGA ask me to specify the FPGA clock rate and sampling rate. I created a 200 MHz derived clock and specified it, gave the sampling rate as 100 MS/s, but the filter is giving zero results. Do these blocks work with derived clock rates? Or is that a property of the SCTL alone?
    Thanks a lot
    Arya

    Hi Sam
    Thanks for the quick reply. I will keep the terminology in mind. I am trying to analyse the data file (each block of 5500 samples corresponds to a single frame of data) by doing some intensive signal processing algorithms on each frame, then averaging the results and displaying them.
    I tried putting the file on the RT target, both using a USB stick and using the RT target's internal memory. I thought I would write back the delay time for each loop, after the transfer has occurred completely, to a text file on the system. I ran the code by making an exe for both the USB stick and RT target internal memory methods, and by compiling using the FPGA emulator in the dev PC VI. (A screenshot of the last method is attached; the same is used for both the other methods with minor modifications.) To my surprise, all three of them gave 13 ms as the delay (see the back-of-envelope sketch below). I certainly expected the transfer from RT internal memory to be faster than USB, and the one from the dev PC to be the slowest. I will work more on this and try to figure out why this is happening.
    When I transferred the data file (50 MB) onto the RT flash memory, MAX showed a 50 MB decrease in the free physical memory but only a 20 MB decrease in the primary disk free space. Why is this so? Could you please tell me the difference between them? I did not find any useful online resources when I searched.
    Meanwhile, the other doubt still persists: is it possible to run filter blocks with derived clock rates? Can we specify clock rates like 200 MHz and sampling rates like 100 MS/s in the filter configuration window? I tried, but obtained zero results.
    Thanks and regards
    Arya
    Attachments:
    Dev PC VI.PNG 33 KB
    FPGA VI.PNG 16 KB
    Delay text file.PNG 4 KB
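    A back-of-envelope check in Python of how far from real time that 13 ms per block is, using the figures quoted above:

        # Each 5500-sample block spans 5500 / 100e6 = 55 us of signal, but takes
        # 13 ms to move -- roughly 236x slower than real time.
        samples_per_block = 5500
        fs = 100e6                         # 100 MS/s acquisition rate
        block_span = samples_per_block / fs
        transfer_time = 13e-3              # measured delay per block
        print(transfer_time / block_span)  # ~236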

  • ATM0 clock rate on a 1760 router

    I have a 2651XM router with a WIC1-ADSL card, and I've seen that by tweaking the atm0 and dialer0 settings I can literally double the download speed on my crappy ADSL line. One of the settings I use on the 2651 is:
    # int atm0/0
    # clock rate aal5 7000000
    However, I also have a 1760 router on ADSL at a different location, and that won't accept 7000000.
    1760(config-if)#clock rate aal5 ?
            1000000
            1300000
            1600000
            2000000
            2600000
            3200000
            4000000
            5300000
            8000000    <default>
      <1000000-8000000>  clock rates in bits per second, choose one from above
    From the point of view of squeezing every last drop of speed from the ADSL line, which clock rate should I use?

    Tony,
    Read this
    http://www.cisco.com/en/US/docs/ios-xml/ios/interface/command/ir-c2.html#wp4276769922
    Regards,
    Alex.
    Please rate useful posts.

  • Error Disabling the Clock Rate on Interface

    When I try to disable the clock rate on a serial interface of a 3640 router, the router returns the following error:
    Router(config-if)#no clock rate 56000
    FECPM PM doesn't support clock rate 0
    How can I disable the clock rate?

    Hi,
    I even kept the interface in admin shut mode, but it refuses to accept the command no clock rate.
    After using show controllers serial 0/1,
    it gives the output 'serial not connected'.
    Kindly see how to remove the command.
    Regards,
    Sohail JKB

  • Get actual clock rate error

    I'm using an ADAC 5500 DAQ card with LabVIEW. I got an error called 'get actual clock rate'. What is the meaning of this error? How can I get the actual clock rate, and what should I do? Thanks for your response.

    Hello,
    I visited IOtech's website and found that the ADAC/5500 Series specifications data sheet mentions that LabVIEW is supported with this piece of hardware. They mention: "complete drivers with example programs for rapid application-specific development".
    My suggestion is to contact IOtech. They are the ones who wrote the driver to interface their hardware with LabVIEW and should know what the error means. In this case, our ability to help you is limited since we did not write the driver. However, they should be able to provide you with more information about the error and how to fix it.
    ADAC/5500 Series
    IOtech support
    Hope this helps!
    S Vences
    Applications Engineer
    National Instruments

  • Can I do hardware compare using HSDIO when the generation and acquisition clock rates are different?

    Hi there,
    My application should generate a digital stream at clock rate A with a bus width (number of bits) varying from 1 to 4 bits.
    The DUT will always respond on one bit, but at clock rate B.
    1. Can I use the hardware compare mechanism of the HSDIO boards?
    2. If the answer is yes, is there any example that can help me get started?
    Thanks in advance,
    Gabel Daniel

    Hi,
    One good example to use as a starting point can be found in the NI Example Finder.  Navigate to "Hardware Input and Output" >> "Modular Instruments" >> NI-HSDIO >> Dynamic Acquisition and Generation >> Hardware Compare - Error Locations.vi.  You'll need to explicitly wire in your new rate into the "niHSDIO Configure Sample Clock.vi" on your response task.
    There is also a portion of this code that warns if the stimulus and response waveforms are not the same size.  This is not necessary, and could be deleted from the program. You are allowed to have different size stimulus and response waveforms.
    Jon S
    Applications Engineer
    National Instruments

  • Sample clock rate when exporting from Matlab

    Hi all,
    I'm quite new to LabVIEW and got stuck on (probably) some basics. I am trying to use a data vector generated by Matlab as an output with a PCI-6115 (see the pic). I use the "Read From Spreadsheet File" VI. Everything works fine, but I cannot change the sample clock rate. I wanted to use 100 kHz for this (binary) output, but I only get 1 Hz, which is some default value. The "DAQmx Timing" VI which I'm using shows "uses the dt component of the waveform input to determine the sample clock rate". I can find this dt inside "DAQmx Timing", but nothing happens when I change it. Any help is greatly appreciated!

    Do you see those red coercion dots on the inputs of DAQmx Timing and DAQmx Write? They mean that you are using an incorrect data type. A 1D array has no timing information. The waveform data type includes a dt value, and this is required to set the sample rate of the DAQ card. If you don't have this in the text file, you cannot automatically provide it. Read From Spreadsheet File cannot create a waveform data type even if the file does contain dt information. Since you did not attach the text file you are reading, there is no way of knowing if your problem goes all the way back to Matlab or somewhere else.
    In the future, please think about what information you should provide. As mentioned, the text file is important. It's also hard to debug an image. Attaching the actual code or a snippet is much better. A snippet is a PNG file that can be imported into LabVIEW as actual code. It's an option under the Edit menu in LabVIEW 2009.
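    To illustrate the same point outside LabVIEW, here is a sketch using the Python nidaqmx API (a swapped-in illustration, not the poster's LabVIEW code): because a bare array carries no timing, the sample clock rate is set explicitly on the task. Device, channel, and file names are placeholders:

        import numpy as np
        import nidaqmx
        from nidaqmx.constants import AcquisitionType

        data = np.loadtxt("matlab_vector.txt")  # hypothetical file from Matlab

        with nidaqmx.Task() as task:
            task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
            task.timing.cfg_samp_clk_timing(
                rate=100_000,                        # update rate set explicitly
                sample_mode=AcquisitionType.FINITE,
                samps_per_chan=len(data),
            )
            task.write(data, auto_start=True)
            task.wait_until_done()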

  • Scan clock external - logging scans at less than the external clock rate

    I am using the Clock Config VI to set the scan clock to external, connected to PFI0.
    PFI0 is connected to a TTL-level encoder, which is attached to the rotating shaft of a brake test dynamometer. The encoder provides 60 pulses per full rotation of the shaft. Thus I acquire 60 scans of the channels to the buffer for every turn of the shaft. So far, so good; this works fine.
    However, I wish to use the same software on another dynamometer which is fitted with a different encoder. This encoder gives 1024 pulses per turn of the shaft. I do NOT want to acquire a scan for every pulse (I don't want 1024 scans per turn of the shaft); I would prefer a sixteenth of this, say 64 scans per rotation. Can I use the Clock Config VI to set acquisition at some fraction of the external clock rate?
    I notice an example on your site describes setting the external channel clock to a fraction of the external clock, but I can't see anything similar for the scan clock.
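    The division itself is simple arithmetic (sketched below); whether the Clock Config VI can apply it to the scan clock is the open question, though an external counter could divide the encoder signal in hardware (an assumption about the setup, not a confirmed feature):

        # Divider needed to turn 1024 encoder pulses/rev into 64 scans/rev.
        pulses_per_rev = 1024
        scans_per_rev = 64
        print(pulses_per_rev // scans_per_rev)  # divide the external clock by 16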

    Hi,
    I would post your question to the Measurement Hardware section of the Discussion Forum. Your question would get the best response in that Forum.
    Mike
