NI5112 PFI Reference Clock Input Problem

I'm writing C code to use an NI5112 O-Scope card as a digitizer, and I would like to synchronize it with an HP 8648C signal generator (which has the high-stability timebase option) through PFI1. I want to use the signal generator as the reference clock because the NI5112 samples about 40 Hz high at 5 MSamples/s, and because I need the two to drift together. When I use the signal generator as the reference clock, the result is very noisy: with an input sine wave the FFT is noisy, and the timing is too poor for processing input GPS signals (which have stringent timing requirements). If I use an Agilent 33120A function generator as the reference clock, I get better results, but GPS processing is still compromised. When not using an external reference, everything looks clean and works fine except for the 40 Hz error in the sample rate. The reference outputs from both the 8648C and the 33120A work fine with each other and look good and clean on the o-scope, so I can't figure out what the problem is. Maybe the card is just extremely picky about external reference clocks. The function call I'm using to initialize this is:
handleErr(niScope_ConfigureClock(vi, NISCOPE_VAL_PFI_1,
                                 NISCOPE_VAL_NO_SOURCE,
                                 NISCOPE_VAL_NO_SOURCE,
                                 NISCOPE_VAL_FALSE));
The program runs normally with no errors, it's just that the results are bad. Any help would be appreciated.

Hello,
If the data that you are acquiring is very noisy, I would start getting suspicious about the input impedance (1 MOhm or 50 Ohm). Does the output impedance of the signal generator match the scope's input impedance? How does your signal look in the Scope Soft Front Panel? Does it make a difference if you use the internal clock?
Another thing I would consider is the following parameter of the niScope_ConfigureClock function: NISCOPE_ATTR_CLOCK_SYNC_PULSE_SOURCE. To synchronize your arb to the scope, not only do they need to share the same clock (10 MHz), but the slave also needs to receive a sync pulse from the master. In your application, if the signal generator has the ability to send a sync pulse, you can route that signal through PFI2 to ensure synchronization.
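A sketch of what that might look like in C, building on the call above. NISCOPE_VAL_PFI_2 is assumed to be the niScope.h constant for the PFI 2 terminal, and whether PFI 2 is a valid sync pulse source for the 5112 should be checked against the device's routing documentation; this is untested without hardware:

```c
/* Sketch only: external 10 MHz reference on PFI 1, clock sync pulse
   from the master routed in on PFI 2. handleErr is the poster's own
   error-checking wrapper. */
handleErr(niScope_ConfigureClock(vi,
                                 NISCOPE_VAL_PFI_1,      /* reference clock input */
                                 NISCOPE_VAL_NO_SOURCE,  /* no clock exported     */
                                 NISCOPE_VAL_PFI_2,      /* clock sync pulse in   */
                                 NISCOPE_VAL_FALSE));    /* this device is slave  */
```

The difference from the original call is only the third source argument, which was NISCOPE_VAL_NO_SOURCE.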
Hope this helps. Good luck with your application!
Annette Perez

Similar Messages

  • Using an external reference clock for counter measurements

    Hi all,
    I have several PXI chassis configured for testing, each of which uses an NI DAQ-6221 for conducting period and frequency measurements, among other things. However, each of my setups gives me a slightly different reading when conducting identical measurements, due to the tolerance of the DAQ's onboard reference clocks.
    The chassis have a 10 MHz GPS-calibrated reference fed into them via the reference input, and this is connected to the backplane. I have tried and failed to route and use this as the reference base for my counter measurements, so that each of my cards will give me the same reading every time. I tried the routing method using the DAQmx Connect Terminals VI but could not get it to work. I have looked through the LabVIEW examples and can't find anything that shows me what I need to do to take frequency and period measurements using my external reference as the base clock. There are some examples showing the DAQmx Timing VI set to Sample Clock mode, which allows a clock source to be selected. However, these examples are only for counting digital events, and when I apply the same VI to a frequency or period measurement, an error is thrown saying 'invalid timing type for this channel', 'you can select: implicit, on demand'.
    I do not understand the workings of the DAQ hardware very well, so I was hoping someone could help explain how to do this, if it is even possible, or give me any guidance as to how else to overcome my problem. Any VIs showing how to use my external 10 MHz reference as the base clock for these kinds of measurements would be greatly appreciated.
    Regards,
    Dan

    Hi Dan,
    Thanks for the reply. When trying to use Connect Terminals, I just tried connecting PXI10 to the onboard clock, in the hope that this would configure it to use the PXI10 signal to synchronise the onboard clocks via the PLL. Doing this didn't throw any errors as I recall; it just gave me a reading of infinity, as it couldn't measure the signal.
    I have actually just managed to do what I wanted, thanks to help from Rob at NI tech support UK. There is a DAQmx timing property node which allows you to choose a reference clock source. By using the PXI_Clk10 signal, this makes the DAQ synchronise its reference clock to this signal. I have included an image of the code I used to achieve this to take period measurements.
    Here is a very useful link which Rob sent me that may be of help to anyone else with similar problems:
    http://zone.ni.com/devzone/cda/tut/p/id/3615
    Best regards,
    Dan.
    Attachments:
    ExtRef Source Image.jpg ‏105 KB
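    For anyone doing this from the DAQmx C API instead of LabVIEW, the same idea might look roughly like the sketch below. The device name, counter, and min/max period values are assumptions for illustration, and this is untested without hardware:

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    /* Sketch: period measurement on counter 0; "PXI1Slot2" is an
       assumed device name. Error checking omitted for brevity. */
    DAQmxCreateTask("", &task);
    DAQmxCreateCIPeriodChan(task, "PXI1Slot2/ctr0", "",
                            1e-6, 1.0,          /* expected min/max period, s */
                            DAQmx_Val_Seconds, DAQmx_Val_Rising,
                            DAQmx_Val_LowFreq1Ctr, 0.0, 0, NULL);

    /* The key step from the reply above: phase-lock the board's
       timebase to the chassis 10 MHz backplane clock. */
    DAQmxSetRefClkSrc(task, "PXI_Clk10");
    DAQmxSetRefClkRate(task, 10000000.0);

    DAQmxStartTask(task);
    /* ...read samples here, then clean up... */
    DAQmxClearTask(task);
    return 0;
}
```

    The two RefClk calls correspond to the reference clock source entries in the DAQmx timing property node mentioned above.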

  • How to make PXI-6733 use PXI_Clk10 as the Reference Clock?

    I'm using PXI-6733 and it is put in the PXIe-1065 Chassis. As you know, the PXIe-1065 has a clock PXI_Clk10 and distributes it to all its slots. Now I want to use the PXI_Clk10 as the reference clock for PXI-6733, meaning I want to lock the PXI-6733 onboard clock to the Chassis clock PXI_Clk10.
    The problem is that I can't select PXI_Clk10 as the reference clock: it is not in the list, and I have filtered the channel names. Have you ever encountered this problem? Can the PXI-6733 use PXI_Clk10?

    The 10 MHz clock is routed via the RTSI line. Unfortunately, the 6733 doesn't have one.

  • 5600 Downconverter : 10 MHz Reference CLOCK

    Hi RFSA Guys,
    Is it possible to vary the 10 MHz reference clock used for demodulation of RF signals to a new value, let's say 9.9992 MHz?
    If this is possible, to what resolution can I vary the clock? Can I vary it in steps of 0.1 Hz or 0.01 Hz? Is the 10 MHz reference clock truly that stable?
    I have a competitor's clock which is stable to 0.01 Hz, but it is not in PXI form.
    If this is not possible, can anyone suggest a PXI version of a variable, stable external reference clock?
    Basically, I am looking at varying the reference clock to vary the symbol rate for a device under test.
    Thanks for your help.
    Dharmendra

    Dharmendra,
    You might also run into problems when the downconverter tries to phase-lock to that reference clock, since it's expecting 10 MHz. Can you tell us a bit more about what you're trying to do? There might be a different way to accomplish it. Thanks!
    Chad B. » National Instruments » ni.com

  • How to get 10 MHz reference clock out of PCI-5922

    Hi,
    I am evaluating PCI-5922 for the my application.
    In my application, PCI-5922 will get signal from agilent 33250 function generator and both instruments will be triggered at the same time with the same trigger signal.
    Is there any way to take out reference clock (10 MHz) out of PCI-5922? If that is possible, synchronizing can be easier.
    I am expecting to have answers from experts!
    Zeehoon

    Hi Zeehoon,
    There are two steps to export the 10 MHz reference clock out of the PCI-5922:
    1) Specify the reference clock source (to use the internal reference, select "no source").
    2) Specify the reference clock output terminal (a list of valid terminals can be found in the PCI-5922 Routing Matrix inside the High-Speed Digitizers Help).
    Here is a LabVIEW snippet showing the clock exported to PFI 1:
    -Jennifer O.
    Message Edited by Jennifer O on 09-16-2009 04:00 PM
    Attachments:
    ExportReference.JPG ‏11 KB
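    For reference, in the niScope C API those two steps collapse into a single niScope_ConfigureClock call. A sketch (untested, assuming the constants from niScope.h; checkErr is a placeholder for whatever error handling you use), running from the internal reference and exporting it on PFI 1:

```c
/* Sketch: internal reference clock, exported on PFI 1, no sync pulse. */
checkErr(niScope_ConfigureClock(vi,
                                NISCOPE_VAL_NO_SOURCE,  /* 1) internal reference */
                                NISCOPE_VAL_PFI_1,      /* 2) export clock here  */
                                NISCOPE_VAL_NO_SOURCE,  /* no clock sync pulse   */
                                NISCOPE_VAL_FALSE));
```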

  • Quadrature encoder as Clock input?...

    Greetings!
    Can anyone give me some advice on using a quadrature encoder as a clock input to an M-Series DAQ, accepting the A, B, and Z inputs?
    My application calls for measuring an analog input on every rising edge of the generated clock. I've been scanning up and down the forum and found some posts, but mostly concerned with the E-Series.
    Some pointers would be appreciated!

    Hmmm, not sure how to comment.  When I think of "jitter", I think of an undesired variation in time intervals.  I wouldn't think that the digital filtering features would be a very natural way to address jitter.
    It may be a moot point though.  Digital filtering can be configured for "PFI" pins on the M-series board, but those pins can't be used for correlated DIO -- only for software-timed static DIO.
    I haven't found the need to use digital filtering in quite a while.  Need or lack thereof pretty much depends on the type of sensor signals you have and the kind of electrical environment you're in. 
    -Kevin P.

  • DDR3 sys_clk & reference clock

    Hi,
    I want to use one DDR3 MIG at 800 MHz.
    Can I use an internal clock as the MIG sys_clk and ref_clk instead of an external clock source?
    For example, if the input clock is 100 MHz, can I use an MMCM to generate a 200 MHz internal clock as the DDR3 MIG sys_clk and ref_clk?
    Thanks

    Hi,
    Check this thread http://forums.xilinx.com/t5/MIG-Memory-Interface-Generator/Regrading-system-clock-generated-with-no-buffer-option-for-DDR3/td-p/464398
    The reference clock can be driven internally, but the system clock should come from CCIO pins.
    Thanks,
    Deepika.

  • GPS NTP reference clock

    Hi,
    I have a problem on my OS X 10.6.6 Server: I cannot successfully connect a GPS receiver for my NTP service.
    What I did works perfectly on an OS X 10.6.6 client, but on the server it fails with
    fcntl(F_SETOWN) fails for clock I/O: Inappropriate ioctl for device
    The device is a HOLUX GM-210, connected via USB (PL2303 driver from Prolific); it works fine with cu, but not with the bundled ntpd.
    I looked around, but only found one report of the exact same problem back in 2004, and that question was never answered.
    So what might be the reason for this behavior, and how can I fix it?

    MrHoffman wrote:
    To clarify my understanding of the current configuration: setting the 127.127.20.0 driver doesn't get accepted via Server Admin, or that reference clock driver value is accepted but no synchronization happens? (I'd tend to expect it to be accepted, but that you're not getting a lock?)
    Yes, serveradmin accepts that happily, but it does not show up as a valid peer.
    if you're messing with the ntp configuration files (and adding stuff), try putting your additional lines after a # line or two. (The comment characters will reportedly keep Server Admin from stepping on your changes. This is one of the differences between Mac OS X and Linux distributions.)
    Thanks for this valuable tip.
    What does +ntpq -p+ show?
    schneekoenig:~ oskar$ ntpq -p
    remote refid st t when poll reach delay offset jitter
    ==============================================================================
    *time.euro.apple 17.72.133.54 2 u 772 68m 377 56.646 9.836 0.607
    so, basically, the 127.127.20.0 does not show up.
    Given the pl2303 stuff usually instantiates itself as a USB TTY device (/dev/ttyUSBxyz or something similar; I run with another series of USB-to-serial widgets, so I will not see the same device names here as you'll get), you might (will?) need to add a link to instantiate an appropriately-named gps device for the NTP software to find. (Get a full disk backup first!) Something akin to this +ln -s /dev/whatever /dev/gps0+ command, given that the ntp daemon expects gps devices such as /dev/gps0, gps1 or gps2. Given you're using 127.127.20.0, the target device ntp will look for here is /dev/gps0, while 127.127.20.1 would seek /dev/gps1, etc. (Again, please make a full external disk backup before you go working with links in /dev, given mistakes here can be deadly to OS stability. I'd suggest rebooting immediately after adding the link into /dev, too, to ensure ntp and the rest of the box reboots with happy-bits and not sad-bits.)
    done all that before asking here
    Once you reboot, try that +ntpq -p+ command, and see if you get non-zero values associated with your selected time base.
    I'd guess you might have to play with the mode setting, too. (I'd likely try getting this to work with a one-line server command first, no mode, no other options, etc.)
    I guess the problem is more likely in the way OSX Server handles the usbserial device. The exact same configuration works just fine on SnowLeopard 10.6.6 Client. Hmm, the client runs a 32 bit kernel, whereas the Server is in native 64 bit. Maybe the Prolific driver is not complete, in that fcntl(F_SETOWN) does not return the expected results in 64 bit mode. As stated before, ntpd finds the correct serial port, opens it successfully but then fails on the fcntl system call.
    Jan 20 09:45:36 schneekoenig org.ntp.ntpd[25567]: refclock_setup fd 5 modem status: 0x6
    Jan 20 09:45:36 schneekoenig org.ntp.ntpd[25567]: refclock_ioctl: fd 5 flags 0x1
    Jan 20 09:45:36 schneekoenig org.ntp.ntpd[25567]: addto_syslog: fcntl(F_SETOWN) fails for clock I/O: Inappropriate ioctl for device
    Jan 20 09:45:36 schneekoenig org.ntp.ntpd[25567]: addto_syslog: configuration of 127.127.20.0 failed

  • How Do I Configure the PFI Lines as input in PXI 6713 module

    Hi,
    I have a PXI 6713 module in my PXI 1044 chassis. I have configured the PXI 6713 module to generate certain analogue signals to my board.
    The board in turn processes these analogue signals and responds with status signals through a status register on the board. In my application, the status bits in the status register of the board are mapped to the PFI 0:3 bits on the PXI 6713 module (pins 11, 10, 42 and 43).
    My query is: how do I configure the PFI lines as inputs on the PXI 6713 module to read these status bits?
    Maybe the explanation below gives a little more information about my query.
    When I used an NI USB-6008 module to read the same bits, since that device has 12 digital I/O lines, I was able to read the status bits on the last 4 digital lines by configuring those lines as inputs.
    On the PXI 6713 module I have only 8 digital lines, and all of them are used to send digital signals to the board, so I am left with no digital I/O lines. The only option left is the PFI lines. Moreover, the status bits in the pinout of the board are mapped such that they can be read through the PFI lines.
    I am wondering whether there is any example code I could use to read these status bits on the board using the PFI lines.
    Please let me know if you need additional information to help me out.
    Thanks.

    Hello There,
    When using a PFI pin as an input, you can individually configure each PFI for edge or level detection and for polarity selection. This PFI information can be referenced in the DAQ Analog Output Series Manual on page 6-1 (http://www.ni.com/pdf/manuals/370735e.pdf). Unfortunately, the PXI-6713 PFI lines are only capable of timing input and output signals for AI, AO, or counter/timer functions. The option of creating static DI from the PFI lines is not available on the PXI-6713. However, some cards have this capability. Newer National Instruments products with PFI lines have the option of setting PFI lines as:
    Static Digital Input
    Static Digital Output
    Timing Input Signal for AI, AO, DI, DO, or counter/timer functions
    Timing Output Signal from AI, AO, DI, DO, or counter/timer functions
    (http://digital.ni.com/public.nsf/allkb/14F20D79C649F8CD86256FBE005C2BC4)
    When set as static DIO, the PFI lines are assigned to a different port (e.g. PFI0-7 is Port 1). More details can be referenced at:
    http://digital.ni.com/public.nsf/allkb/DA2D3CD0B8E8EE2A8625752F007596E1
    http://digital.ni.com/public.nsf/allkb/862567530005F09E8625677800577C27
    Regards,
    Roman Sandoval | National Instruments | RF Systems Engineer

  • Acroread 8.x CJK input problem

    See also https://bugzilla.novell.com/show_bug.cgi?id=353251
    CJK input doesn't work right in the English version of acroread 8.1.1 or the pre-release version 8.1.2.
    First of all, one needs to set
    export GTK_IM_MODULE=xim
    to make acroread react at all to the hotkey which triggers SCIM input (the default hotkeys on openSUSE are Shift+Space and Control+Space).
    When GTK_IM_MODULE=scim or GTK_IM_MODULE=scim-bridge, acroread won't react to the hotkey which enables SCIM at all.
    With GTK_IM_MODULE=xim, the SCIM input method *can* be enabled by Shift+Space. But it doesn't work right.
    One can see correct Japanese in the popups shown by SCIM when converting phonetics to Chinese characters. But the preedit string shows garbage which seems to resemble Arabic. And after committing, everything typed is converted to question marks (acroread 8.1.1) or boxes (acroread 8.1.2).
    I am talking about the *English* versions of acroread here, not localized Japanese versions like e.g.
    ftp://ftp.adobe.com/pub/adobe/reader/unix/8.x/8.1.1/jpn
    As it is not nice to have completely separate packages for each language, I hope that the different language versions can be merged into one in the long run.
    It would be very nice if there were only one basic version, and to support other languages one only had to add fonts and translations, not exchange the binaries.

    Some more comments from: https://bugzilla.novell.com/show_bug.cgi?id=353251
    ------- Comment #5 From Gaurav Jain 2008-01-11 10:48:18 MST -------
    Mike,
    Could you confirm if everything is working fine with the Japanese Reader
    downloaded from the Reader website? That'll be really strange since the Viewer
    binaries should be identical in the English and Japanese versions. The only
    difference in the 2 installers should be in the fonts and the resource
    libraries.
    -vc
    ------- Comment #6 From Mike Fabian 2008-01-11 19:43:16 MST -------
    OK, I tried with the special Japanese version of acroread (8.1.1) as well.
    There is no difference in behaviour between the Japanese acroread 8.1.1 and the English acroread 8.1.1 as far as the input problem reported here is concerned: both show the problem as reported here.
    But I found that it depends on the locale. If acroread is started in the ja_JP.UTF-8 locale,
    LANG=ja_JP.UTF-8 acroread
    the problem occurs as reported here.
    However, if acroread is started in the ja_JP.eucJP locale,
    LANG=ja_JP.eucJP acroread
    the Japanese input works fine! That's the same with both the Japanese and the English versions of acroread.
    As UTF-8 locales are the default nowadays on most Linux distributions, it is important that this works not only in legacy locales like ja_JP.eucJP but also in ja_JP.UTF-8.
    ------- Comment #7 From Gaurav Jain 2008-01-13 09:25:27 MST -------
    The screenshot seems to indicate you are trying to enter Japanese characters in a standard GTK+ edit field. The core Reader code actually doesn't interact much with the control during the process of text entry.
    Could you try the following?
    1. When the default locale is eucJP, what happens in a standard edit field in some other GTK+ app, for instance gtk-demo? If you don't have gtk-demo, you could even try the same thing in the Open dialog in Adobe Reader, where you type the file name.
    2. If you don't export GTK_IM_MODULE=xim, can you make the IME appear in some other GTK+ app like gtk-demo?
    My guess is that this (at least point 1 above) may be a problem with GTK+, though we are investigating it at our end as well. Maybe it is an issue with the fonts that get loaded when the locale is UTF-8 vs. eucJP; that may be the reason the characters coming from the IME are showing as question marks.
    -vc
    ------- Comment #8 From Mike Fabian 2008-01-14 05:18:46 MST -------
    Gaurav Jain> 1. When the default locale is eucJP, what happens in a
    Gaurav Jain> standard edit field in some other GTK+ app., for instance
    Gaurav Jain> gtk-demo?
    In gtk-demo, Japanese input works both for ja_JP.eucJP locale
    *and* for ja_JP.UTF-8 locale.
    And it works for all values of GTK_IM_MODULE which I tried
    (GTK_IM_MODULE=xim, GTK_IM_MODULE=scim, and GTK_IM_MODULE=scim-bridge).
    Gaurav Jain> If you don't have gtk-demo, you could even try
    Gaurav Jain> the same thing in the Open dialog in the Adobe Reader,
    Gaurav Jain> where you type the file name.
    Japanese input in the Open dialog of the Adobe Reader behaves
    exactly like in the search field of the Adobe Reader:
    - works fine in ja_JP.eucJP locale
    (for all values of GTK_IM_MODULE)
    - does not work in ja_JP.UTF-8 locale
    (not for any of the above mentioned values of GTK_IM_MODULE)
    Gaurav Jain> 2. If you don't export GTK_IM_MODULE=xim, can you make
    Gaurav Jain> the IME appear in some other GTK+ app. like the gtk-demo?
    In gtk-demo the IME appears and works fine.
    In acroread, the IME appears for all values of GTK_IM_MODULE (xim,
    scim, scim-bridge) and input works fine in ja_JP.eucJP locale. In
    ja_JP.UTF-8 locale, the IME appears as well and input seems to be
    possible but the result is garbage as in my screen shot.
    ------- Comment #9 From Gaurav Jain 2008-01-22 04:06:32 MST -------
    Hello Mike,
    I tried reproducing the bug on SLED 10, openSUSE 10.3, and openSUSE 11.0 (http://download.opensuse.org/distribution/11.0-Alpha1/iso/cd/openSUSE-11.0-Alpha1-GNOME-i386.iso) but was unable to do so. I could see no difference in the behavior based on locale. Also, the problem of garbage preedit strings in the UTF-8 case is not reproducible at my end.
    Could you please attach a screenshot of your SCIM setup settings, and the download location of openSUSE 11.0?
    Regards,
    Sanika
    ------- Comment #10 From Mike Fabian 2008-01-23 10:48:40 MST -------
    I found that the problem occurs *only* with GTK_IM_MODULE=xim, contrary to what I wrote in comment #8.
    That was my fault, because I still had
    # Workaround for http://rudin.suse.de:8888/show_bug.cgi?id=85416
    # (see comment #37):
    export GTK_IM_MODULE=xim
    patched into the beginning of the acroread start script.
    Apparently this is not needed anymore; input using GTK_IM_MODULE=scim and GTK_IM_MODULE=scim-bridge seems to work fine now in acroread 8.1.2.
    *But* the problem I described here does occur with GTK_IM_MODULE=xim.
    Sanika,
    can you reproduce the problem with GTK_IM_MODULE=xim ?

  • Athlon XP clock speed problem

    I have just installed a K7N2 Delta mobo in my system, but I'm having CPU clock speed problems.
    The BIOS and the InfoView program both state that the CPU clock is running at 1105 MHz. I am sure this should be 1533 MHz for an Athlon XP 1800+.
    I updated the BIOS using MSI Live Update, and the first time I booted the system, the BIOS displayed the speed as 1533 MHz, but on every subsequent boot it is back down to 1105 MHz again.
    How can I get my processor to run at the correct speed?
    Thanks
    Trev

    Sorry, just realised I have posted in the wrong forum.
    Please forgive me  

  • How to use a pll output to drive the clock input?

    When I use the following code to generate a clock (CLK00IN) from MHZIN to drive normal logic, it works fine, but when I use it to drive the GTP clock input port CLK00, CLK00IN becomes 0 (no signal). Why?
    IBUFG system_clk_ibufg (
        .O (sys_tile0_gtpclkout0_0_to_cmt_i),
        .I (MHZIN)
    );
    PLL_BASE #(
        .CLKFBOUT_MULT  (10),
        .DIVCLK_DIVIDE  (1),
        .CLK_FEEDBACK   ("CLKFBOUT"),
        .CLKFBOUT_PHASE (0),
        .COMPENSATION   ("SYSTEM_SYNCHRONOUS"),
        .CLKIN_PERIOD   (25.0),
        .CLKOUT0_DIVIDE (5),
        .CLKOUT0_PHASE  (0)
    ) system_clk_pll (
        .CLKIN    (sys_tile0_gtpclkout0_0_to_cmt_i),
        .CLKFBIN  (CLK00FB),
        .CLKOUT0  (CLK00OUT),
        .CLKOUT1  (),
        .CLKOUT2  (),
        .CLKOUT3  (),
        .CLKOUT4  (),
        .CLKOUT5  (),
        .CLKFBOUT (CLK00FB),
        .LOCKED   (),
        .RST      (system_clk_pll_reset_i)
    );
    BUFG sys_clkout0_bufg_i (
        .O (CLK00IN),
        .I (CLK00OUT)
    );

    Is this experiment being done for test purposes?
    The GT should not have any impact on the CLK00IN signal in this code; the GT comes into the picture after sys_clkout0_bufg_i. Please check whether any optimization happened during implementation.

  • Device supports sample clock input

    I'm trying to find the best way to check if an NI device supports sample clock input. For example, I have a USB-6008 which does not support sample clock input. When I call DAQmxGetDevTerminals, I get "/Dev1/ai/StartTrigger, /Dev1/PFI0". For start trigger capability, I can call DAQmxGetDevAITrigUsage or look in the terminals string for "ai/StartTrigger". Should I search the string for "ai/SampleClock", or is there a function similar to DAQmxGetDevAITrigUsage?
    Thank you,
    CV

    Hey CV,
    From what I've seen, I don't know of a function similar to DAQmxGetDevAITrigUsage that works with the sample clock. It sounds like searching the terminal string may be your best bet for achieving the functionality you're looking for.

  • External Reference Clock on pci-5640R

    Hi. I want to clock my IF-RIO 5640R board with an external 2 MHz reference clock and generate a 200 MHz clock inside the board (as the VCXO does).
    If I set the parameters properly inside the "configure timebase" VI, using the PLL on the CDC7005, can I achieve this?
    How can I do it?
    Thanks.

    If you use the fixed-personality driver, and you are feeling a little bit brave, you should be able to use ni5640R Configure Timebase.vi. It can be found (at least on my computer) at:
    C:\Program Files\National Instruments\LabVIEW 8.2\instr.lib\ni5640R\Driver\NI-5640R VIs
    Leave the defaults as they are, except change as follows:
    Ref Divider (M) = 2    (for 2 MHz)
    SMB Ext Ref Enable = True
    VCXO Control = PLL
    CP Enable = True
    That should be everything, to the best of my memory. You may have to enable the Invert CP bit.
    Hope this helps,
    Ed

  • Is SMA clock input supported in Spartan3E FPGA driver?

    Hi,
    I am using the LabVIEW 8.6 FPGA Module with a Spartan-3E board. Is the SMA clock input supported in the Spartan-3E FPGA driver?
    If I use the SMA clock input, LabVIEW reports an error (OnboardClock tag missing) when I compile the VI. If I use both clocks, it reports that too many clock resources are used.
    Can anyone help with this? Thanks.
    Bynan

    The error message is:
    Required tag was not found in the resource file.
    Target name: FPGA Target (Dev1, Spartan-3E Starter Board)
    Tag: OnboardClock
