Processing Scope Waveforms

Hi,
I want to process a waveform coming from a scope.
I have to calculate the positive peaks and get the average of the values below 0.
Right now I have done this:
1- Replaced all values above 0 with 0.
2- Calculated their average with the basic DC/RMS calculation VI.
3- Initialized an array with this average value.
4- Searched for the maximum value between two consecutive 0 values and placed it in the already initialized array.
This whole processing is taking a lot of time.
Could you please suggest a better way of doing this?
I tried to use the Waveform Peak Detection VI, but it calculates peaks through maxima and minima and gives me a lot of peak points, although I require only one per period.
Please see the attached input and required output waveforms.
LabVIEW user
Attachments:
Ip Waveform.JPG ‏25 KB
required op.JPG ‏32 KB

Aojha wrote:
Hi,
I want to process a waveform coming from a scope.
I have to calculate the positive peaks and get the average of the values below 0.
Right now I have done this:
1- Replaced all values above 0 with 0.
2- Calculated their average with the basic DC/RMS calculation VI.
3- Initialized an array with this average value.
4- Searched for the maximum value between two consecutive 0 values and placed it in the already initialized array.
This whole processing is taking a lot of time.
Could you please suggest a better way of doing this?
I tried to use the Waveform Peak Detection VI, but it calculates peaks through maxima and minima and gives me a lot of peak points, although I require only one per period.
Please see the attached input and required output waveforms.
What you have described doesn't make sense given your waveform. You said "calculate the positive peaks and get the average of the values below 0", but there are no positive peaks with amplitudes below 0 in your waveform. There are VALLEYS, however.
But then your output waveform shows that you have marked 8 positive peaks of amplitude 2.5 (which are ABOVE 0) and zeroed all other values, including the first two periods of the waveform. Please explain.
“A child of five could understand this. Send someone to fetch a child of five.”
― Groucho Marx
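
For reference, the four steps described in the original post can be collapsed into a single pass over the samples: accumulate the sum and count of everything below 0, and track the running maximum of each positive segment between two zero crossings. Below is a minimal sketch of that logic; LabVIEW itself is graphical, so plain Java is used here only to illustrate the algorithm, and the sample values and names are assumptions, not taken from the attached waveforms.

import java.util.ArrayList;
import java.util.List;

public class ScopeTrace {
    public static void main(String[] args) {
        // Stand-in data; in practice this would be the scope trace fetched as an array.
        double[] wave = {-1.0, -1.2, 0.5, 2.5, 1.0, -0.8, -1.1, 0.3, 2.4, 0.9, -1.0};

        double negSum = 0;                              // running sum of samples below 0
        int negCount = 0;
        List<Double> peaks = new ArrayList<>();         // one peak per positive segment
        double segmentMax = Double.NEGATIVE_INFINITY;
        boolean inPositiveSegment = false;

        for (double v : wave) {
            if (v < 0) {
                negSum += v;
                negCount++;
                if (inPositiveSegment) {                // a positive segment just ended
                    peaks.add(segmentMax);
                    inPositiveSegment = false;
                }
            } else if (v > 0) {
                segmentMax = inPositiveSegment ? Math.max(segmentMax, v) : v;
                inPositiveSegment = true;
            }
        }
        if (inPositiveSegment) {
            peaks.add(segmentMax);                      // trace ended inside a positive segment
        }

        double negAverage = negCount > 0 ? negSum / negCount : 0;
        System.out.println("average of values below 0: " + negAverage);
        System.out.println("positive peaks (one per period): " + peaks);
    }
}

In LabVIEW terms this maps to a single For Loop with shift registers for the sum, count, segment maximum and an "inside positive segment" flag, which avoids the separate zeroing, RMS, initialization and search passes.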

Similar Messages

  • ADF FACES: process scope

    Is it possible to nest process scopes? I'm trying to implement a work pattern where a user will typically follow a chain of detail pages and then return to an original page. This pattern needs to be available from pages within each detail page set, so a nested set of process scopes would implement it cleanly.
    However, when I try to use process scopes, the first returnFromProcess call terminates all of the processes, and the next call to returnFromProcess throws an illegal state exception.
    Trying to solve this myself in my backing bean code, I feel that I'm recreating a lot of the infrastructure already in place for basic process scope handling.
    Is there any way to make multiple, nested process scopes work?
    Thanks.

    In HTTP, separate frames are basically no different from separate windows, so this is a limitation of process scope.

  • How long is the life time of process scope variable ?

    Hi All,
    How long is the lifetime of a process scope variable before it expires?
    Is it the same as the lifetime of session scope?
    How can we set the value in OAS at deployment?
    Thank you,
    xtanto

    Hi,
    http://www.oracle.com/technology/products/jdev/htdocs/partners/addins/exchange/jsf/doc/devguide/communicatingBetweenPages.html
    " Finally, processScope never empties itself; the only way to clear processScope is to manually force it to clear:
    AdfFacesContext afContext = AdfFacesContext.getCurrentInstance();
    afContext.getProcessScope().clear();
    Otherwise the variable lives for the duration of the window process. The referenced document above also shows how to set a variable to process scope
    Frank
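    For completeness, putting a value into processScope and reading it back later looks roughly like this (a sketch only; the class, method and key names are illustrative, and the import path is the one assumed for ADF Faces 10.1.x):

    import oracle.adf.view.faces.context.AdfFacesContext;

    public class ProcessScopeExample {
        // Store a value so it survives postbacks for the life of the window process.
        public void remember(String selectedType) {
            AdfFacesContext afContext = AdfFacesContext.getCurrentInstance();
            afContext.getProcessScope().put("contractType", selectedType);
        }

        // Read it back later in the same window process (null after clear() or in a new window).
        public String recall() {
            return (String) AdfFacesContext.getCurrentInstance()
                                           .getProcessScope().get("contractType");
        }
    }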

  • Process scope values get cleared randomly

    Hi,
    In my backing bean I pass dynamic values as bind variables to a ViewObject from the user selection screen form data.
    I do this using a ValueChangeListener method for each field.
    In this method I put the current value into processScope.
    So in the pageDef file I have:
    <NamedData NDName="ContractTypeBind" NDType="java.lang.String"
    NDValue="${processScope.contractType}"/>
    <NamedData NDName="ContractPseNumberBind" NDType="java.lang.String"
    NDValue="${processScope.pseNumber}"/>
    After passing values like this, I have placed an LOV in the page so that it populates dynamic records from the DB based on the contract type and PSE number entered by the user.
    If I use the LOV button 3-4 times, the process scope values randomly become null.
    I mean the process scope values are getting lost.
    Why is this happening? Can anyone tell me?
    Thanks,
    Sanjaykar
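    For readers following along, what the post describes, a ValueChangeListener copying each new field value into processScope so the pageDef NamedData expressions can resolve it, would look roughly like this (a sketch only; class and method names are illustrative, not from the thread):

    import javax.faces.event.ValueChangeEvent;
    import oracle.adf.view.faces.context.AdfFacesContext;

    public class SearchForm {
        // Copies the new value into processScope so ${processScope.contractType} resolves in the pageDef.
        public void contractTypeChanged(ValueChangeEvent event) {
            AdfFacesContext.getCurrentInstance().getProcessScope()
                           .put("contractType", event.getNewValue());
        }
    }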

    Hi,
    I tried without using any ValueChangeListener and it's working fine.
    There is no need to put any values in processScope.
    I am passing values to ExecuteWithParams in the backing bean this way:
    BindingContainer bindings = getBindings();
    OperationBinding operationBinding = bindings.getOperationBinding("ProcurementLOVParams");
    Map map = operationBinding.getParamsMap();
    map.put("ContractTypeBind", conTypeChoice.getValue().toString());
    map.put("ContractPseBind", pseNumber.getValue().toString());
    Object result = operationBinding.execute();
    And this is working fine.
    Thanks,
    Sanjaykar

  • FMS3 Process Scope

    I have a suite of FMS apps that I want to deploy across different VHosts on ws2003. The principal app is deployed in packs of 15 instances, and there are about 5 smaller 'helper' apps which are mainly singletons. Each pack is a running unit, so from a process isolation point of view, setting the process scope to "app" seems fine. But this raises the following questions:
    1) What are the performance implications of each scope? My guess is that 'inst' scope is expensive. But are there scalability or resource implications for each scope? I'd be interested to know whether scope can play an active part in tuning my apps.
    2) If scope is set at the application level, and I set it to 'inst' for each application, does this mean that if I have 10 apps with 10 instances each, I create 100 FMSCore.exe processes? And does it mean that if I set 'numprocs' to 3 for each of them, I'd create 30 processes?


  • Application.xml - Process- Scope

    Server: dual-processor, dual-core Xeon, 3.4 GHz
    RAM: 2 GB DDR2
    Operating system: CentOS 4 (32-bit) - FEDORA X.X - RED HAT - ECC.
    I changed the scope in Application.xml to "inst", but FMS doesn't create more than 40 processes.
    Here is the error from the log file:
    May 17 10:15:40 Server[15925]: Failed to create process condition: errno(28).
    May 17 10:16:18 Server[15944]: Failed to create process condition: errno(28).
    May 17 10:18:44 Server[16591]: Failed to create process condition: errno(28).
    May 17 10:20:27 Server[16854]: Failed to create process condition: errno(28).
    May 17 10:21:40 Server[17120]: Failed to create process condition: errno(28).
    Why? Help me!

    Update: it doesn't occur when you put "app", but it does with "inst", for the configuration parameter <Scope> in Application.xml
    --> that is, if you want to create a core per instance instead of a core per process (which is highly desirable to prevent individual applications from crashing each other)

  • NI-scope waveform measurement

    Newbie here.
    I want to acquire a waveform from my digitizer using LabVIEW and NI-SCOPE and perform a scalar measurement on the same waveform. Looking at the example niScope EX Measurement Library.vi, it appears that a waveform is acquired and displayed in a graph, and then a second waveform is acquired and measurements are performed on it. Is there an NI-SCOPE subVI that performs a measurement and returns the acquired waveform along with the measurement? Or do I need to just acquire the waveform and perform the measurements using some other subVIs?
    Thanks.

    Hey Newbie,
    If you remove the "niScope Multi Read Cluster.vi" from the "niScope EX Measurement Library.vi" and try to run it, you will get an error that says:
    An acquisition has not been initiated.
    The "Multi Fetch Measurement Stats.vi" does not actually acquire a new set of points from the digitizer; instead it uses the set the multi read VI has already acquired (see the niScope help file for more information on Fetch VIs). The wording for these two VIs is subtly different in the help documentation: the read VI acquires a new waveform, while the measurement VI obtains the currently acquired waveform. There are two VIs that acquire data and perform a measurement within the same VI. They are called "niScope Read Measurement.vi" and "niScope Multi Read Measurement.vi". I hope this gets you pointed in the right direction.
    Regards,
    Adam
    National Instruments
    Applications Engineer

  • Need advise for processing DAQmx waveform array

    Hello All,
    I am using the following code to continuously acquire data from 2 channels, then export it to a text file for analysis with Excel or Origin. The data file should contain four columns:
    t1   Y1    t2   Y2
    I know it is convenient to use the Write Measurement File express VI, but since this express VI opens and closes the file on every loop iteration, I prefer to use the low-level File I/O functions to save computer resources (I need to reserve resources for image acquisition and analysis).
    The attached VI just shows the overall frame of the code and is not executable yet. I hope to get some suggestions from this forum to finish it.
    I have three main questions; thank you in advance if you can provide hints on any of them.
    (1) How to get correct t0 and dt values for the waveform data
    I am using a 25 Hz pulse signal to externally trigger DAQmx through PFI0. I set 100 Hz for the rate input of the DAQmx timing VI since the LabVIEW help says "If you use an external source for the Sample Clock, set this input to the maximum expected rate of that clock."
    When I use the Write Measurement File express VI, I find that in the resulting text file the dt is 0.01 instead of 0.04, i.e., the value of dt in the waveform data is not determined by the external PFI0 signal but by the rate input of the DAQmx timing VI. Also, t0 does not always start from zero. However, from the display of the data plot, I am sure the acquisition is at 25 Hz.
    Some people in this forum have told me to use the Get Waveform Components and Build Waveform functions to manually rebuild the waveform data in order to get correct t0 and dt. I tried and did not succeed. Can anyone give me more detailed hints?
    (2) How to write data of 'NChan NSample' at one time in a loop
    I have two DAQmx channels, and for each channel there are several data points. I plan to use the following method to export data:
    Suppose in one loop the DAQmx Read VI reads 10 samples from each channel, so I assume I will get four arrays of 10 elements:
    t_channel_1, Y_channel_1, t_channel_2, Y_channel_2.
    Then I use a loop of N=10 to concatenate t1, tab, Y1, tab, t2, tab, Y2, return in 'element by element' mode, i.e., do 10 string concatenations and write to the text file.
    I don't know whether the above method is efficient. Can anyone advise a better one?
    (3) Convert from timestamp to elapsed time in milliseconds
    In the final text file, the time column should contain the elapsed time in milliseconds since the start of acquisition. So I think I need to get the timestamp of the first data point so that for later data points I can use the Elapsed Time express VI to extract the elapsed time in milliseconds. However, I don't know how to get the timestamp at the start of the acquisition. Please advise if you know how.
    Sincerely,
    Dejun
    Message Edited by Dejun on 08-30-2007 10:34 AM
    Attachments:
    code.jpg ‏86 KB
    code.vi ‏49 KB
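    As an aside on question (2): the layout described there, four parallel arrays of N samples turned into N tab-delimited rows and written in one call per acquisition block rather than element by element, looks like this in ordinary code. This is a Java sketch for illustration only; all names are made up, and the LabVIEW equivalent would be building the whole spreadsheet string for the block first and then doing a single file write.

    import java.io.FileWriter;
    import java.io.IOException;

    public class WriteBlock {
        // Build one tab-delimited block (t1 \t Y1 \t t2 \t Y2 per row) and write it in a single call.
        static void appendBlock(FileWriter out, double[] t1, double[] y1,
                                double[] t2, double[] y2) throws IOException {
            StringBuilder block = new StringBuilder();
            for (int i = 0; i < t1.length; i++) {
                block.append(t1[i]).append('\t').append(y1[i]).append('\t')
                     .append(t2[i]).append('\t').append(y2[i]).append('\n');
            }
            out.write(block.toString());          // one write per acquisition block
        }

        public static void main(String[] args) throws IOException {
            // Two-sample example block; a real loop would pass each freshly read block here.
            double[] t1 = {0.00, 0.04}, y1 = {1.1, 1.2}, t2 = {0.00, 0.04}, y2 = {2.1, 2.2};
            try (FileWriter out = new FileWriter("data.txt", true)) {   // append mode
                appendBlock(out, t1, y1, t2, y2);
            }
        }
    }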

    tbob wrote:
    Ravens Fan:
    Read his post again:
    "I am using 25Hz pulse signal to externally trigger DAQmx  through PFI0.  I set 100Hz for the rate input of the DAQmx timing VI  since  LabVIEW help  says "If you use an external source for the Sample Clock, set this input to the maximum expected rate of that clock. " .
     When I use Write Measurement File express VI,  I found in the resulting text file, the dt is 0.01 instead of 0.04, i.e., the value of dt in the waveform data is not determined by external PFI0 signal, but by the rate input for the DAQmx timing VI.  Also, I found t0 does not always start from  zero.  However, from the display of data plot, I am sure the acquisition is at 25Hz."
    He says in the 1st paragraph that he sets theDAQmx timing to 100Hz because that is his maximum expected clock rate.  In the 2nd paragraph he says that in his measurement file dt is 0.01 instead of 0.04.  This indicates that the dt value is determined by the DAQmx timing rate, not the PFI0 clock rate.  I am thinking that he should set the DAQmx timing to match the PFI0 timing, 25Hz.
    Maybe this would work.
    You're right, I did misread what he said. But, I still have questions about what he said.  " I set 100Hz for the rate input of the DAQmx timing VI  ".  The code shows a rate input of 25 to the timing VI.  And nowhere else do I see a setting of 100Hz.  I ran his code (used a simulated device) and put an indicator on the to dt's in the clusters.  The came up as .04 which is what i would expect.  It is hard to comment what is going on in the file since there is nothing being send to the write text file VI.
    Message Edited by Ravens Fan on 08-31-2007 12:18 PM

  • Address from outside the oracle process' scope

    Hi,
    While researching a problem in Oracle, I've got a memory address which "does not exist". I'm not a Linux architecture expert, so maybe somebody knows a simple answer for how to recognize this address.
    Oracle 10gR2, Oracle Enterprise Linux 2.6.18-128.el5
      1* select * from x$kqfta where indx=0
    SQL> /
    ADDR           INDX    INST_ID   KQFTAOBJ   KQFTAVER KQFTANAM                         KQFTATYP   KQFTAFLG   KQFTARSZ   KQFTACOC
    0C49BF20          0          1 4294950912          2 X$KQFTA                                 1          0         60         10
    Column ADDR points to the memory where the record data is kept.
    Let's check it.
    SQL> oradebug setmypid
    Statement processed.
    SQL> oradebug peek 0x0C49BF20 100
    [C49BF20, C49BF84) = 24580007 5446514B 00000041 00000000 00000000 00000000 00000000 00000000 00000001 0000003C 00000000 0000000A 00000000 FFFFC000 00000002 ...
    The data seems to be OK.
    Next I tried to find what kind of memory is pointed to by ADDR (SGA, PGA, ...) and I failed.
      1* select spid from v$process where addr=(select paddr from v$session where sid=(select sid from v$mystat where rownum=1))
    SQL> /
    SPID
    20067
    [oracle@node0 ~]$ pmap -x 20067
    20067:   oracleorcl (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
    Address   Kbytes     RSS    Anon  Locked Mode   Mapping
    +<cut>+
    00d59000    7028       -       -       - r-x--  libjox10.so
    01436000     260       -       -       - rwx--  libjox10.so
    01477000       4       -       -       - rwx--    [ anon ]
    08048000   77032       -       -       - r-x--  oracle
    0cb82000     324       -       -       - rwx--  oracle
    0cbd3000     120       -       -       - rwx--    [ anon ]
    0ce3b000    1088       -       -       - rwx--    [ anon ]
    20000000  528384       -       -       - rwxs-    [ shmid=0x55000d ]
    b7ab7000     640       -       -       - --x--  zero
    b7b57000     128       -       -       - rwx--  zero
    b7b77000    1280       -       -       - --x--  zero
    +<cut>+
    b7fb0000      28       -       -       - rwx--  zero
    bfac6000     200       -       -       - rwx--    [ stack ]
    total kB  627220       -       -       -
    The question is:
    Where is the memory pointed to by ADDR? How can I find it with Linux tools?
    Thanks for any help,
    Bartek

    Er, no, address 0x0C49BF20 is within the process:
    $ pmap -x 20067
    20067:   oracleorcl (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
    Address   Kbytes     RSS    Anon  Locked Mode   Mapping
    08048000   77032       -       -       - r-x--  oracle
    0cb82000     324       -       -       - rwx--  oracle
    So, there is a memory region at 08048000 that is 77032K long, making the memory region span addresses 0x08048000-0x0CB81FFF, and 0x0C49BF20 is within that range. The next question is "what is at that memory region", and for that you should look at:
    $ less /proc/20067/smaps
    and find that region.
    HTH
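    To make the arithmetic behind that answer explicit: pmap reports sizes in kilobytes, so the "oracle" text mapping starting at 0x08048000 and 77032 KB long ends just before 0x08048000 + 77032 * 1024 = 0x0CB82000, which is why 0x0C49BF20 falls inside it. A small illustrative snippet (values copied from the listing above, names made up):

    public class AddrCheck {
        public static void main(String[] args) {
            long base = 0x08048000L;
            long sizeBytes = 77032L * 1024;      // pmap reports the size in KB
            long end = base + sizeBytes;         // 0x0CB82000, so the region ends at 0x0CB81FFF
            long addr = 0x0C49BF20L;
            System.out.printf("region 0x%08X-0x%08X, addr 0x%08X inside: %b%n",
                    base, end - 1, addr, addr >= base && addr < end);
        }
    }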

  • Do you have a transaction scope that spans multiple requests.

    We have an application that includes multiple tabs, which are really iframe instances. We need to maintain state for the entire time a tab is open, which may be across multiple requests.
    I am not comfortable making all our backing beans "session" scope, and making them "request" scope forces us to do lots of work (DB access etc...) on every post-back to re-initialize the backing bean.
    I have been looking at both Shale and JBoss Seam to give me this "conversational" scope. I have looked at "process" scope; however, we may have the same backing bean in use for multiple tabs, and would therefore need it linked to something like the viewId.
    Does ADF plan on enhancing the "process" scope functionality, or is it OK to add Seam or Shale at the front end of the ADF processing lifecycle?
    Your guidance would be appreciated.

    The processScope functionality seems pretty crude.
    I was looking to define which data elements of the backing bean need to be stored (maybe using annotations) and have them restored automatically before the APPLY_REQUEST lifecycle gets initiated. I can write this functionality, however I was looking for a more robust solution.

  • Do you have a "transaction" scope for multi-request views

    We have an application that includes multiple tabs, which are really iframe instances. We need to maintain state for the entire time a tab is open, which may be across multiple requests.
    I am not comfortable making all our backing beans "session" scope, and making them "request" scope forces us to do lots of work (DB access etc...) on every post-back to re-initialize the backing bean.
    I have been looking at both Shale and JBoss Seam to give me this "conversational" scope. I have looked at "process" scope; however, we may have the same backing bean in use for multiple tabs, and would therefore need it linked to something like the viewId.
    Does ADF plan on enhancing the "process" scope functionality, or is it OK to add Seam or Shale at the front end of the ADF processing lifecycle?
    Your guidance would be appreciated.

    Hi,
    have a look at the ADF developer documentation on OTN and read the chapters on task flows. You can have beans in a task flow whose scope is set to backing bean scope.
    If a task flow is then used multiple times in a region, the backing bean scope makes sure the two instances are isolated.
    You can't mix and match the Seam or Shale lifecycle with ADF Faces RC and vice versa.
    Frank

  • Design Question: Multi-page Enrollment Process?

    I'm thinking of 3 different possible design approaches for a multi-page customer enrollment process and wanted to know if anyone has any recommendations. There are 3-5 pages in the sign-up process, each page containing form fields into which the user enters information. The main design issues are how to save the data between pages, not committing the data to the database unless the user completes the entire process, and handling cases where the user does not complete the entire process:
    1. Store form field inputs in a temporary database table; if the user completes the process, copy the data to the real database table. Rows can be deleted from the temporary table as part of a nightly cleanup process (i.e., rows where the customer bails part way through enrollment).
    2. Use processScope variables to hold the user input; commit the data to the database after the user completes the process.
    3. Store the user input in View Objects with transient fields as a way to preserve the data; commit the data to the database after the user completes the process.
    Any suggestions or examples?
    Thanks
    Using JDev 10.1.3 ADF BC / JSF

    Sorry to revive this "old" thread, but could someone please elaborate on this:
    Implement a state machine for each workflow. Example: basic-info -> payment-info -> success constitutes one workflow for me. This state machine would be a stateful Java class (holding the current state/page) implementing a StateMachine interface with a method canTransitionTo(String pageName). Declare this as a session-scoped managed bean. You can optionally wire this to your other controller backing beans to modify the current state when the user presses "Next" on the individual pages.
    So basically, you make an interface with a single method canTransitionTo(String pageName) [by the way, this seems like it returns a boolean ... although wouldn't a method like getAvailableTransitions(String pageName) be more appropriate?]. Then your state machine would have some variable that holds the current state, for example, "page one".
    My question is this. Suppose we have 3 pages (much like the example): page one -> page two -> page three.
    In the page three action, how do we check that the user is allowed to come to this page? The state machine would not have any state in it yet ... (maybe this was what points 2 and/or 3 were saying, but I couldn't understand exactly what was meant... PhaseListener?)
    Thanks
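    To make the suggestion above concrete, here is a minimal sketch of such a state machine (interface and class names are illustrative, not from the thread), intended to be declared as a session-scoped managed bean:

    interface StateMachine {
        boolean canTransitionTo(String pageName);
        void transitionTo(String pageName);
    }

    public class EnrollmentStateMachine implements StateMachine {
        // Pages in the order the workflow allows.
        private static final java.util.List<String> FLOW =
                java.util.Arrays.asList("basic-info", "payment-info", "success");
        private String currentPage = FLOW.get(0);

        // Allow staying on the current page or moving exactly one step forward.
        public boolean canTransitionTo(String pageName) {
            int target = FLOW.indexOf(pageName);
            return target >= 0 && target <= FLOW.indexOf(currentPage) + 1;
        }

        // Called from each page's "Next" action; rejects jumps past the allowed step.
        public void transitionTo(String pageName) {
            if (!canTransitionTo(pageName)) {
                throw new IllegalStateException("Illegal transition to " + pageName);
            }
            currentPage = pageName;
        }
    }

    With this in place, the page three action can simply call canTransitionTo("success"): because the bean starts at the first page, a user who jumps straight to page three is rejected, which answers the question above without needing a PhaseListener.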

  • Question to core-processes

    We currently have following configuration set regarding processes:
    <Process>
      <Scope>app</Scope>
      <Distribute numprocs="11">insts</Distribute>
      <LifeTime>
        <RollOver></RollOver>
        <MaxCores></MaxCores>
      </LifeTime>
      <MaxFailures>5</MaxFailures>
      <RecoveryTime>5</RecoveryTime>
    </Process>
    We are running non-stateless applications, so we can't use the roll-over configuration.
    This leads to instances running for over 30 days (with several connects and disconnects), resulting in core processes that don't seem to work properly after about 30 days. They still respond but use a high amount of virtual memory and seem to have utilization-related problems.
    What we could do is manually stop an instance via the application itself if a certain state is matched. But we would have to restart the instance immediately.
    With the configuration above, is it possible that when I restart an instance via the Flash Media Administration Console it is assigned to the same core process, or will FMS assign the instance to another core process?
    Is there a way to ensure that the instance gets assigned to a different core process, so the old core process will be stopped?
    Any help is appreciated.
    Best regards
    Suha

    Well, I guess so too, but it would help to be sure.
    Just to make sure: we don't want to run the same instance in the same old core process! I want to make sure that if I terminate an instance via code on a certain state, it gets assigned to a fresh new core process.
    I once tried to simply restart an instance via the Flash Media Admin Console that had been running a long time and did not write any more instance log files. But after the restart it still did not write any more log data. So my guess was that the restarted instance was not assigned to a new core process but to the same old core process, which was, for whatever reason, not able to write any more log data.
    Other instances of the same application running on the same machine still wrote log data.
    If I did not get the documentation wrong, our current configuration tells FMS to assign all instances of one application to up to eleven core processes. How can we make sure that an old, long-running core process gets terminated? I guess that all still-running instances in this process have to be stopped.
    But we can't do it that way; we could only try to restart instances, which only makes sense when it is certain that a restarted application instance will not be assigned to the same old process, nor will newly started instances. Is there a way to achieve this without using the rollover configuration?

  • Waveform editor clips won't update

    Hello, I am very frustrated, haha. Whenever I double-click an audio file, it brings up the waveform editor. I add my effects, but when I save it or go back to the Soundtrack project, it isn't updated. How do I save my edited clips and get them to have the effects on them in the main project?
    Thanks!

    It is very frustrating.
    Here's what I can see.
    If I import a file, double-click it, do some processing to it, and close the File Editor, what happens is that the entire file is processed, the waveform doesn't update, and if I save and re-open the project the effect is gone. The waveform, however, now looks like it's processed even though it isn't.
    Apparently the application is not saving this audio file project. I think it's not right that it doesn't, or at least that it doesn't warn you that it isn't going to, but that's another thing for the list of things to be fixed.
    What you need to do is make or save a new .stap file. There are two ways to do this.
    EITHER
    After you process the clip in the Audio Editor, Save. It will prompt you to save it as an audio file project.
    Then the waveform will update, and everything will save properly.
    OR
    First choose Menu -> Clip -> Replace with Independent Audio File Project, which will create a new .stap file, then process; again the waveform updates properly and everything is saved and there when re-opened.
    Try experimenting to see the difference.

  • High Speed Acquisition, Processing, Then output Latency on a PC

    I am trying to specify hardware to purchase. I have a project where the objective is to acquire two analog channels at 10 MS/s (12-bit or 16-bit data transfers), e.g. a PCI-5105; process one waveform to modify it; write the other to file for low-duty-cycle analysis; then output the modified acquired waveform onto one 12-bit kHz analog output and 9 digital outputs simultaneously (various multifunction DAQ devices meet this spec). These inputs and outputs must run over a long time, many minutes uninterrupted. The digital outputs must be accurate to 0.1 us, that is, true to the incoming waveform characteristics (10 MS/s). I have created a bandwidth budget for all the hardware involved: incoming is 20 MS/s (2 channels at 10 MS/s), PCI bus bandwidth is 133 MS/s divided by the number of devices transmitting, and writing to the hard drive is about 30 MB/s (15 MS/s). The processor is GHz-class running Windows. I have drafted a LabVIEW code methodology using producer/consumer queues, Win32 file writing, and 4 while loops with various applied execution thread priorities. I expect the output would need to be delayed with a buffer to allow for a variable delay due to processing. I would like to estimate the "latency" between the input and the output, that is, the shortest sustained period of time between the input and the output stream.
    The incoming waveform is a square-wave-like pulse stream. The conversion is to find the pulse width and amplitude of each pulse, adjust the timing slightly with fixed parameters, and convert it by algorithmically configuring 1 analog output and 9 digital outputs in a modified way that has been synchronized to the incoming pulses.
    The question is: should I try to do this on a PC, or should I go with a PXI-based RT system? Even with a PXI RT system, how can the question of what the latency will be be answered before purchase? Supposing even the RT system is inadequate.
    The desired latency is hoped to be in the millisecond range.
    Experienced judgement would be appreciated.
    Thanks John

    Manliff,
    Many vendors can provide an ADC evaluation board. Such a board automatically provides a set clock rate, or allows an external clock, to control the conversions.
    One particular board I have has a parallel interface built in. This board only runs at ~15 kS/s at 18 bits, but I know there are much faster boards out there. The parallel setup gives each bit a dedicated digital line. With an R-Series card, I can poll these digital lines simultaneously at 15 kHz and read the conversions. The same method would be used for faster ADCs.
    One thing to keep in mind: many ADCs have a serial output interface, so if you are trying to use an ADC at 10 MHz and 16 bits, then the bits are actually coming out at over 160 MHz on one DIO line, which is faster than the R-Series can sample. That is why it is good to look for an evaluation board that can output a parallel signal for you. Since it has the serial conversion and clock rates built in, it is easy to just poll all the digital lines at once instead of a single line at a very fast rate.
    TI also has many ADCs; here are some 16-bit ADCs with built-in parallel interfaces. I didn't see an evaluation board, so you may have to build some signal conditioning yourself, however. I may order one to evaluate in order to do it myself.
    Rob K
    Measurements Mechanical Engineer (C-Series, USB X-Series)
    National Instruments
    CompactRIO Developers Guide
    CompactRIO Out of the Box Video
