Accessing ai0 to ai16 on DAQ and performing operations on them

Hi all,
I have a problem. I am using an NI DAQ interface board, and I am able to get the waveforms (a plot of the 16 channels collectively) and work the interface correctly. The problem is that I have to perform different operations on each of the 16 channels, and I am finding it difficult to access each channel individually to perform those operations. This also involves continuous signal acquisition and generation, which need to be synchronized as well. If anyone can suggest a way to access the 16 channels individually and perform the operations, please help me out.
I tried pushing the data into a queue, but the program indicates a memory-buffer-full error after a few hours of running.
Thank You

@PCSNF I will get the code ASAP... I have it on another system right now.
@smercurio_fc I am dealing with dynamic data! The "operations" are primarily peak detection; I also have to sort out sets of data, which I think I need arrays for, so in a way it is a mixture of all the data types available in LabVIEW!
I couldn't find the split channels function.
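
In LabVIEW, two things usually address this: to get at individual channels, convert the dynamic data to a 2D array (or use the Split Signals function from the Signal Manipulation palette) and pull out each channel with Index Array; and to stop the queue from growing without bound, wire a maximum queue size into Obtain Queue so the producer loop blocks when the consumer falls behind. That unbounded growth is exactly what produces a memory-full error after hours of continuous acquisition. Here is a minimal, language-agnostic sketch of that bounded producer/consumer pattern in Java; the hardware read is a placeholder, not a real driver call, and all names are illustrative:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BoundedDaqPipeline {
        static final int CHANNELS = 16;
        static final int BLOCK = 1000;  // samples per channel per read

        // Bounded queue: when full, the producer blocks instead of letting
        // memory grow without limit (the "buffer full after hours" symptom).
        static final BlockingQueue<double[][]> queue = new ArrayBlockingQueue<>(64);

        public static void main(String[] args) {
            Thread producer = new Thread(() -> {
                try {
                    while (true) {
                        double[][] block = readHardwareBlock(); // placeholder DAQ read
                        queue.put(block);                       // blocks if consumer lags
                    }
                } catch (InterruptedException e) { /* shutdown */ }
            });
            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        double[][] block = queue.take();
                        for (int ch = 0; ch < CHANNELS; ch++) {
                            processChannel(ch, block[ch]);      // per-channel operation
                        }
                    }
                } catch (InterruptedException e) { /* shutdown */ }
            });
            producer.start();
            consumer.start();
        }

        // Stand-in for the driver call returning one block of all 16 channels.
        static double[][] readHardwareBlock() {
            return new double[CHANNELS][BLOCK];
        }

        // Example per-channel operation: naive peak detection.
        static void processChannel(int ch, double[] samples) {
            double peak = Double.NEGATIVE_INFINITY;
            for (double s : samples) {
                if (s > peak) peak = s;
            }
            System.out.println("ai" + ch + " peak = " + peak);
        }
    }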

Similar Messages

  • Perform Operation on AD User Without DN

    The unique identifier we use throughout our organisation is the user's employee ID, which has traditionally been put into the Initials field on the General tab.
    When HR are processing employees that exit the company, they have easy access to the user's name and ID, but not to their username, which unfortunately is not at all what you would call a consistent format.
    I have plenty of scripts that will perform AD operations using the DN, but in this case I need it to look up a user based on their "initials" listed in a CSV and move those users to a different OU.
    This is the script I have for doing it with their username but I have no idea if I can just look up the initials field and perform operations based on that instead or if it needs to be a two step process that finds the user, retrieves their DN and performs
    the operation on that.
    # Specify target OU.
    $TargetOU = "ou=NewUsers,ou=West,dc=MyDomain,dc=com"
    # Read user sAMAccountNames from csv file (field labeled "Name").
    Import-Csv -Path Users.csv | ForEach-Object {
        # Retrieve DN of User.
        $UserDN = (Get-ADUser -Identity $_.Name).distinguishedName
        # Move user to target OU.
        Move-ADObject -Identity $UserDN -TargetPath $TargetOU
    }

    It's going to be horribly slow if you have to search for each user by initials.  
    I'd build a lookup table first, and then work out of that:
    $EmpID = @{}
    Get-ADUser -Filter * -Properties Initials |
    foreach { $EmpID[$_.Initials] = $_.distinguishedname }
    # Specify target OU.
    $TargetOU = "ou=NewUsers,ou=West,dc=MyDomain,dc=com"
    # Read user EmployeeID from csv file ("Initials" in csv)
    Import-Csv -Path Users.csv | ForEach-Object {
    # Retrieve DN of User.
    $UserDN = $EmpID[$_.Initials]
    # Move user to target OU.
    Move-ADObject -Identity $UserDN -TargetPath $TargetOU
    }
    You might want to add a -SearchBase to limit the Get-ADUser if you can.

  • BPEL consuming wsdl & perform operations in DB.

    Hi,
    Please help me out, it's very urgent. I have WSDLs which I have to consume, perform operations on the DB, read the data, and send it back to the DB.
    E.g. I have two databases: I read from one DB, perform the operation given in the WSDLs, store the info in the other DB, and then send it back to the same DB.

    Irfan,
    Sorry... I didn't really understand your need.
    You want to read from DB A and insert the data into DB B?
    Arik

  • Windows 7 won't allow me access to some of my files and folders

    I installed a new copy of Windows 7 Professional SP1 on a new PC several months ago. I then restored my documents from a backup made on my old XP system. Ever since installation, I have had problems with Windows denying me access to some of my files
    and folders. I have full administrative rights on the machine. The various problems I've had, and steps I've taken to resolve them are:
    1. Try to open a file and get a message telling me I need permission from Steve to open this file. I am Steve.
    2. Try to open a file and get a message telling me I don't have permission to open it, but click this button to get permanent permission. I click the button, but still don't have permission.
    3. I then started opening the security settings for individual files and setting myself to have full access, and as the owner. This worked for individual files.
    4. I then tried setting the same security settings for entire folders. This didn't work and didn't improve anything.
    5. I tried setting the security settings so that Administrators owned the files/folders (I am an administrator). This also didn't achieve anything.
    6. After several months and still getting these messages, I got sick of trying to overcome them and decided to apply these security settings to my entire C: drive. I set myself to have full access and to be the owner of the C: root directory and all sub-objects.
    7. I inserted my Windows installation disk and tried to run the System Repair. It told me that System Repair was not compatible with the version of Windows I was running! WTF???
    Not only did setting permissions for the entire C: drive not help me, it has made matters a WHOLE lot worse. I now don't have permission to open almost all my files/folders. Most applications I try to run (Word, Excel, etc.) either tell me they can't open
    a file or let me open and edit it, then tell me I don't have permission to save to that location. When I press the Send/Receive button in Outlook, it now tells me I don't have permission to perform that action. I've tried going back to a restore point before
    I made this change, but that didn't change anything either.
    This problem has been driving me insane for months and now my PC is almost completely unusable and I'm considering a disk format and re-install Windows. When I initially installed it, so many problems came up in the first few weeks that I'm very reluctant
    to go through this procedure again. I spent hours and hours trawling the web looking for solutions to things that didn't work.
    Does anyone have any good news for me? Because as far as I'm concerned, Windows 7 is so far a piece of garbage.

    Hi Steve1904,
    So you used Backup and Restore to restore your files from Windows XP to Windows 7 directly?
    That is not expected to work.
    If you would like to transfer files between Windows XP and Windows 7, you need another tool called Windows Easy Transfer.
    See the article below if you would like to upgrade from Windows XP to Windows 7:
    Upgrading from Windows XP to Windows 7
    If possible, follow the steps there, then things should be OK.
    Best regards
    Michael Shao
    TechNet Community Support

  • How can I set up a guest access point with a Time Capsule and an Airport Extreme? I am using a Telus router with the Time Capsule used as a wireless access point (bridge mode). I don't want the guest access point to have access to my network.


    The Guest Network function of the Time Capsule and AirPort Extreme cannot be enabled when the device is in Bridge Mode. Unfortunately, with another router...the Telus...upstream on your network, Bridge Mode is indicated as the correct setting for all other routers on the network.
    If you can replace the Telus gateway with a simple modem (that performs no routing functions), you should be able to configure either the Time Capsule or the AirPort Extreme....whichever is connected to the modem....to provide a Guest Network.

  • Business Intelligence and performance point problem

    Hi Everyone,
    Does anyone know why creating a dashboard from a SharePoint list is such a hassle? I have configured PerformancePoint Services, a Secure Store, and a business intelligence website that has a data connection library and a PerformancePoint content list. Unfortunately, any time I try to use Dashboard Designer to create a new data source and specify the site settings (using the unattended account method), I can't save it. It gives me an error claiming that I either don't have permissions or the data source doesn't exist.
    A search around the web for answers doesn't seem to solve my problem, as it suggests the problem is associated with many things which I'm not clear about. Could someone who is good at this at least tell me the critical things to check and the clearest way to set this up?
    Thanks,
    Dominic

    Hi Dominic,
    According to your error, "I either don't have permissions or the data source doesn't exist", it should be related to the unattended account's permissions.
    Once the unattended service account has been configured, you must grant that account access to your data sources:
    For SQL Server data, the account must have a SQL logon with db_datareader permissions on each database that you want to access.
    For SQL Server Analysis Services data, the account must have read access to the cube or an appropriate portion of the cube, depending on your needs.
    For Excel Services data, the account must have access to the Excel workbook in a SharePoint document library.
    For data in a SharePoint list, the account must have read access to the list.
    Reference:
    https://technet.microsoft.com/en-us/library/ee836145.aspx
    Also you can have a look at the blog:
    http://www.chrismcnulty.net/blog/Lists/Posts/Post.aspx?ID=74
    https://yossidahan.wordpress.com/2012/08/14/cant-get-ssas-databases-to-appear-in-performance-point-dashboard-designer-check-you-adomd-net-version/
    Best Regards,
    Eric
    TechNet Community Support

  • Calculating the memory and performance of a oracle query

    Hi,
    I am developing an application in Java with Oracle as the back-end. My application requires a lot of queries to be executed, and the system is getting slow because of those queries.
    So I planned to develop a standalone application in Java that shows statistics like memory use and performance. For example, if I enter an SQL query in the text box, my standalone application should display the processing time required to fetch the values and the memory used by that query.
    Can anybody give ideas, suggestion, etc etc...
    Thanks in Advance
    Regards,
    Rajkumar

    This is now an Oracle question, not a JDBC question. :)
    The following are samples of explain plan / autotrace / SQL*Trace.
    (You really need to read books on Oracle SQL tuning...)
    SQL> create table a as select object_id, object_name from all_objects
    2 where rownum <= 100;
    Table created.
    SQL> create index a_idx on a(object_id);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'A');
    SQL> explain plan for select * from a where object_id = 1;
    Explained.
    SQL> select * from table(dbms_xplan.display());
    PLAN_TABLE_OUTPUT
    Plan hash value: 3632291705
    ------------------------------------------------------------------------------------
    | Id | Operation                   | Name  | Rows | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT            |       |    1 |    11 |     2   (0)| 00:00:01 |
    |  1 | TABLE ACCESS BY INDEX ROWID | A     |    1 |    11 |     2   (0)| 00:00:01 |
    |* 2 |  INDEX RANGE SCAN           | A_IDX |    1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
       2 - access("OBJECT_ID"=1)
    SQL> set autot on
    SQL> select * from a where object_id = 1;
    no rows selected
    Execution Plan
    Plan hash value: 3632291705
    ------------------------------------------------------------------------------------
    | Id | Operation                   | Name  | Rows | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT            |       |    1 |    11 |     2   (0)| 00:00:01 |
    |  1 | TABLE ACCESS BY INDEX ROWID | A     |    1 |    11 |     2   (0)| 00:00:01 |
    |* 2 |  INDEX RANGE SCAN           | A_IDX |    1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
       2 - access("OBJECT_ID"=1)
    Statistics
    1 recursive calls
    0 db block gets
    1 consistent gets
    0 physical reads
    0 redo size
    395 bytes sent via SQL*Net to client
    481 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    SQL> exec dbms_monitor.session_trace_enable(null,null,true,true);
    -- SQL> alter session set events '10046 trace name context forever, level 12';
    -- SQL> alter session set sql_trace = true;
    PL/SQL procedure successfully completed.
    SQL> select * from a where object_id = 1;
    no rows selected
    SQL> exec dbms_monitor.session_trace_disable(null, null);
    -- SQL> alter session set events '10046 trace name context off';
    -- SQL> alter session set sql_trace = false;
    PL/SQL procedure successfully completed.
    SQL> show parameter user_dump_dest
    /home/oracle/admin/WASDB/udump
    SQL>host
    JOSS:oracle:/home/oracle:!> cd /home/oracle/admin/WASDB/udump
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> ls -lrt
    -rw-r----- 1 oracle dba 2481 Oct 11 16:38 wasdb_ora_21745.trc
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> tkprof wasdb_ora_21745.trc trc.out
    TKPROF: Release 10.2.0.3.0 - Production on Thu Oct 11 16:40:44 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    JOSS:oracle:/home/oracle/admin/WASDB/udump:!> vi trc.out
    select *
    from
    a where object_id = 1
    call     count    cpu  elapsed  disk  query  current  rows
    -------  -----  -----  -------  ----  -----  -------  ----
    Parse        1   0.00     0.00     0      0        0     0
    Execute      1   0.00     0.00     0      0        0     0
    Fetch        1   0.00     0.00     0      1        0     0
    total        3   0.00     0.00     0      1        0     0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 55
    Rows Row Source Operation
    0 TABLE ACCESS BY INDEX ROWID A (cr=1 pr=0 pw=0 time=45 us)
    0 INDEX RANGE SCAN A_IDX (cr=1 pr=0 pw=0 time=39 us)(object id 65441)
    Elapsed times include waiting on following events:
      Event waited on                     Times Waited  Max. Wait  Total Waited
      ----------------------------------  ------------  ---------  ------------
      SQL*Net message to client                      1       0.00          0.00
      SQL*Net message from client                    1      25.01         25.01
    Hope this helps
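
    If you still want the standalone Java tool from the original question, the elapsed-time part is straightforward with plain JDBC. Below is a minimal sketch; the URL, user and password are placeholders, it assumes the Oracle JDBC driver is on the classpath, and note that memory and I/O statistics are only visible server-side, which is why the tools above are the right place for those:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryTimer {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                 Statement stmt = conn.createStatement()) {
                String sql = "select * from a where object_id = 1";
                long start = System.nanoTime();
                int rows = 0;
                try (ResultSet rs = stmt.executeQuery(sql)) {
                    while (rs.next()) rows++;   // force a full fetch
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(rows + " rows in " + elapsedMs + " ms");
            }
        }
    }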

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding performance of using char[i] arrays versus Strings and StringBuilder classes using charAt() methods.
    I read a file one line/record at a time and then I process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder too) class using String.charAt(i) methods is several times (5 times+?) slower than referring to a char of an array with index.
    My question: is this correct observation re charAt() versus char[i] performance difference or am I doing something wrong in case of a String class?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() method out of the 'for loop' and used integer length local variable, as you have in your code, the performance is very close between array of char and String charAt() approaches.
    >
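    (For reference, the change being quoted is just hoisting length() into a local variable; a minimal illustrative sketch, with made-up method names:)

    class CharAtDemo {
        // calls s.length() on every loop iteration
        static int countCommasSlow(String s) {
            int count = 0;
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) == ',') count++;
            }
            return count;
        }

        // reads the length once into a local, as described in the quote
        static int countCommasHoisted(String s) {
            int count = 0;
            int n = s.length();
            for (int i = 0; i < n; i++) {
                if (s.charAt(i) == ',') count++;
            }
            return count;
        }

        public static void main(String[] args) {
            String s = "a,b,c";
            System.out.println(countCommasSlow(s) + " " + countCommasHoisted(s));
        }
    }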
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files take that into account. There have been some attempts (you can find them on the net) for using various 'escaping' techniques to escape those characters where they occur but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folks to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code and complex Regex is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you were already needing to perform some substantial data validation or processing the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details like String versus array is way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later, if and ONLY if that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
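
    To make the physical/logical record idea above concrete, here is a small sketch in the same spirit. This is not the poster's framework; the names are illustrative, and the comma counting is deliberately naive (a real parser must count fields with quote awareness):

    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.List;

    public class DelimitedScanner {

        // Index of the first occurrence of 'delim' at or after 'from', or -1.
        static int indexOf(byte[] buf, byte[] delim, int from) {
            outer:
            for (int i = from; i <= buf.length - delim.length; i++) {
                for (int j = 0; j < delim.length; j++) {
                    if (buf[i + j] != delim[j]) continue outer;
                }
                return i;
            }
            return -1;
        }

        // Split the buffer into "physical records" on any of the delimiters.
        static List<String> physicalRecords(byte[] buf, byte[][] delims) {
            List<String> records = new ArrayList<>();
            int pos = 0;
            while (pos < buf.length) {
                int best = -1;
                int len = 0;
                for (byte[] d : delims) {          // earliest match of any delimiter wins
                    int hit = indexOf(buf, d, pos);
                    if (hit >= 0 && (best < 0 || hit < best)) {
                        best = hit;
                        len = d.length;
                    }
                }
                if (best < 0) {
                    records.add(new String(buf, pos, buf.length - pos, StandardCharsets.US_ASCII));
                    break;
                }
                records.add(new String(buf, pos, best - pos, StandardCharsets.US_ASCII));
                pos = best + len;
            }
            return records;
        }

        // Merge physical records into logical ones: a record left short by an
        // embedded delimiter is joined with the following record(s) until the
        // expected field count is reached.
        static List<String> logicalRecords(List<String> physical, int expectedFields) {
            List<String> logical = new ArrayList<>();
            StringBuilder pending = new StringBuilder();
            for (String rec : physical) {
                if (pending.length() > 0) pending.append('\n'); // restore the split point
                pending.append(rec);
                if (pending.toString().split(",", -1).length >= expectedFields) {
                    logical.add(pending.toString());
                    pending.setLength(0);
                }
            }
            if (pending.length() > 0) logical.add(pending.toString());
            return logical;
        }

        public static void main(String[] args) {
            byte[] data = "a,\"x\r\ny\",c\r\nd,e,f".getBytes(StandardCharsets.US_ASCII);
            byte[][] delims = { "\r\n".getBytes(StandardCharsets.US_ASCII) };
            // The embedded CR/LF stays inside the first logical record.
            System.out.println(logicalRecords(physicalRecords(data, delims), 3));
        }
    }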

  • Isolation level and performance impact?

    Hi
    I'm new to BDB JE and building some prototypes to evaluate it.
    Given a simple use case of storing the key/value pair <String, List<Event>>, mapping a user to his/her list of events, in the DB. New events are added for the user; this happens (although fairly rarely) concurrently.
    Using Serializable isolation will prevent any corruption of the list of events, since the events are effectively added serially for the user. I was wondering:
    1. if there are any lesser levels of isolation that would still be adequate
    2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    3. building on 2. is there performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there are no lock contention (ie, no concurrent updates) vs not using transactions at all?
    Thanks!
    Peter

    Have you seen this section of the Getting Started Guide on isolation levels in JE? http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html
    Our default is Repeatable Read, and that could be sufficient for your application depending on your access patterns, and the semantic sense of the items in your list. I think you're saying that the data portion of a record is the list of events itself. With RepeatableRead, you'll always see only committed data, and retrieving that record from a JE database will always return a consistent view of a given list. See http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html#serializable for an explanation of what additional guarantee you get with Serializable.
    >
    2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    >
    Yes, there is an additional cost. When using Serializable isolation, additional locks are taken on adjacent data records. In addition to the cost of acquiring the lock (which would be low in a non-contention case), there may be additional I/O needed to fetch adjacent data records.
    >
    3. building on 2. is there performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there are no lock contention (ie, no concurrent updates) vs not using transactions at all?
    >
    In (2) we compared the cost of Serializable to RepeatableRead. In (3), we're comparing the cost of non-transactional access to the default Repeatable Read transaction.
    Non-transactional access is always a bit cheaper, even if there is no lock contention. On top of the cost of acquiring the locks, transactional operations use more memory and disk space and execute some transaction setup and teardown code. If there are concurrent operations, even if there is no contention on a given lock, there could be some stress on the lock table latches and transaction tables. That said, if your application is I/O bound, the CPU differences between non-txnal and txnal operations become more of a secondary factor. If you're I/O bound, the memory and disk space overhead does matter, because the cache is used more efficiently with non-txnal operations.
    Regards,
    Linda
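
    To make the comparison concrete, here is a minimal sketch of a read-modify-write of one user's serialized event list under Serializable isolation, based on the JE API as documented; the names and the string encoding of the event list are illustrative. Dropping the TransactionConfig gives the default Repeatable Read behaviour, and passing null instead of a transaction gives the non-transactional variant:

    import java.io.File;
    import java.nio.charset.StandardCharsets;
    import com.sleepycat.je.*;

    public class IsolationDemo {
        public static void main(String[] args) {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            // The environment home directory must already exist.
            Environment env = new Environment(new File("/tmp/je-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "userEvents", dbConfig);

            // Serializable isolation for this transaction only; the
            // environment default remains Repeatable Read.
            TransactionConfig txnConfig = new TransactionConfig();
            txnConfig.setSerializableIsolation(true);
            Transaction txn = env.beginTransaction(null, txnConfig);

            DatabaseEntry key = new DatabaseEntry("peter".getBytes(StandardCharsets.UTF_8));
            DatabaseEntry data = new DatabaseEntry();
            // Read the current (serialized) event list, append, write back.
            String events = db.get(txn, key, data, LockMode.DEFAULT) == OperationStatus.SUCCESS
                    ? new String(data.getData(), StandardCharsets.UTF_8) : "";
            String updated = events + "login@t1;";
            db.put(txn, key, new DatabaseEntry(updated.getBytes(StandardCharsets.UTF_8)));
            txn.commit();

            db.close();
            env.close();
        }
    }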

  • Daq and sensors question

    Hi, I am working on a semester long project:
    build a data acquisition system, with the following sensors:
    RPM
    Torque
    Strain of material
    Temperature
    Noise
    Vibration
    I am starting from scratch. I have experience in digital design, some embedded systems work, and the usual C/C++.
    I don't have much experience with DAQs and have no idea what's out there or how to start. I have a limited budget of, let's say, $250. The system needs to be portable as well, as it will be put on a moving object. I do have access to LabVIEW through school.
    What I am looking for is some pointers and help on getting started,
    info about sensors, about DAQs, and all that. So any info is much
    appreciated. I'm not even sure if I'm posting in the right forum.
    thank you
    -Mike T,

    Can you tell us what the unit under test is? The RPM sensor may be a once-per-revolution pulse.
    PCB, www.pcb.com, has some nice vibration sensors.
    Thermocouples are inexpensive, and strain sensors may be a bit more expensive.
    For portability, you may be interested in USB DAQ devices. We have several that will work with LabVIEW.
    Which university are you at? There are some advanced signal processing libraries that might be available to you for analyzing and displaying the data.
    Is there a traditional instrument that has been used in the past?
    Preston Johnson
    Principal Sales Engineer
    Condition Monitoring Systems
    Vibration Analyst III - www.vibinst.org, www.mobiusinstitute.com
    National Instruments
    [email protected]
    www.ni.com/mcm
    www.ni.com/soundandvibration
    www.ni.com/biganalogdata
    512-683-5444

  • DAQ and timed loop

    Hi,
    I have a question about DAQ and timed loops. I used a timed loop while acquiring data. I need a timed loop since I have two more loops in my application and I need to give some priority to them.
    Data acquisition should have a high priority. But the example code for DAQ always uses a while loop. Is it wrong to use a timed loop in a DAQ application, or is there an unexpected result from this usage?

    You could use timed loops in data acquisition operations as well.
    But one thing you will have to watch is the 'number of samples per channel' terminal of the DAQmx Read function.
    Suppose your rate is 1000 samples/sec in your DAQmx Timing VI.
    In continuous acquisition, if you specify the number of samples per channel as 500, then instead of performing 2 iterations/sec to get your 1000 samples as a normal while loop would, your timed loop will run once per second and you will get an error that all samples could not be acquired.

  • NI-DAQ and tcl/tk?

    Has anyone had any luck using tcl/tk as the GUI of an
    application that uses the NI-DAQ libraries? I'd like to
    know what you did to bring the two together (the basics).
    Thanks
    Patrick Cevasco

    Patrick Cevasco wrote:
    >
    >Has anyone had any luck using tcl/tk as the GUI of an
    >application that uses the NI-DAQ libraries? I'd like to
    >know what you did to bring the two together (the basics).
    Hi Patrick,
    Not in Tcl/Tk, in Java. The idea will probably be the same,
    however:
    The C part:
    Once you have Ni-Daq installed (verify this by using the NiDaq
    test panel), there is a header file, \include\nidaq.h,
    which gives access to all NiDaq functions. Include this file.
    When compiling, also use the library \lib\nidaq32.lib
    Now you only need to access C from Tcl. As far as I know,
    that's quite easy. In Java, I had to make an intermediate DLL,
    which can be accessed via the Java Native Interface, and which
    passes all
    calls to the NiDaq functions.
    To make the required DLL, I used the following line: (MSVC,
    compiled from the command line)
    "cl -Ic:\ni-daq\include -Ic:\java\jdk\include -Ic:\java\jdk\include\win32
    c:\ni-daq\lib\nidaq32.lib -LD ptt_util_NiDaqLib.c -FeNiDaqLib.dll"
    Hope it helps,
    Walter van Iterson
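
    For reference, the Java side of that intermediate-DLL approach is only a few lines. This is an illustrative sketch, not Walter's actual code; the class, library and method names are made up:

    public class NiDaqLib {
        static {
            System.loadLibrary("NiDaqLib");   // loads the intermediate NiDaqLib.dll
        }

        // Each native method maps onto one NI-DAQ C call inside the DLL,
        // e.g. a single-point analog read of one channel.
        public native double readAnalogChannel(short device, short channel);

        public static void main(String[] args) {
            double v = new NiDaqLib().readAnalogChannel((short) 1, (short) 0);
            System.out.println("ai0 = " + v + " V");
        }
    }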

  • The connection between your access point, router, or cable modem and the Internet is broken.

    At random points throughout my laptop usage, I randomly disconnect from the internet and receive these 2 messages:
    1. The connection between your access point, router, or cable modem and the Internet is broken.
    2. The default gateway is not available.
    It fixes itself for a short period of time, but the problem constantly recurs. This is really starting to annoy me. I have no problems on any other network, and the other people I live with are connected to the internet with no issues. I have tried a number of solutions online that have "worked" for others, with no success, such as resetting the router, fooling around with the command prompt, and changing the power settings so the connection is not dropped when low on power. Does anyone have a solution for my technical problems?
    I am running Windows 8.1 on a Lenovo IdeaPad that is less than a year old. I will be honest, I am not a very knowledgeable person when it comes to computers, so please use simplified explanations.

    Have you checked this?
    http://answers.microsoft.com/en-us/windows/forum/windows_7-networking/window-networking-troubleshoot-error-the/c130ede9-757e-4224-9ae8-3cceb82eeb74
    Arnav Sharma | http://arnavsharma.net/

  • Mail has 16k  messages, and performance is very slow, with loading times taking up to 5 seconds every time I open Mail. How can I increase performance?

    Mail has 16k  messages, and performance is very slow, with loading times taking up to 5 seconds every time I open Mail.
    How can I increase performance?
    I'm running a MacBook Air 4GB 1.7GHz  10.7.2.
    Graham

    One possible solution would be to organise your inbox into folders.
    It's never really good on any system to have one folder that holds everything.
    Try going to the web GUI for that mail account, organise your folders, and move mails from your inbox into the corresponding folders for better organisation.
    Several folders that together contain the same amount as one big folder will usually load a little quicker, because a folder's contents may not be downloaded until it is viewed.
    So having 10 folders with organised content, and your inbox as an area that holds only new emails, would work much, much quicker with IMAP.
    Most IMAP servers will only update the contents of a folder when it is viewed.

  • I have a private Adobe ID and I have a Lightroom Testversion running there! Now my university gave me access to Creative Cloud for Teams and I want to use my Lightroom catalogues with this ID! How can I do this?


    Hi,
    thank you, it's done! The chat helped me! Now I cannot log in to this forum anymore with my old password, but that's not a big problem.
    Best wishes,
    Christian
