Automatic data collection generating multiple logs

Hi,
I am working with a PCI-6251 card that I access through DAQmx, running the newest version of LabVIEW SignalExpress. I would like to set up a recording that starts on a trigger, stops after a certain amount of time, and then automatically starts recording into a new log the next time it is triggered. Under the recording options I was able to find a start and a stop condition, but for whatever reason using a trigger as the start condition is not working for me. Any ideas why?
Also, I have used the trigger function in the normal step setup and it works fine.
Thank you 
Peter

Hi Peter,
What kind of signal are you reading? What exactly are the settings of your start condition? Can you attach a screenshot?
These links may help:
Signal Express 2013 - Trigger
Signal Express 2013 - Start Conditions Page
Jeff Munn
Applications Engineer
National Instruments
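
For what it's worth, the behavior described (trigger-started, fixed-length records, each in a new log) can also be sketched programmatically with the nidaqmx Python bindings rather than in SignalExpress. This is a sketch only: the device, channel, and PFI names are assumptions, and on M-series boards like the 6251 natively retriggerable analog input may not be supported, in which case a retriggerable counter output is commonly used as the sample clock instead.

    import nidaqmx
    import numpy as np
    from nidaqmx.constants import AcquisitionType

    SAMPLES = 10000   # fixed length of each record ("stop after a certain amount of time")
    RATE = 1000.0     # sample rate in Hz

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        task.timing.cfg_samp_clk_timing(RATE,
                                        sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=SAMPLES)
        # Start each finite record on a digital edge, and re-arm automatically
        # so the next trigger begins a fresh record.
        task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
        task.triggers.start_trigger.retriggerable = True

        for log_index in range(5):   # five triggered records, one file each
            data = task.read(number_of_samples_per_channel=SAMPLES, timeout=60.0)
            np.savetxt(f"log_{log_index:03d}.csv", data)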

Similar Messages

  • Automatic data collection evaluation issue

    We are currently evaluating several automatic data collection applications that we plan to integrate with our Oracle Applications. We got most of the RFQ constructed using this template: http://www.highjumpsoftware.com/adc/Vertexera062503/
    but we want to make sure that all the system integration issues have been accounted for.
    Does anyone have any success stories or horror stories about integrating automatic data collection products with the ERP system?
    Thank you!

    cfhttp is a good place to start.

  • Multiple parametric Data Collection using createMultipleParametricData API

    Dear Experts,
    For the first time we are using the SAP ME PAPI functionality in SAP MII. Per the requirement, we need to do data collection against an SFC from an SAP MII transaction. We can successfully do data collection for a single parametric measure using the SAP ME PAPI createParametricData. But now we need to do data collection for multiple parametric measures in one call, and for that the createMultipleParametricData API is available. The sample XML file from which we are getting the data for three parametric measures, Temperature, Pressure, and Volume, is pasted below.
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <Rowsets CachedTime="" DateCreated="2014-02-10T06:52:01" EndDate="2014-02-10T06:52:01" StartDate="2014-02-10T05:52:01" Version="14.0 SP4 Patch 1 (Jan 8, 2014)">
        <Rowset>
            <Columns>
                <Column Description="Temperature" MaxRange="1" MinRange="0" Name="Temperature" SQLDataType="-9" SourceColumn="Temperature"/>
                <Column Description="Pressure" MaxRange="1" MinRange="0" Name="Pressure" SQLDataType="-9" SourceColumn="Pressure"/>  
                <Column Description="Volume" MaxRange="1" MinRange="0" Name="Volume" SQLDataType="-9" SourceColumn="Volume"/>
            </Columns>
            <Row>
                <Temperature>32</Temperature>
                <Pressure>30</Pressure>
                <Volume>40</Volume>
            </Row>       
        </Rowset>
    </Rowsets>
    This is the input source from which we need to extract the tag names and values under the Row node and then do data collection against an SFC with the help of the public API createMultipleParametricData.
    But we are not able to assign multiple parameters as request parameters to this API.
    Can anybody help us with the input format required to assign multiple parameters to this API at a time?
    It would be great if you could paste the request XML structure as well.
    Thanks in advance,
    Sanjeev Sharma

    If your input stream is coming into a 1D array, you might try using the Decimate 1D Array function in the Array palette.
    Pull down the function until it matches the 8 data sets (or 9 if there is a framing character of some sort in the data stream); then each set should come out of a separate output of the block.
    You probably need to be careful that the serial stream always appears in the same order for this to work right.
    Hope that helps.
    The Hummer
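
    For the XML side of the question, a minimal sketch (plain Python, not SAP-provided code) of pulling the measure names and values out of the pasted Rowset, which could then be mapped onto the API's request parameters:

    import xml.etree.ElementTree as ET

    xml_text = """<Rowsets>
      <Rowset>
        <Row>
          <Temperature>32</Temperature>
          <Pressure>30</Pressure>
          <Volume>40</Volume>
        </Row>
      </Rowset>
    </Rowsets>"""

    root = ET.fromstring(xml_text)
    # Collect {tag: text} for every child element under each Row node.
    measures = {child.tag: child.text for row in root.iter("Row") for child in row}
    print(measures)   # {'Temperature': '32', 'Pressure': '30', 'Volume': '40'}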

  • Console generating huge logs

    Console is generating multiple logs ranging from 1 to 12 gigabytes on my children's user log-ins. This is leading to "startup disk full" errors.
    1) Why?
    2) Can I stop it?
    3) Does this indicate another problem?

    Hi Troy, runaway logs are a fairly frequent cause of disappearing disk space but to stop the process you need to identify the problem...
    1) Why? Because there is a problem somewhere that is endlessly writing to the log.
    2) Can I stop it? Yes, by identifying the problem and sorting it out
    3) Does this indicate another problem? Yes, see above
    I'm sorry if the above seems facetious, but without knowing what the runaway process is, there is not much anyone can advise.
    I am no expert at reading error logs, but you need to read through the most recent entry and see if you can get some kind of clue from it. There should be a recurring theme that you can identify. You could post the most recent entry here if it isn't too long, but please don't post 1-12 GB.
    You could try the following:
    Starting up in Safe Mode
    Safe Boot takes longer than normal startup
    What is Safe Boot, Safe Mode.
    Hopefully the errant process won't load in Safe Boot so it might be easier to identify, plus the Disk Repair utility of Safe Boot may fix the problem. Ultimately though it would be better to know what the problem is beforehand in case it recurs.
    Good luck,
    Adrian
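
    Since the first step is identifying which log is ballooning, here is a hypothetical helper (not from the thread; the folder names and the 1 GB threshold are assumptions) that walks the usual log folders and reports any oversized file:

    import os

    LOG_DIRS = ["/Library/Logs", os.path.expanduser("~/Library/Logs"), "/var/log"]
    THRESHOLD = 1 << 30   # 1 GB, in bytes

    for log_dir in LOG_DIRS:
        for dirpath, _dirnames, filenames in os.walk(log_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue   # unreadable or vanished file; skip it
                if size > THRESHOLD:
                    print(f"{size / (1 << 30):.1f} GB  {path}")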

  • Log NC based on data collection

    Is it possible to trigger the logging of an NC based on a data collection value being outside the acceptable range?
    i.e., the acceptable range for the data collection is a number less than 6; if the user enters 7, I would like to log an NC that says the data collection is out of range.

    To summarize:
    What I'm taking away from this is that it is the best practice to have only one parameter per DC group if you intend to trigger the automatic logging of an NC when that group "fails." The one parameter in the DC group MUST have a min/max value assigned and a fail is triggered when the operator enters a value outside of that range.  The NC is logged using the value assigned to the LOGNC_ID_ON_GROUP_FAILURE parameter in activity maintenance.
    If there are multiple parameters in the DC group, they all have to have a min/max value assigned and ALL of the responses have to be out of range in order to fail the SFC.
    I cannot have a DC group that contains parameters of multiple types and expect an NC to be logged based on an incorrect answer (for one question or for multiple questions).
    I cannot expect an NC to be logged based on an incorrect answer of one question, if the rest of the questions in the DC group are answered "correctly."
    Sound correct?
    Edited by: Allison Davidson on Apr 18, 2011 10:06 AM  - typo

  • Data collection was switched from an AI Config task writing to an hsdl file to synchronized DAQmx tasks logging to TDMS files. Why are different readings produced for the same test?

    A software application was developed to collect and process readings from capacitance sensors and a tachometer in a running spin rig. The sensors were connected to an Aerogate Model HP-04 H1 Band Preamp connected to an NI PXI-6115. The sensors were read using the AI Config and AI Start VIs, and the data was saved to a file using the hsdlConfig and hsdlFileWriter VIs. In order to add the capability of collecting synchronized data from two eddy-current position sensors (to be connected to a BNC-2144 connected to an NI PXI-4495) in addition to the existing sensors, the AI and HSDL VIs were replaced with DAQmx VIs logging to TDMS. When running identical tests, the new file format (TDMS) produces readings that are higher than and inconsistent with the readings from the older file format (HSDL).
    The main VIs are SpinLab 2.4 and SpinLab 3.8, in the folders "SpinLab old format" and "Spinlab 3.8" respectively. SpinLab 3.8 requires the Sound and Vibration suite to run correctly, but that is used after the part that is causing the problem. The problem is occurring during data collection in the Logger segment of code or during processing in the Reader/Converter segment of code. I could send the readings from the identical tests if they would be helpful, but the data takes up approximately 500 MB.
    Attachments:
    SpinLab 3.8.zip ‏1509 KB
    SpinLab 2.4.zip ‏3753 KB
    SpinLab Screenshots.doc ‏795 KB

    First of all, how different is the data?  You say that the reads are higher and inconsistent.  How much higher?  Is every point inconsistent, or is it just parts of your file?  If it's just in parts of the file, does there seem to be a consistent pattern as to when the data is different?
    Secondly, here are a couple things to try:
    Currently, you are not calling DAQmx Stop Task outside of the loop; you're just calling DAQmx Clear Task. This means that if there were any errors that occurred in the logging thread, you might not be getting them (as DAQmx Clear Task clears outstanding errors within the task). Add a DAQmx Stop Task before DAQmx Clear Task to make sure that you're not missing an error.
    Try "Log and Read" mode. "Log and Read" is probably going to be fast enough for your application (as it's pretty fast), so you might just try it and see if you get any different result. All that you would need to do is change the enum to "Log and Read", then add a DAQmx Read in the loop (you can just use Raw format since you don't care about the output). I'd recommend that you read in even multiples of the sector size (normally 512) for optimal performance. For example, your rate is 1 MHz, so perhaps read in sizes of 122880 samples per channel (something like 1/8 of the buffer size rounded down to the nearest multiple of 4096). Note: this is a troubleshooting step to try and narrow down the problem.
    Finally, how confident are you in the results from the previous HSDL test?  Which readings make more sense?  I look forward to hearing more detail about how the data is inconsistent (all data, how different, any patterns).  As well, I'll be looking forward to hearing the result of test #2 above.
    Thanks,
    Andy McRorie
    NI R&D
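
    For reference, the "Log and Read" configuration suggested above looks roughly like this in the nidaqmx Python bindings. This is a sketch only; the device name, rate, read size, and file name are assumptions:

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")
        task.timing.cfg_samp_clk_timing(1_000_000, sample_mode=AcquisitionType.CONTINUOUS)
        # Stream to TDMS while still returning samples to the reader, so any
        # logging-thread errors surface in the read call instead of being cleared.
        task.in_stream.configure_logging("spin_rig.tdms",
                                         logging_mode=LoggingMode.LOG_AND_READ,
                                         operation=LoggingOperation.CREATE_OR_REPLACE)
        task.start()
        for _ in range(100):
            # Read even multiples of the disk sector size for throughput.
            task.read(number_of_samples_per_channel=122880)
        task.stop()   # stop before the task is cleared so errors are reported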

  • Automatically generating multiple pdf reports from single(*.rpt) report file

    Post Author: msam
    CA Forum: General
    I would like to be able to automatically pass a parameter list to a single report and have it automatically generate multiple reports (as saved PDF files) based on the parameter list.
    Is this possible?  If so, could someone point me to some documentation?  Thanks.

    What you probably need to do is generate each bio individually with the <cfdocument...> tag just the way you want them, and then use some of the advanced <cfpdf...> functionality that allows you to append two or more individual PDFs into a single large PDF.
    Here are some resources that describe some of the <cfpdf...> functionality.
    http://www.coldfusionjedi.com/index.cfm/2007/7/9/ColdFusion-8-Working-with-PDFs-Part-1
    http://www.coldfusionjedi.com/index.cfm/2007/7/10/ColdFusion-8-Working-with-PDFs-Part-2
    http://cfpdf.blogspot.com/
    http://cfpdf.blogspot.com/2007/06/cfpdf-action-merge_27.html
    http://livedocs.adobe.com/coldfusion/8/htmldocs/help.html?content=cfpdf_02.html
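
    The approach described above, rendering one PDF per parameter and then appending them, can be sketched outside ColdFusion as well. A rough stand-in using Python's pypdf (the parameter list and file names are hypothetical, and the per-parameter PDFs are assumed to have been rendered already):

    from pypdf import PdfWriter

    parameter_list = ["alice", "bob", "carol"]   # hypothetical report parameters
    writer = PdfWriter()
    for param in parameter_list:
        writer.append(f"bio_{param}.pdf")        # append each single-bio PDF
    with open("all_bios.pdf", "wb") as out:
        writer.write(out)                        # one combined PDF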

  • Can we run parallel data collections to collect multiple source instance.

    Hi,
    We have a business requirement where we will have multiple Source (ERP) instances and a single Destination (ASCP) instance. We would like to check whether we can run data collections in parallel to collect multiple ERP instances in the same time window. This is to reduce the total planning process duration, as following data collections we will be required to run multiple plan runs.
    Please help me with your expert comments
    Rgds
    Sid

    You may instead use Continuous Collections to save time: they can run collections from both instances periodically throughout the day, thereby avoiding a single long timespan to collect all the data.
    Thanks
    Navneet Goel
    Inspirage

  • Log files/troubleshooting performance data collection

    Hello: 
    Trying to use MAP 9.0, 
    When doing performance data collection, I am getting errors. Is there a log file or event log that captures why the errors are occurring?
    One posting said to look in bin\log, but there seems to be no log directory under bin in this version.
    Thank you, 
    Mustafa Hamid, System Center Consultant

    Hi Mark,
    There's no CLEANER_ADJUST_UTILIZATION in EnvironmentConfig for BDB JE 5.0.43, which I'm currently using. I also tried
       envConfig.setConfigParam("je.cleaner.adjustUtilization", "false");
    but it fails to start up with the error below:
    Caused by: java.lang.IllegalArgumentException: je.cleaner.adjustUtilization is not a valid BDBJE environment parameter
        at com.sleepycat.je.dbi.DbConfigManager.setConfigParam(DbConfigManager.java:412) ~[je-5.0.43.jar:5.0.43]
        at com.sleepycat.je.EnvironmentConfig.setConfigParam(EnvironmentConfig.java:3153) ~[je-5.0.43.jar:5.0.43]

  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
    Is there a way that MaxL could generate multiple error files during a parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1' , 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
    I want to get error files like this: rule1.err, rule2.err (error files with the rule file name included). Is this possible in MaxL?
    I even faced a situation where, if I hard-code the error file name as above, it gives me error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

    Are you saying that if you specify the error file as "error.txt" Essbase actually produces multiple error files and appends a number?
    Tim. 
    Yes, it's appending the way I said.
    Out of interest, though - why do you want to do this?  The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
    I have about 6-7 rule files with which the data is pulled from SQL and loaded into Essbase. I don't say it's impossible to track the error record.
    Regardless, the only way I can think of to have total control of the error file name is to use the 'manual' parallel load approach.  Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer.  Then commit them all together.  This gives you most of the parallel load benefit, albeit with more complex scripting.
    I had the same thought of calling multiple instances of MaxL from a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Has anyone tried it before?
    Thanks,
    DS
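
    As a rough illustration of the 'manual' parallel load approach described above (the script and file names are assumptions, and each .mxl script would perform a single load to its own buffer and name its own error file):

    import subprocess

    rule_files = ["rule1", "rule2", "rule3"]
    # Launch one MaxL shell (essmsh) per rule file so each run owns its error file name.
    procs = [subprocess.Popen(["essmsh", f"load_{rule}.mxl"]) for rule in rule_files]
    for proc in procs:
        proc.wait()   # let every load finish before committing the buffers
    subprocess.run(["essmsh", "commit_buffers.mxl"])   # hypothetical commit script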

  • Sluggish data collection will log but not plot

    Please be gentle with another newbie here. Unfortunately, I am stuck using LV6 on a Windows XP machine, so I am limited in the options I have to control data logging and event structures. However, I have done the best I can for the application with what I have learned of LabVIEW. I am trying to set up a multichannel, continuous (long-term) data collection from the serial port which will send the data to a chart and a log. I have tried to build my own event structure to tell it to collect faster if there is a change in the data value and to collect slower when the change in the data is minimal (based on the mean values).
    Any ideas on why this is running so sluggishly and not charting?
    Thanks for all input and help!!
    Attachments:
    4 Channel Temp Monitor_latest.vi ‏1170 KB

    Some things I see.
    1.  You are setting a lot of properties for the charts on every iteration, along with property nodes for some other controls, particularly the ones involving scaling. These cause the UI to need to update often. I would recommend only writing values to property nodes when something changes. If you can use an event structure, great. If not, just compare the old value to the new value and only write out values if they are different.
    2.  I can't tell if anything controls the speed of your main while loop. You might want to put in a small wait statement.
    3.  Don't open and close the serial port on every iteration. You are actually doing it several times within an iteration. Open it once, read and write to it in a loop, and close the port when the program ends after the loop (see the sketch after this reply).
    4.  Some of the stacked sequence structures seem suspect. Some are using dequeues from the same queue in every frame, only to OR all the data together at the end. It seems like a For Loop would be a better choice.
    5.  Do all your graphs need to be single representation? Make them double. You can also avoid the coercion dot (bullet) conversion from double to single in your Scan from String functions if you wire a single-representation constant into the type terminal of the Scan from String function.
    I'm sure there are more things that could be fixed, but I really suspect #1 and #2 as the main problems as to why your code seems sluggish.
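
    To make item 3 concrete outside of LabVIEW, here is a minimal pyserial sketch of the open-once / read-in-loop / close-once pattern (the port name and baud rate are assumptions):

    import serial   # pyserial

    ser = serial.Serial("COM1", 9600, timeout=1)   # open ONCE, before the loop
    try:
        for _ in range(100):                       # the acquisition loop
            line = ser.readline()                  # read on every iteration
            if line:
                print(line.decode(errors="replace").strip())
    finally:
        ser.close()                                # close ONCE, after the loop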

  • Is it possible to do a multiple-records data merge that doesn't generate multiple text boxes?

    Is it possible to do a multiple-records data merge that doesn't generate multiple text boxes? And if so, how?
    For publications such as a directory with contact information, it would be easier to manage the layout by merging multiple records into one text box. However, it seems like the only option in InDesign is to merge the records into each of their own text boxes.

    No, but it's possible to stitch the frames together after the merge, then reflow.  See Adobe Community: Multiple record data merge into paragraph styles-applies the wrong style

  • Multiple Trigger Level Data Collection

    I'm having some difficulty with a unique data collection problem. I'm using the DAQ Assistant to collect and display voltage data on a graph and in numeric indicators. I need to add functionality so that when the user clicks a control, the incoming data is sampled and shown in a table. Each sample should occur at a successively higher trigger level, i.e., the 1st sample when channel 0 is near 1 V, the 2nd sample at 2 V, etc. Is this possible using LabVIEW 8.6? I have experimented with the Trigger and Gate function, but have been unable to make the manual trigger fire at successively higher levels. Any help or ideas would be appreciated!

    Hi Sailorguy,
    Please have a look at this forum and see if it helps. Thanks!
    Ipshita C.
    National Instruments
    Applications Engineer
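
    Independently of the DAQ layer, the level-stepping behavior described in the question can be prototyped in a few lines. A sketch of the logic in plain Python (the function and variable names are made up for illustration):

    def successive_trigger_samples(stream, start_level=1.0, step=1.0):
        """Yield (level, value) the first time each successive level is crossed."""
        level = start_level
        for value in stream:
            if value >= level:
                yield level, value
                level += step   # arm the next, higher trigger level

    samples = list(successive_trigger_samples([0.2, 0.8, 1.1, 1.5, 2.3, 1.9, 3.4]))
    print(samples)   # [(1.0, 1.1), (2.0, 2.3), (3.0, 3.4)]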

  • TYPING AUTOMATICALLY GENERATES MULTIPLE LETTERS AND ERASES BY ITSELF

    While texting, my keyboard has started to generate multiple letters, to insert periods after words, and to erase letters that I just typed.
    I need to retype everything.
    Is it a bug, is it broken, or do I have an incorrect setting?

    Try pulling the battery out without turning the phone off.
    Make sure it's not connected to anything (cable, charger), make sure it's turned on, then pull the battery out.
    OR
    It sounds like a software fault, even more so if nothing is stuck behind the buttons.
    You need to download the latest software version for your device from the carrier the phone was sold with.
    Save the folder to your desktop and double-click it to install it on your computer (the download can take 40 minutes).
    Then connect the BlackBerry and, using the program called Application Loader (which is built into the Desktop Manager program that came on the CD), push the new software to the device. When you see an 'Advanced' box, click into it and check both check boxes to remove all currently installed software.
    Here is a list of software carrier sites:
    http://na.blackberry.com/eng/support/do ... _sites.jsp
    And Also
    Use BlackBerry Application Loader to install BlackBerry Device Software
    Good Luck and let us know how you go..
    9/10 times this is a software fault.
    If your issue is resolved, put a checkmark in the green box that contains the resolution.
    OR
    If it was just/or also really helpful - Give it a Kudos.. Go on Mate.. Help the rest of the clueless blackberry user world find their answer too..
    ~Gday from Down Under~

  • Site Web Analytics - no usage data being generated

    Hello all:
    I have a SharePoint Foundation 2013 farm with 2 WFEs, 1 search server, and 1 DB server. The Search Service Application has been configured and is functioning properly. The Usage and Health Data Service Application has been created and started. Usage data collection is enabled and the "Analytics Usage" check box is checked. The Usage Data Import and Usage Data Processing timer jobs are scheduled and run successfully.
    But I still get the following error when I go to Site Web Analytics: "A web analytics report is not available for this site. Usage processing may be disabled on this server or the usage data for this site has not been processed yet."
    After doing some research, some folks have suggested the following, which has to do with manually enabling the receivers via PowerShell. I have done this, but there is still no report and the same error.
    http://geekswithblogs.net/bjackett/archive/2013/08/26/powershell-script-to-workaround-no-data-in-sharepoint-2013-usage.aspx
    Other Internet searches indicate that Web Analytics Reports is no longer available in SharePoint Foundation 2013:
    http://blogs.msdn.com/b/chandru/archive/2013/08/31/sharepoint-2013-web-analytics-report-where-is-it.aspx
    http://sharepoint.stackexchange.com/questions/63099/where-is-the-web-analytics-service-in-sharepoint-2013
    There is also a TechNet question which indicates that "Microsoft Support confirmed to me there's a bug in SharePoint Foundation 2013 in the database that's going to be fixed in the June or August CU":
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/5372109c-8a6e-4d31-aa34-13b6cbde52cf/sharepoint-foundation-2013-web-analytics?forum=sharepointgeneral
    But there is no confirmation of whether this bug has been addressed or not.
    Therefore, I would really like to know what the deal is with this issue. At the moment, I do not see any usage data being generated on any of the SharePoint Foundation servers in the farm.
    Please advise.
    Thank you,
    Rumi

    Hi Rumi,
    I found a similar issue internally which says that the Site Web Analytics links are no longer valid in SharePoint Foundation 2013 due to changes in the analytics service application architecture, so you may need the SharePoint Enterprise edition to use this feature.
    Symptom
    - Recently, we upgraded to SharePoint Foundation 2013 from WSS 3.0. In SharePoint Foundation 2013 sites, we see the option to click on Site Web Analytics reports but when we click on it, we get an error.
    - Clicking on Site Web Analytics reports from Site Settings \ Site Actions produces the error: “A web analytics report is not available for this site. Usage processing may be disabled on this server or the usage data for this site has not been processed yet.”
    - We have ensured we have logging enabled (multiple categories)
    - Example Site: http://sharepoint2/sites/IT/Projects/SAP/_layouts/15/usageDetails.aspx
    Cause
    By Design
    1) The links in Site Settings from a site collection are no longer valid in SharePoint 2013 (due to Analytics Service application architecture changes; it is part of the Search Service now)
    2) SharePoint Foundation 2013 does not support Usage Reporting Analytics
    Resolution
    o Purchase a license for SharePoint Server 2013 Enterprise, and build out a farm for it (the Foundation SKU cannot be upgraded in-place to Server).
    o Once built up, you could copy your databases over and attach them to the Server farm and do your cutover.
    o Going forward from there, you would be able to have access to the Usage reports.
    Also, as you found in the MSDN blog below, the explanation is that it is not available in SPF 2013.
    http://blogs.msdn.com/b/chandru/archive/2013/08/31/sharepoint-2013-web-analytics-report-where-is-it.aspx
    http://technet.microsoft.com/en-us/library/jj819267.aspx#bkmk_FeaturesOnPremise
    Thanks,
    Daniel Yang
    Forum Support
