Cond Retrieve Trigger only logs data around set point?

I need to acquire analog data after a set point level is reached, until the signal falls back below that set point. The max level will be relatively high compared to the set point. I only seem to be logging data around a very narrow band relative to the set point. The VI is attached. Thanks!
Attachments:
Force_Monitor3.vi (103 KB)

Well, that still didn't include your trigger level VI. No matter. First, upon taking a further look at your loop, you start a buffered continuous analog acquisition, but you don't have anything wired to the "number of scans to read" input of your AI Read VI. The default for this input is -1, which means you read the entire buffer. This is not the setting you want when doing a continuous acquisition; you may get overrun errors doing this. You want to read a constant amount of data out of the buffer at a time, and it has to be less than your buffer size.
Second, this is not a limitation of your DAQ board. As long as your DAQ board can do buffered analog input, the triggering you are using is a software trigger, which is NOT a function of the DAQ board. The AI Read VI reads the data out of the buffer, and then, based on the trigger specifications, determines whether or not this data should be passed out of the VI.
Since your arrays are not particularly large, I think it best to do it a different way. For example, the way you are doing it, the AI Read VI will return part of the buffer when the trigger occurs, but then stop because the next reading from memory that it does may not satisfy trigger conditions (no "edge", rising or falling).
Therefore, I propose that you get EVERY piece of data out of the AI Read VI (no trigger at all) and search through a particular channel's data for your trigger level. Once you find the first value above the trigger, keep the rest of the data from that array. Then search the subsequent arrays that come from the AI Read VI until the value goes below the trigger value, and stop the acquisition when this occurs.
I can show you how to do that. It's not that difficult.
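A minimal sketch of that search logic in Python (the block contents and names here are illustrative; in LabVIEW this would be a search loop on one channel of the AI Read output):

```python
# Software "trigger" scan over successive AI Read blocks (one channel).
# Start keeping data at the first sample at/above the trigger level,
# stop once a later sample falls back below it.
def scan_blocks(blocks, trigger_level):
    logged = []
    triggered = False
    for block in blocks:          # each block = one AI Read's worth of data
        for sample in block:
            if not triggered:
                if sample >= trigger_level:
                    triggered = True
                    logged.append(sample)
            elif sample < trigger_level:
                return logged     # fell below the set point: stop acquiring
            else:
                logged.append(sample)
    return logged

print(scan_blocks([[0.1, 0.3, 1.2, 2.5], [3.0, 1.8, 0.4]], 1.0))
# [1.2, 2.5, 3.0, 1.8]
```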
Mark

Similar Messages

  • I am using a Conditional Retrieval Trigger but only recording data around a very narrow band relative to the set point. Any thoughts? Thanks

    Since I am pretty new to LabVIEW, I have tried to build this with the examples given and have things running my way except for this trigger problem. I would like to have my acquisition channel serve as my trigger channel, and only log data after this point is reached and stop after the level goes back under the set point.
    Attachments:
    Force_Monitor.vi (92 KB)

    I apologize, but I did not realize what hardware you were using previously. The 6024E card does not have analog triggers. It only supports digital triggering. Therefore, you will be unable to configure an analog start and stop trigger.
    In order to configure a digital start and stop trigger, you may want to look at the following example program.
    Continuously Acquiring Analog Signals Using a Digital Start and Stop Trigger in LabVIEW
    http://venus.ni.com/stage/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3ED3556A4E034080020E74861&p_node=DZ52308&p_submitted=N&p_rank=&p_answer=&p_source=Internal
    The work-around, since your board does not support analog triggering, is to use conditional retrieval, which implements a circular buffer in computer memory and checks each point as it is acquired to see if it matches the trigger conditions. When the trigger conditions are met, LabVIEW reads the desired data from the buffer. Since the program you are pointing me to uses conditional retrieval, I will assume you already knew of this work-around. Basically, you will no longer be performing a hardware trigger. Instead you will just be checking each input value to see if it falls in an acceptable range. Unfortunately, the VI you attached to your post is missing subVIs. I cannot open your attachment and run it with the broken subVIs. You will need to save your application as a library (llb) in order for me to view it.
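The per-point check that conditional retrieval performs can be sketched like this (illustrative Python; the real check happens inside the AI Read VI as data is acquired):

```python
# Per-point range check, mimicking conditional retrieval in software:
# return the index of the first sample inside the acceptable range.
def first_in_range(samples, low, high):
    for i, sample in enumerate(samples):
        if low <= sample <= high:
            return i          # "trigger" condition met at this point
    return None               # no point satisfied the condition

print(first_in_range([0.2, 0.4, 1.1, 1.5], 1.0, 2.0))  # 2
```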
    Regards,
    Justin

  • Logging data after a trigger with Lookout Direct

    Hi,
    I would like to log data with Lookout Direct after a trigger input (not periodically or continuously).  I am using Lookout Direct version 4.5.1 build 19 and a Direct Logic 250 PLC.
    Does anyone have suggestions on how to do this?  I would prefer that this data be logged into Excel.  I have 24 sensors connected to the PLC, and each time a sensor transitions from high to low I would like to log the time of transition.
    Currently I have individual spreadsheet storage objects for each sensor as well as individual latchgates to indicate when logging has been completed.  In the PLC code there is a state machine for each individual sensor as well and one of the states waits for an acknowledgement from Lookout that data logging has been completed before moving on to the next state (I will have to dig a bit deeper to remember exactly why I needed to do that).
    I am hoping there is a more traditional approach that is easier than what I am doing.  One of the problems I have been facing is Lookout Direct crashing every few days and I suspect it is because the sensors are often sensed within milliseconds of each other and opening/closing so many files is causing problems.  I worked through a list of possible reasons that Lookout may be crashing (provided by tech support) and I am nearly convinced I am just asking too much of the program...
    Any help will be greatly appreciated
    Thank you,
    David

    In case someone can help with this, here is a bit more information about my application and the PLC/Lookout code I have developed:
    Actuators have two positions, nominal and folded.  Prox sensors are used to monitor the position of actuators.  12 actuators can be monitored simultaneously.  The time at which prox sensors are sensed high is recorded so that actuator speed and actuation success is logged. 
    The PLC code consists of 12 separate state machines, each with the following 8 states:
    State 1
      System reset or nominal timer reset
      Wait for nominal prox release
    State 2
      Nominal prox released
      Wait for folded prox sense
    State 3
      Folded prox sensed
      Wait for folded ack from Lookout (acknowledges that timer value has been logged)
    State 4
      Folded ack from Lookout received
      Wait for folded timer reset (this state is active for one scan only)
    State 5
      Fold-in timer reset
      Wait for folded prox release
    State 6
      Folded prox released
      Wait for nominal prox sense
    State 7
      Nominal prox sensed
      Wait for nominal ack from Lookout (acknowledges that timer value has been logged)
    State 8
      Nominal ack received from Lookout
      Wait for nominal timer reset (this state is active for one scan only)
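Purely as an illustration, the 8-state cycle above can be sketched as an enum that steps through the states in order (the names are mine, not from the PLC ladder code):

```python
# Sketch of one actuator's 8-state cycle as described above.
from enum import Enum, auto

class ActuatorState(Enum):
    WAIT_NOMINAL_RELEASE = auto()   # state 1: after reset
    WAIT_FOLDED_SENSE = auto()      # state 2
    WAIT_FOLDED_ACK = auto()        # state 3: Lookout logs fold-in timer
    RESET_FOLD_TIMER = auto()       # state 4: active one scan only
    WAIT_FOLDED_RELEASE = auto()    # state 5
    WAIT_NOMINAL_SENSE = auto()     # state 6
    WAIT_NOMINAL_ACK = auto()       # state 7: Lookout logs return timer
    RESET_NOMINAL_TIMER = auto()    # state 8: active one scan only

def next_state(s):
    """Advance to the next state, wrapping from state 8 back to state 1."""
    members = list(ActuatorState)
    return members[(members.index(s) + 1) % len(members)]

print(next_state(ActuatorState.WAIT_FOLDED_ACK).name)  # RESET_FOLD_TIMER
```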
    Lookout acknowledges that timer values have been read and saved with LatchGates.  Lookout uses a SpreadSheet Storage Object to save the PLC timers when the PLC enters states 3 or 7 (prox sensor triggered).  The logged member of the SpreadSheet Storage Object is used to change the state of the LatchGates which, in turn, signal the PLC to proceed to the next state.  When the folded file is saved, the nominal LatchGate is turned off and the folded LatchGate is turned on and vice-versa for the nominal file.
    The LatchGate/SpreadSheet Storage combination is what I am hoping to improve upon.  I believe Lookout is crashing when 12 SpreadSheet Storage Objects log to 12 different files during the same 1 second period of time.
    If anyone has suggestions of a way to log this data in the PLC memory or a software package better suited for this application, please let me know!  I believe this would be simple with LabVIEW, unfortunately obtaining the additional hardware and software that I would need hasn't been easy! 

  • Crystal Report can only retrieve 1 row of data

    We are using the following configuration:
    - Microsoft Windows 2003 Server R2 for x64
    - Microsoft .Net Framework 3.0 for x64
    - Oracle Client 10g 10.2.0 for x64, which includes Oracle Provider for OLE DB 10.2.0.2.21
    - Crystal Report 10 redistributable for x64, from Visual Studio 2005
    With that configuration, each time we run a report in ASP.Net, we can only retrieve 1 row of data, even though there is more data than that. We have no issue running the same report in 32-bit (same configuration as above, but 32-bit).
    In the meantime, we are using an ODBC connection, and it runs well. But ODBC is slow, and since the system we are building is still under heavy testing, if any problem occurs during report generation it locks up other reports and we have to restart IIS. So an ODBC connection is definitely a no-no.
    Please help.

    I have the same problem, which is assigned under bug 6623430.
    Still waiting for fix...

  • I am trying to log data from 4 voltage input signals using LabVIEW, but when I use the DAQ Assistant I am able to log data from one signal at a time only.

    I am trying to log data from 4 voltage input signals using LabVIEW, but when I use the DAQ Assistant I am able to log data from one signal at a time only. I am trying to get all 4 input channels logged in a single file against time. I am new to LabVIEW, and I need to sample this data within a couple of days. Can someone help please?

    Naveen
    Check out the info in the Analog Input section of the document linked below.  (Ignore the part about Global Channels.)  In Figure 5, notice that you can select multiple channels while holding <Ctrl> or <Shift>.
    Developer Zone Tutorial: NI-DAQmx Express VI Tutorial
    Kyle B  |  Product Support Engineer  |  ni.com/support

  • Logging data only when value of tag increases

    Hello,
    I want to log data only when my tag value increases; otherwise it should not log data to the database.
    How do I do this?
    Thanks & Regard
    Nitin Jain

    A functional global, which will look at the current value, compare it to the previous one, and if the new value is greater, signal that fact and replace the previous value with the new value. You probably want the comparison to have a range, or a threshold, so that the value has to have increased by a certain amount before this all occurs. Also, are you only concerned with increases? If your signal were a slow sinusoidal one, you would register increases all the way up one side of the waveform; then, once the peak was reached, no more data would be saved, unless of course the amplitude increased. Do a search on functional globals (Action Engines) to see what I'm referring to as a storage mechanism.
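A rough Python analogue of that functional global (names and the threshold value are illustrative; in LabVIEW this would be a subVI with an uninitialized shift register):

```python
# Rough analogue of the functional global: remember the previous value,
# and signal a log only when the new value exceeds it by `threshold`.
def make_increase_logger(threshold=0.0):
    state = {"prev": None}
    def should_log(value):
        prev = state["prev"]
        if prev is None or value > prev + threshold:
            state["prev"] = value   # replace stored value with the new one
            return True             # caller logs this value
        return False
    return should_log

log_it = make_increase_logger(threshold=0.5)
print([log_it(v) for v in [1.0, 1.2, 1.6, 1.6, 2.3]])
# [True, False, True, False, True]
```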
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

  • HT1695 I have a US txt only plan. What setting should I have my iPhone 5 on so I do not get charged cellular or data or roaming charges?

    I have a US txt only plan. What setting should I have my iPhone 5 on so I do not get charged cellular or data or roaming charges?

    The main ones are:
    Settings > General > Cellular > Data Roaming (should be "Off")
    Settings > General > Cellular > Cellular Data (sounds like you want this "Off")

  • DSC 8.6.1 wrong timestamps for logged data with Intel dual core

    Problem Description :
    Our LV/DSC 8.6.1 application uses shared variables to log data to Citadel. It is running on many similar computers at many companies just fine, but on one particular Intel dual-core computer, the data in the Citadel db has strange shifting timestamps. Changing the BIOS to start up using a single CPU fixes the problem. Could we possibly set only certain NI process(es) to single-CPU instead (but which?)? The old DSCEngine.exe in LV/DSC 7 had to be run single-CPU... hadn't these kinds of issues been fixed by LV 8.6.1 yet? What about LV 2009, anybody know? Or is it a problem in the OS or hardware, below the NI line?
    This seems similar to an old issue with time synch server problems for AMD processors (Knowledge Base Document ID 4BFBEIQA):
    http://digital.ni.com/public.nsf/allkb/1EFFBED34FFE66C2862573D30073C329 
    Computer info:
    - Dell desktop
    - Win XP Pro sp3
    - 2 G RAM
    - 1.58 GHz Core 2 Duo
    - LV/DSC 8.6.1 (Pro dev)
    - DAQmx, standard instrument control device drivers, serial i/o
    (Nothing else installed; OS and LV/DSC were re-installed to try to fix the problem, no luck)
    Details: 
    A test logged data at 1 Hz, with these results: for 10-30 seconds or so, the timestamps were correct. Then the timestamps were compressed/shifted, with multiple points each second. At perfectly regular 1-minute intervals, the timestamps would be correct again. This pattern repeats, and when the data is graphed, it looks like regular 1-sec interval points, then more dense points, then no points until the next minute (not ON the minute, e.g. 12:35:00, but after a minute, e.g. 12:35:24, 12:36:24, 12:37:24...). Occasionally (but rarely), restarting the PC would produce accurate timestamps for several minutes running, but then the pattern would reappear in the middle of logging, no changes made.
    Test info: 
    - shared variable configured with logging enabled
    - data changing by much more than the deadband
    - new value written by Datasocket Write at a steady 1 Hz
    - historic data retrieved by Read Traces
    - Distributed System Manager shows correct and changing values continuously as they are written

    Meg K. B. , 
    It sounds like you are experiencing Time Stamp Counter (TSC) drift, as mentioned in the KBs for the AMD multi-core processors. However, according to this Wikipedia article on TSCs, the Intel Core 2 Duo's "time-stamp counter increments at a constant rate... Constant TSC behavior ensures that the duration of each clock tick is uniform and supports the use of the TSC as a wall clock timer even if the processor core changes frequency." This seems to suggest that you are not seeing the issue mentioned in the KBs.
    Can you provide the exact model of the Core 2 Duo processor that you are using?
    Ben Sisney
    FlexRIO V&V Engineer
    National Instruments

  • Continuous data capture, single point logging

    Hi,
    I want to execute a seemingly simple task in Signal Express, but can't find how to do it. I should add that I have only a little experience in Signal Express.
    I wish to view some acquired data in real time (which I am successfully doing) but only log a single instance when I judge the conditions to be right and I press a key or otherwise manually trigger the system.
    So far all the options I have found dictate continuous logging.
    I'm using Signal Express version 3.0.
    Any help would be greatly appreciated.
    Bandit.

    Hi ShalimarA. Thanks for the pointer to the example; it is available under SE 3.0 and I have investigated it.
    As provided, once I change the data channels to link to my device, it works as I expect. Logged data is in columns in the order acquired...
    ai0, ai1, ai2, ai3 
    But as you know from earlier, I wish to continually monitor on screen, and just log a single data set when something of interest occurs.
    So I amended start and stop conditions in the recording options window.
            Start on software trigger (A), repeat 9999 times, log to current log.
            Stop on software trigger (A).
    Now I get the functionality I require, log only 1 data set when I manually click the button, but again the columns are out of order. This time they log in the order...
    ai1, ai2, ai0, ai3 
    Perhaps from these steps you can replicate the problem.
    I think it's a problem with either
             The way I understand the software triggered logging.
             The actual Signal Express software triggered logging.
    Regards,
    Bandit.

  • SAP Business Workplace - No log data exists message

    Hi,
    We have the work items of EDI 810 configured to reach the workflow inbox of certain users.  The users have 300+ items in their workflow inbox - but when they click the Workflow Inbox they get a message - "No log data exists".
    The message is an error type: Message BL 223
    The message is coming from the function module BAL_CNTL_REFRESH.
    We tried to get the same workflow positions assigned to our user id and the unprocessed 300 + items came to our inbox and we are able to view and process the work items without any issue.
    The issue pertains only to the two users. It seems like it has something to do with the filter / layout settings set for the two users alone. Could you please advise?
    Regards,
    Prabaharan

    Hi
    I am using the SAP GUI at a client site via Citrix.
    It was working fine till yesterday.
    Please suggest what other possible reasons there could be.
    Thanks

  • How to only migrate data from SQL Server 2008 to Oracle 11?

    According to our requirement, we need to migrate only data from a SQL Server database to an existing Oracle database user.
    1) I tried to do it with the SQL Developer 3.0.04 Migration Wizard, but found an issue.
    My SQL Server database name is SCDS41P2, and my Oracle database user name is CDS41P2. When I used SQL Developer to do an offline data move via the Migration Wizard, I found that the Oracle user name in all of the movedata files generated by the Migration Wizard is dbo_SCDS41P2. Since that Oracle user name is not the same as my existing Oracle user name, the data can't be moved to my existing Oracle user when I run oracle_ctl.bat in a command line window. So I had to modify the Oracle user name in all the movedata files, but it's difficult to modify them because there are many tables in the databases. So could you please tell me how to get movedata files in which the Oracle user name is my expected Oracle user name?
    2) I also tried to use the 'copy to Oracle' function to copy the SQL Server database tables and data to the existing Oracle database user. When I clicked 'copy to Oracle', I selected the 'Include Data' and 'Replace' options, but I found some tables can't be copied. The error info is as below:
    Table SPSSCMOR_CONTROLTABLE Failed. Message: ORA-00955: name is already used by an existing object
    Could you please tell me how to deal with this kind of error?
    Thanks!
    Edited by: 870587 on Jul 6, 2011 2:57 AM

    Hi,
    Thanks for your reply. But the 'copy to Oracle' function still doesn't work. I will give some info about the table. I also searched for 'SPSSCMOR_CONTROLTABLE' in the target schema, and only found one object. So why does it say 'name is already used by an existing object'? Could you please give me some advice? Thanks!
    What is the 'Build' version of your SQL*Developer ?
    [Answer]:
    3.0.04
    - what does describe show for the SPSSCMOR_CONTROLTABLE in SQL*Server ?
    [Answer]:
    USE [SCDS41P2]
    GO
    /****** Object: Table [dbo].[SPSSCMOR_CONTROLTABLE] Script Date: 07/18/2011 01:25:05 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[SPSSCMOR_CONTROLTABLE](
         [tablename] [nvarchar](128) NOT NULL,
    PRIMARY KEY CLUSTERED
    (
         [tablename] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    - what does describe show for the SPSSCMOR_CONTROLTABLE in Oracle ?
    [Answer]:
    -- File created - Monday-July-18-2011
    -- DDL for Table SPSSCMOR_CONTROLTABLE
    CREATE TABLE "CDS41P2"."SPSSCMOR_CONTROLTABLE"
    (     "TABLENAME" NVARCHAR2(128)
    ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "USERS" ;
    -- DDL for Index SYS_C009547
    CREATE UNIQUE INDEX "CDS41P2"."SYS_C009547" ON "CDS41P2"."SPSSCMOR_CONTROLTABLE" ("TABLENAME")
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "USERS" ;
    -- Constraints for Table SPSSCMOR_CONTROLTABLE
    ALTER TABLE "CDS41P2"."SPSSCMOR_CONTROLTABLE" MODIFY ("TABLENAME" NOT NULL ENABLE);
    ALTER TABLE "CDS41P2"."SPSSCMOR_CONTROLTABLE" ADD PRIMARY KEY ("TABLENAME")
    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "USERS" ENABLE;
    Edited by: 870587 on Jul 18, 2011 1:42 AM

  • Exchange 2013 - Admins can only log into one ECP server

    Simple run down of my environment:
    • Two AD sites (Site1 and Site2)
    • One 2008 R2 domain controller at each site (DC1 w/ FSMOs and DC2) running AD forest function level 2008 R2
    • One Windows Server 2012 Std server with Exchange 2013 Std CU3 at each site (EX1 and EX2) – CAS/MBX on both, no DAG
    • Half of mailboxes on each server
    Using https://webmail.domain.com for all of our internal and external virtual directories. Adding /OWA or /ECP will get you into the respective site from either internal or external.
    All Domain Admin/Exchange Admin (Organization Management) do NOT have mailboxes. Those individuals can log into ECP from https://webmail.domain.com/ecp, or FQDN or IP of EX2 only. If John Smith has no mailbox then he must use:
    https://webmail.domain.com/ecp
    or
    https://ex2.domain.com/ecp
    or
    https://10.16.109.31/ecp
    If EX2 goes offline or they use https://ex1.domain.com/ecp or https://10.16.108.31/ecp then none of the admins can login and they get the following:
    Use the following link to open this mailbox with the best performance:
    https://webmail.domain.com/owa/auth.owa
    X-FEServer: EX2
    Date: 2/18/2014 10:37:42 PM
    They can access EMS from either server and run Get-ECPVirtualDirectory and it shows what we would expect:
    – https://webmail.domain.com/ecp
    – https://webmail.domain.com/ecp
    Why can an admin with no mailbox only log into the ECP on EX2? What is forcing the login to EX2 only? How can I move that “forced login” to EX1 if we ever get into a situation where EX2 is having problems? What happened in Active Directory or Exchange that
    made EX2 the primary login server for ECP and OWA? My FSMOs are located at Site 1…the same location where EX1 is located. The same server that I cannot log into ECP directly.
    ~Rick

    Any admin that is mail enabled can then only log into the server that hosts his mailbox. Admins without mailboxes can only log into EX2. And again, all admins can log into https://webmail.domain.com/ecp unless one of the servers goes offline
    and that's when the problems occur. Nothing obvious or unusual regarding mail flow. Done various internal and external tests and have not seen anything obvious or in the logs.
    Yesterday's change that MS Support had me do was to delete the ExternalURL for the ECP virtual directories. No difference. So, my latest update from MS Support that requested me to try today's change...
    set-owavirtualdirectory “owa (default web site)” -RedirectToOptimalOWAServer $false
    After performing an IIS reset and making sure replication had completed there was no difference. So, I am now forced to wait till Monday for MS to respond if the pattern stays at one email per day.
    Has anyone with multiple servers at different AD sites been able to log directly into the either server like I'm trying? I get this problem in my labs and I even had MS, while on the phone, remote in to make sure I was setting it up properly.
    The guy on the phone never said if their labs do the same thing cause they don't have multiple AD sites in their labs. In my lab if I have two servers at each site then I can log into both servers at the site, but not the other site. It appears it becomes
    site dependent then.
    MS has taken numerous logs and they are acting like this is the first time they've seen this. Yet I can reproduce it with no problem time after time. I'll create new VMs and start all over from scratch and make this happen every time I create a new AD/Exchange
    environment (it does take me a while to build all those VMs from scratch). No fancy GPOs to AD and no radical changes to the Exchange servers. Other than obvious config changes to make sure email can flow internally and externally, this is pretty much out
    of the box.
    ~Rick

  • How to delete Change log data from a DSO?

    Hello Experts,
    I am trying to delete the change log data for a DSO which has some 80 crore records in it.
    I am trying to follow the standard procedure by using the process chain variant and giving the no. of days, but somehow the data is not getting deleted.
    However, the process chain is completing successfully with a green (G) state.
    Please let me know if there are any other ways to delete the data.
    Thanks in Advance.
    Thanks & Regards,
    Anil.

    Hi,
    Then there might be something wrong with your change log deletion variant.
    Can you recreate the change log deletion variant and set it again?
    Try to check the settings below with the new variant:
    Red mark - won't be selected.
    Provide the DSO name and InfoArea, set "older than", and select the blue mark.
    Blue mark - it will delete only successfully loaded requests which are older than N days.
    Have you tested this change log deletion process type before moving to production, per your data flow?
    Thanks

  • How to determine Default Table Logging (log data changes )

    Does anyone know how to view exactly which tables and related data fields have change logging enabled by default? I know that some of the standard reports will produce "edit reports" showing who changed what field, when, and the old and new values, etc., but I don't know how to determine where the data is retrieved from.
    For example: if I look in the ABAP Dictionary at table LFA1, technical settings, it shows that "log data changes" is not checked (enabled). But if I run the standard AR Master Data Change Report, I get output showing valid field changes.
    I have seen other threads that refer to SCU3, but I can't determine the above from that report.
    Any assistance would be greatly appreciated.

    Hi Arthur,
    As far as I am aware, these are 2 different things.
    There is table logging, which is at the table level and applies if it's activated (i.e. the table is listed in table DD0LV with PROTOKOLL=X and the table logging parameter is set in the system profile/s).
    The second one is programmatic logging of change documents, used when data is maintained through a program that has been written to include a log. I'm not sure how to identify a complete list of these, though, unfortunately.
    Hope that is of some assistance.

  • Sending log data to two different files using log4j

    Hi,
    Can some one please help me with my problem I have here?
    I want to send log data to two different files depending on the logging level, such as DEBUG and WARN.
    How can you configure this in log4j.properties.
    Please post sample code for log4j.properties to achieve this.
    Thanks in advance.
    rsreddych

    Hi,
    Finally, I found the solution to this problem my self.
    What you need to do is define two loggers in the application, set two priority levels on these loggers, and define two output files for these loggers. Deploy the war file, restart the application server, and you are good to go.
    This seems to be working for me. The only glitch I found is that the output in the second file displays one space character at the start of each line, starting from the second line (the first line doesn't have this problem). This is odd. It may be because of my faulty code. Anyhow, thanks to you all.
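For reference, a minimal log4j 1.x properties sketch along these lines. This variant uses two appenders with per-appender Threshold filtering on the root logger, rather than two separate loggers; appender names and file paths are illustrative:

```properties
# Two file appenders, filtered by level via each appender's Threshold.
log4j.rootLogger=DEBUG, debugFile, warnFile

log4j.appender.debugFile=org.apache.log4j.FileAppender
log4j.appender.debugFile.File=logs/debug.log
log4j.appender.debugFile.Threshold=DEBUG
log4j.appender.debugFile.layout=org.apache.log4j.PatternLayout
log4j.appender.debugFile.layout.ConversionPattern=%d %-5p %c - %m%n

log4j.appender.warnFile=org.apache.log4j.FileAppender
log4j.appender.warnFile.File=logs/warn.log
log4j.appender.warnFile.Threshold=WARN
log4j.appender.warnFile.layout=org.apache.log4j.PatternLayout
log4j.appender.warnFile.layout.ConversionPattern=%d %-5p %c - %m%n
```

With this, debug.log receives DEBUG and above while warn.log receives only WARN and above.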
    rsreddych
