Storing by trigger with pre time and post time

Hi,
I am acquiring data at a sample rate of 1000 samples/sec and using a digital input to control start and stop of storage. When the digital input goes high, storage should start, and the user has to be able to specify a number of samples or a length of time of data to be appended before storage starts, i.e. pre-trigger time or pre-trigger samples have to be added before the data starts being stored. Likewise, when storage stops, post-trigger time or samples have to be appended.
How do I do this? Can I have a sample VI?
Regards,
Balaji DP

Maybe this is helpful to you: by using Value (Signaling) we can generate an event whenever a True occurs on the digital input channel (DAQ); in that particular event case we can append the old data to the file (TDMS, etc.).
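The event handling itself is LabVIEW-specific, but the pre/post buffering logic is the same in any language: keep a rolling buffer of the most recent pre-trigger samples, flush it to the file when the digital line goes high, and keep writing for the post-trigger count after the line goes low. Below is a minimal sketch of that logic in Java; TriggeredLogger, SampleSink and the sample source are illustrative stand-ins, not a real DAQ or TDMS API.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of pre/post-trigger storage logic at 1000 samples/sec.
public class TriggeredLogger {

    private final int preSamples;   // e.g. 2000 = 2 s of pre-trigger data at 1000 S/s
    private final int postSamples;  // samples appended after the digital input goes low
    private final Deque<Double> preBuffer = new ArrayDeque<>();
    private boolean storing = false;
    private int postRemaining = 0;

    public TriggeredLogger(int preSamples, int postSamples) {
        this.preSamples = preSamples;
        this.postSamples = postSamples;
    }

    /** Call once per acquired sample together with the current state of the digital input. */
    public void onSample(double sample, boolean digitalHigh, SampleSink sink) {
        if (!storing) {
            if (digitalHigh) {
                // Trigger: flush the pre-trigger history, then store the triggering sample itself.
                storing = true;
                postRemaining = postSamples;
                preBuffer.forEach(sink::write);
                preBuffer.clear();
                sink.write(sample);
            } else {
                // Not triggered yet: keep a rolling window of the most recent preSamples values.
                preBuffer.addLast(sample);
                if (preBuffer.size() > preSamples) {
                    preBuffer.removeFirst();
                }
            }
        } else {
            sink.write(sample);
            if (digitalHigh) {
                postRemaining = postSamples;   // still high: re-arm the post-trigger countdown
            } else if (--postRemaining <= 0) {
                storing = false;               // post-trigger window exhausted: stop storing
            }
        }
    }

    /** Stand-in for whatever actually writes to the TDMS/text file. */
    public interface SampleSink {
        void write(double sample);
    }
}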
Attachments:
1.png (14 KB)

Similar Messages

  • Is it possible to start a PCI4472 and a PCI-MIO-16E-1 simultaneously using an analog trigger (with pre-trigger)?

    I would like to start several PCI-4472s and a PCI-MIO-16E-1 simultaneously. All boards are connected via an RTSI cable.
    My program works fine if I use a software trigger, or an analog trigger from a PCI-4472 channel. However, the analog trigger works only when I set the pre-trigger (pre-scan) count to 0.
    Is it possible to start a PCI4472 and a PCI-MIO-16E-1 simultaneously using an analog trigger (with pre-trigger)?
    Thanks.
    Ian Ren

    Hi, Bill
    I think it is possible to set more than 38 pre-trigger scans on a single 4472 card. I've done this before. You can verify this by running the LabVIEW example "Acquire N - Multi-Analog Hardware Trig.vi" which ships with LabVIEW.
    What I have tried to do, so far without success, is to start data acquisition on several 4472 cards and a PCI-MIO-16E-1 card using an analog trigger (with pre-trigger).
    Thanks for your help.
    Ian

  • Pre-populate and post-populate fields

    What is the difference between pre-populate and post-populate fields?
    How do we test the pre-populate and post-populate fields?

    Parth, one problem with your approach is he will submit a PDF, and therefore you won't be able to put the PDF in a variable that's supposed to contain just XML.
    The prepopulation should be the same. If you start off with an XDP, then you will call a render service that merges data with your XDP to create a PDF.
    Now when you submit, you will submit the entire PDF back in the Document Form variable. In Workbench, you can use the FormDataIntegration service to extract data from the PDF that's stored under the Document Form var/object/document and put it in an XML variable. Then you can just use xPath to do your condition.
    I'm assuming you'll just pass that same Document Form variable to the next step, because if you make any change to the PDF it'll break the signature.
    Let me know if I missed anything.
    Jasmin
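    A small illustration of the "use xPath to do your condition" step: once FormDataIntegration has put the submitted data into an XML variable, the check is a plain XPath evaluation. The Java sketch below uses a made-up form structure (form1/approved, form1/amount) purely for illustration; in Workbench you would configure the equivalent XPath expression on the route condition instead.

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    public class FormDataCondition {
        public static void main(String[] args) throws Exception {
            // Stand-in for the XML extracted from the submitted PDF by FormDataIntegration.
            String formData = "<form1><approved>true</approved><amount>1200</amount></form1>";

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(formData.getBytes(StandardCharsets.UTF_8)));

            XPath xpath = XPathFactory.newInstance().newXPath();

            // Example route condition: approved flag set AND amount below a threshold.
            boolean routeToApproval = (Boolean) xpath.evaluate(
                    "/form1/approved = 'true' and number(/form1/amount) < 5000",
                    doc, XPathConstants.BOOLEAN);

            System.out.println("Route to approval branch: " + routeToApproval);
        }
    }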

  • What are the Pre Database Copy and Post Database Copy activity lists, and the Pre Migration and Post Migration activity lists, from SAP BW 7.0 to SAP BW 7.4 SPS6?

    BW on HANA: Pre Database Copy and Post Database Copy activity lists, Pre Migration and Post Migration activity lists from SAP BW 7.0 to SAP BW 7.4 SPS6.
    We are trying to copy the database from SAP BW 7.0 to SAP BW on HANA 7.4 SPS6, so we are looking for the list of steps or activities during the database copy, both pre and post.
    Along with the above, we are looking for the pre- and post-migration steps once the database has been transferred successfully from Oracle to HANA on 7.4 SPS6.
    Kindly help us in getting the exact course of action as requested.
    Thanks and Regards,
    Lavina Joshi

    Hi Lavina,
    try this link for starters: Upgrade and Migration - BW on HANA | SAP HANA
    Points to remember are:
    Preparation:
         -- Hardware Sizing
         -- Preparation of Data Centres
         -- HANA Hardware preparation
         -- System Landscape Readiness (upgrade software downloads, system readiness checks, etc)
         -- Housekeeping activities on the BW system (data clean-up, etc.)
    Post Installation:
         -- Sanity checks / Preparation and License checks
         -- JAVA Configurations
         -- Infoprovider conversions 
    Overall Stages are described below:
    # Environmental setup (HANA box)
         -- Initial system checks and build activities (system copy, application server setups, etc.)
    # System readiness
                   - ZBW_HANA_COCKPIT Tool
                   - ZBW_HANA_CHECKLIST Tool
                   - ZBW_ABAP_ANALYZER Tool
                   - ZBW_TRANSFORM_FINDER Tool
                   - SIZING Report
                   - System Clean up Activities
                   - Impact of 7.4 on source system checks
                   - Java Upgrade for portal
    # DMO Stages
                   - Preparation & Pre Migration checks
                   - Execution / Migration
                   - Post Migration Activities
    # Testing Phase
                   - Source system checks/Activities
                   - System and Integration Testing
                   - End to End Testing
                   - Performance testing
                   - Reports
                   - BO reports / Interfaces
    Do let me know if you require any further information.
    Regards,
    Naren

  • I need to organize medical photos. They are pre-op and post-op photos. What is the best way to organize them? Should I use iPhoto or is that too much trouble?

    I need to organize medical photos. They are pre-op and post-op photos. What is the best way to organize them? Should I use iPhoto or is that too much trouble? Since they occur on different days, organizing by Event does not seem appropriate. Also, organizing by Face is not always possible, as some are close-up photos of an individual eye. Surely this has been done before, and I don't want to spin my wheels reinventing this.

    Who can tell with that amount of information to work with?
    iPhoto is not limited to Events or Faces as organizational tools - Albums, Keywords, various forms of metadata can be leveraged for use as well, and these are often more flexible. Really it's up to you to look at the tools available and see if they suit your usage scenario.

  • Networkmanager dispatcher - no pre-up and post-down support

    Hey guys,
    I switched some weeks ago from Ubuntu to Arch, and my system runs like it should, with one exception. Because I'm using Arch on a laptop and change locations (my home, university, parents' house, ...), I wrote a little dispatcher script that does different things based on the ESSID I connected to. But I noticed that the dispatcher only uses the up and down actions. Is there a possibility to add support for pre-up and post-down actions, or anything similar? I read that netcfg can do this; is there a way to combine it?
    greetz corubba
    EDIT: I noticed that the up action is called after the interface is up, and down after it is brought down. So I'm looking for a pre-up and pre-down action.
    Last edited by Corubba (2010-11-15 13:20:45)

    Corubba wrote: Hey guys,
    I noticed that the up action is called after the interface is up, and down after it is brought down. So I'm looking for a pre-up and pre-down action.
    hi all,
    in the past days I also tried this out and you are right: "pre-up" is called _after_ a connection has been established to a network, exactly like "up" does.
    should we maybe file a bug in order to get this fixed?
    regards,
    Guglie

  • Pre invoice and Post invoice

    Hi All,
      What is the difference between Pre Invoice and Post Invoice?
    regards
    shashikanth naram

    Hi,
    When 'GR-based IV' is flagged in the PO, a GR posted with reference to the PO item serves as the reference document for follow-on postings. This reference document (LFBNR) is linked to the invoice posting; the assignment between GR and invoices can be seen in the PO history tab under GR/IR assignment.
    When GR-based IV is not ticked, LFBNR is blank, so no value is proposed in MIRO.
    Please also refer to the following note:
    1827732 - Valuation when GR-Based IV (EKPO-WEBRE) is set in the PO item.

  • GR with Excise Capturing and Posting (Movements 103 and 105)

    Hi
    We have a requirement like this:
    1. We have the business requirement for GR with movement types 103 and 105.
    Our requirement: while doing GR with movement type 103, the excise should be captured. While releasing blocked stock to unrestricted, the excise should be posted. Please advise me how to configure this.
    thanks
    @sakhi

    Hi Sakhi,
    When you post 103, the system takes it as GR blocked stock, not as valuated stock in the plant.
    But you can capture the excise invoice at the time of 105, i.e. Release Blocked Stock, and post the same in the J1IEX transaction.
    With Regards,
    Vijaykumar Panchagattimath

  • Aero Problems on Win 7 With PRE, PSE and OE

    I am using an nVidia GeForce GTS 8800 (rev 8.17.12.9573, 2/9/2012) and having trouble with Aero on PRE, PSE and OE version 10 on a dual-monitor system. The problems are numerous and involve trying to resize windows and display resolutions with these products. My question is: do these products fully support the Aero display modes? The problems don't show up in any other products such as Blender, VLC, GIMP etc. and other proprietary and GPL products.

    Hi,
    PSE shows the fewest problems. In minimized mode, try expanding the window by clicking at the top (of PSE) and dragging up or down. The arrow should change to double-sided and you should be able to expand or contract the window. Try it on another app; it should work okay (the window expands or contracts). Also, the top of the window should be translucent to be compliant with Aero (PRE and OE should be translucent at the top as well).
    I could live with that, but if I try to maximize the OE window on the left monitor, I only get one half of OE. The other half is off the left side of the monitor.
    The biggest problem is that PRE does not maximize on a 1920 x 1080 monitor; it goes back to the right monitor in 1280 x 720 mode. All of the other (non-Adobe) Windows apps work all right, even Blender and VLC, which are GPL. This means that I can't run PRE at the higher resolution, which is a major advantage for a video-editing app.
    It appears that these programs from Adobe may not be using the Microsoft Windows API calls that the other Windows programs use to resize. If this is the case, it shows sloppy programming techniques.
    I do have the latest drivers from nVidia.
    I hope this is clear; it is a little complicated. Anyhow, thanks for your interest and for giving me the opportunity to vent my frustration.
    John H

  • Pre-export and post-import tasks

    Oracle 8i / Windows 2000 Server.
    I would like to export my database from my organisation and import it into my local machine at home.
    What precautions, tasks and post-tasks do I have to take care of during the export and import?
    c.santhanakrishnan.

    Why are you exporting your organisation's db to import at home? Are you aware of the privacy issues this raises? Do you have approval from your organisation?
    Exporting -
    o NLS_LANG is set correctly
    o Available space on disk
    o consistent = y
    o recordlength=65535
    o direct=y
    o log=<logfile_name>
    o do you require grants, indexes?
    o A feedback for progress,
    o But even with consistent=y, that is only table-consistent, and there is a probability you may run into a problem with FKs (usually when exporting large amounts of data): tables at the beginning of the export may have child tables near the end of the export, and those child tables may have records inserted during the export which won't have a parent record on import and will be rejected.
    Importing -
    Some depend on the data volume. Do you want to -
    o NLS_LANG is set correctly
    o drop, re-create indexes to speed up import
    o put the database in NOARCHIVELOG (if possible)
    o need to also create public synonyms
    o check triggers and if they will fire?
    o need to create the schema/user to import (if it doesn't exist in the import database and is not a full export) ?
    o do the tablespaces that the export was taken from exist in the import database?
    o disable/enable FK constraints
    o enough UNDO space
    o enough archive log space (if in ARCHIVELOG mode)
    o set a buffer (size depends)
    o set a feedback to monitor progress (or check rows_processed in v$sql)
    o if data already exists in the tables, does it need to be truncated (which raises other issues, constraints etc.; take a backup prior to truncating?)
    I probably missed stuff too.

  • Invoice created without GR and posted after GR matches

    Hi ,
    can you help any body regarding this issue, where i need to do settings and all.
    GE Water has the ability to enter invoices even if there is no goods receipt in SAP. The invoice is entered and an "R" block is placed on the document in SAP. MRBR is run nightly to look at the "R" blocks and see if they can be matched with a goods receipt. If a match is found, SAP removes the "R" block and the invoice pays. If no match is found, the invoice still sits in SAP unpaid. We need SAP to be configured to do this in GE Energy. Currently, if no goods receipt is found, an error message is received that no document exists. On blanket POs, blank lines are shown and the invoice cannot be entered without creating GR/IR issues and payments not posting to the correct accounts.
    Thanks
    Madhu
    Edited by: madhu mandapati on Oct 23, 2008 12:47 AM

    Hi,
    This essentially should be put in MRBR only, probably by creating a ZMRBR.
    You need ABAP development to put in your desired logic.
    You are entering the invoice before the GR, which means you are doing PO-based invoice verification.
    Then, at the time of releasing blocked invoices, you want the system to check the goods receipt
    and provide you with a message and/or proceed further as directed.
    Please discuss this with the technical ABAP team.
    Diwakar

  • Once I've identified who I want to share photos with via Photostream and posted pictures, what happens next?

    I've created my photo stream group of pictures and selected the contacts with whom to share them and I've posted them. What is the next step?  How are the contacts notified?

    There are several home inventory apps available in the app store. Many include the ability to add photos and most would probably work on the Touch as well as an iPad.

  • Developing code with multiple GET and POST requests

    hi all,
    I have a requirement to upload a video file to a server and I need your help to develop the code using Java. Basically it has several steps:
    1) A GET request should be made to the server with parameters like file name, size, etc.
    2) The response from the server is an XML document which contains a list of URLs to upload the file to.
    3) Get the URLs from the XML and POST your file to the URLs listed in the response.
    4) After POSTing the video file to the URL(s) you received in the Create Video request, a GET request should be made to this URL to signal that the video has finished uploading in its entirety, and the response code should be checked to see if the file was uploaded.
    5) If an upload cannot be completed, signal that the upload has failed by making another GET request to the server.
    I am not sure how I can implement all these steps in one single Java program. Can someone please advise?
    Thank You

    ForumKid2 wrote: Although you could technically do it all in one Java class, it makes no sense.
    Agreed 100%. But the question was how to do it in one program, which of course could contain numerous classes. (Unless the OP is confused about what a program is...)
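    One way to structure it as a single program with a few small methods is sketched below, using the java.net.http client (Java 11+). The endpoint URLs, query parameters and the <uploadUrl> element name are placeholders for whatever the real service defines, and error handling is reduced to simple status-code checks.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class VideoUploader {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static void main(String[] args) throws Exception {
            Path video = Path.of("movie.mp4");   // file to upload

            // 1) GET request announcing the upload (file name and size as query parameters).
            String createUrl = "https://example.com/videos/create"
                    + "?name=" + video.getFileName() + "&size=" + Files.size(video);
            String xml = send(HttpRequest.newBuilder(URI.create(createUrl)).GET().build());

            // 2) + 3) Parse the upload URLs out of the XML response and POST the file to each one.
            //    (A real implementation would use an XML parser; a regex keeps the sketch short.)
            List<String> uploadUrls = extract(xml, "<uploadUrl>(.*?)</uploadUrl>");
            for (String url : uploadUrls) {
                send(HttpRequest.newBuilder(URI.create(url))
                        .header("Content-Type", "application/octet-stream")
                        .POST(HttpRequest.BodyPublishers.ofFile(video))
                        .build());
            }

            // 4) GET request to signal that the upload is complete.
            String status = send(HttpRequest.newBuilder(
                    URI.create("https://example.com/videos/complete?name=" + video.getFileName()))
                    .GET().build());
            System.out.println("Server reported: " + status);
        }

        /** Sends a request, returns the body, and treats any non-2xx status as a failure (step 5). */
        private static String send(HttpRequest request) throws Exception {
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() / 100 != 2) {
                // 5) Signal failure to the server with another GET, then abort.
                CLIENT.send(HttpRequest.newBuilder(
                        URI.create("https://example.com/videos/failed")).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                throw new IllegalStateException("Request failed: " + response.statusCode());
            }
            return response.body();
        }

        private static List<String> extract(String xml, String pattern) {
            List<String> out = new ArrayList<>();
            Matcher m = Pattern.compile(pattern).matcher(xml);
            while (m.find()) out.add(m.group(1));
            return out;
        }
    }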

  • How to merge source data with RFC response and post back again as Idoc

    Hi All,
    This is the requirement we have for an interface
    The legacy application is sending vendor master data to PI 7.0.
    If it is a new vendor, it is sent as a CREMAS IDoc into SAP: Legacy (new vendor) -> PI 7.0 -> CREMAS IDoc -> SAP.
    If it is a changed vendor, legacy will only send the changed fields for that vendor. In PI we would like to call an RFC which returns all the data for that changed vendor number, then merge the RFC response with the changed data from legacy, and then send it to SAP as a CREMAS IDoc again with all values.
    I know this can be achieved using a proxy with custom ABAP code in SAP, but we would like to avoid that.
    How can we achieve it?
    1. RFC lookup - shall we use this: when PI receives the changed vendor from legacy, it calls the RFC via an RFC lookup, and the response message from the RFC lookup is merged with the source data. Is this possible?
    2. Shall we achieve this using BPM? Is it feasible, and how?
    Any Help greatly appreciated
    Thanks,
    V

    If it is a changed vendor, legacy will only send the changed fields for that vendor. In PI we would like to call an RFC which returns all the data for that changed vendor number, then merge the RFC response with the changed data from legacy, and then send it to SAP as a CREMAS IDoc again with all values.
    I am not sure why you want to pull the whole data from R3 and send it back to R3.
    You can follow either of these approaches:
    If you have an indicator for new/changed customer in the legacy data, then trigger the CREMAS IDoc accordingly;
    the mapping rules will be different for new and changed CREMAS IDocs.
    Otherwise, just do an RFC lookup for each record and then, based on the output (new/changed), create or update the customer data through the CREMAS IDoc.
    When changing the customer through CREMAS there is no need to pass the whole data again; it is enough to pass the changed fields. Of course, the qualifier values for the segments will differ.

  • Thinking of pre installation and post installation

    Hi,
    I understand the following is a very common question, but I need to know the answer specific to my case.
    My case:
    My production database has around 100 tables in a single database/instance (not clustered yet); 3 of the tables are growing very fast and can use more than 1 TB of disk space in less than a month, and we have to keep 6 months' data (so in half a year these 3 tables will use 6-10 TB of disk space). Every day there are lots of operations on these tables: update, insert and query (no delete), and every operation invokes a heavy API stored procedure.
    questions.
    1. If applying clustering and installing 2 nodes, 5 nodes, or 10 nodes, can I expect linear performance increases of 15%, 18%, 20%?
    2. Is it better, in terms of performance, to install the nodes in one physical box (using VMware) or one node per box?
    3. After installing Grid, should the database be installed on one of those nodes or in another box?
    4. Is there any doc explaining how to install/configure the database on the cluster/nodes?
    Thanks
    John

    992202 wrote: Firstly install nodes to be able to build up a cluster; on top of it, install multiple database instances, then we can configure a RAC database, which will still look like a single database from the external client's point of view.
    Correct. Grid installation comes first to build the cluster, then the RAC installation to provide a cluster database infrastructure across the grid nodes, then the creation of a physical cluster database with a database engine on each cluster node.
    Now going to the point of how I think about and expect the performance increase in my case:
    Without a clustered database (say a standalone or single instance), executing 100 database operations (query, insert, update, calling APIs and stored procedures) takes 5 minutes; now in a RAC database with 5 nodes and 5 database instances on 5 physical machines, the 100 operations may be distributed to all nodes, so each database instance may just need to handle 20 requests (ideally); this way performance should be increased a lot.
    Emphasis on should.
    Yes, RAC provides for scalability. However, no cluster (RAC or anything else) is capable of simply taking any program (or stored procedure) and making it run faster across multiple cluster nodes.
    The program (stored proc or SQL in RAC's case) needs to be able to run in parallel, deal with concurrency, and do this in a thread-safe way. The cluster provides the tools for this. The program needs to support or use these tools.
    And I also think that in a 10-node cluster it's better, in terms of performance, to install 10 database instances than only 5.
    Of course, you want to use all cluster nodes. But you can, for example, create server pools in the cluster and more than one cluster database, and run these on your cluster. A cluster does not necessarily mean supporting only a single cluster database.
    A report shows CPU usage is constantly very high during execution (we currently use 32 CPUs in one box), however I/O and network usage look normal.
    It is unlikely that RAC will do better in this case: running processes within the same h/w boundary on 32 CPUs will usually be faster than running the same processing across 16 server nodes with 2 CPUs per server.
    Can I still expect a linear performance increase? If not, how can a RAC database help me, or is there any other way that can help in my case?
    As I said, RAC provides incredible scalability (e.g. my biggest RAC does up to 35,000 row inserts per second, and uses parallel processing to process 30+ million row data sets using several table joins in less than 120 seconds).
    But for any cluster - scalability needs to be part and parcel of the software loads run on that cluster. If the workload of the software cannot be parallelised, if workloads are not designed to be scalable, then no amount of clustering will improve the runtimes of such workloads.
    My suggestion is to first identify the CPU related performance problem you have. You need to know WHAT the problem is, in order to decide on how to address it. And whether RAC is a solution to the performance problem.
