Peak detection with two thresholds

Dear colleagues!
Excuse me, but I need an example or any other help with peak detection using two thresholds, i.e. two adjustable thresholds combined with the standard peak detector to create a detection band or range.
The peak detector has only one threshold level... but why only one?

If I understand correctly, you want to set two thresholds as shown in the image below.
The top peak is easy, so no need to go into details for that one.
The peak for the second threshold (red) will be a bit more difficult to do.
You would need to filter out the peaks that cross the top threshold (blue). This would be done by identifying the values as they cross the top threshold boundary. You would then need to build a "window" around them (we'll come back to this one).
You would need to define the range of valid values for the 2nd peak, which runs from the bottom threshold to the top threshold. Search for one more peak than the number of peaks that cross the top threshold; that extra one is the peak you actually want. Create windows around all the other peaks that you do not want, and the remaining one is the peak of interest.
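As a rough illustration of the idea above (a minimal NumPy sketch, not the LabVIEW VI itself; the window width and threshold values are placeholders):

import numpy as np

def two_threshold_peaks(y, top, bottom, window=50):
    y = np.asarray(y, dtype=float)
    # simple local-maximum test: above the left neighbour, at least the right one
    local_max = np.r_[False, (y[1:-1] > y[:-2]) & (y[1:-1] >= y[2:]), False]
    # peaks that cross the top (blue) threshold
    top_peaks = np.flatnonzero(local_max & (y > top))
    # build a "window" around every top peak and exclude it from the second search
    blocked = np.zeros(y.size, dtype=bool)
    for i in top_peaks:
        blocked[max(0, i - window):i + window + 1] = True
    # remaining peaks whose height falls inside the band between the two thresholds
    band_peaks = np.flatnonzero(local_max & ~blocked & (y > bottom) & (y <= top))
    return top_peaks, band_peaks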
Message Edited by Ray.R on 10-23-2009 08:21 AM
Attachments:
2Thresholds.PNG 6 KB

Similar Messages

  • Peak Detection with Base package

    Hello,
    I'm constrained by a budget so I can't purchase the analysis package
    just for peak detection. Does anyone have any hints on how to search a
    1-D array for peaks without using the pre-made VI's? I've been pulling
    my hair out over this one for a week!
    Thanks in advance.

    Labviewguru wrote in message news:<[email protected]>...
    > Jason,
    >
    > I would suggest looking online for a "peak detection" algorithm.
    > There are a number of ways to do it. The LabVIEW peak detector uses
    > a least squares quadratic fit to find the peaks, with a filter on
    > amplitude and width. This can also be done in much simpler ways. You
    > can try applying a smoothing filter (use a moving or windowed average)
    > and then look for peaks that way.
    >
    > Please post if you need more assistance. Also, what are you trying to
    > do with the peak detection? That would be helpful in providing
    > further assistance.
    >
    > Good luck
    Labviewguru
    Thanks for the advice. Yeah, the NI home page tells me that they use a
    least squares quadratic fit to find the peaks. I'm not sure how to
    implement that. I'll do a web search on that one. I'm not dealing with
    a very noisy signal so I don't think that I need to do a moving
    average. The amplitude filter seems pretty easy, I just compare each
    point to a preset threshold. However my peaks are composed of more
    than one sample so each peak gives me a bunch of "true" values. I only
    want one count per peak. The way I thought of doing it was using
    hysteresis and starting a subroutine when the upper limit boolean went
    high, then resetting the subroutine when the lower limit boolean went
    high. So essentially I need to have LabVIEW look for a high, THEN look
    for a low, THEN reset over and over again until the sample array is
    finished. I'd love to work on this for days but I've already put too
    much thought into it.
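    A bare-bones sketch of that hysteresis idea in Python (the limits are placeholders; in LabVIEW the armed/disarmed state would live in something like a shift register):

    def count_peaks_hysteresis(samples, upper, lower):
        count = 0
        armed = True                # ready to count the next crossing
        for x in samples:
            if armed and x >= upper:
                count += 1          # one count per peak
                armed = False       # ignore samples until the signal drops again
            elif not armed and x <= lower:
                armed = True        # re-arm: look for the next high crossing
        return count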

  • Fetch and peak detection all channels of PXI-5105 with 4M record... HELP!

    Dear colleagues!
    Please help me improve the performance of my application (see attachment), and sorry for my English.
    My task is to fetch and run peak detection on all (eight) channels of a PXI-5105 with a 4M record, a 4M sample rate, and a 1 sec loop...
    The inputs of all my channels are wired to NaI detectors with 0.5...1 microsecond pulse widths and rates from 0 kHz up to no more than 40 kHz.
    Why did I select a 4M record and a 4M sample rate, exactly? Because I previously tested the PXI-5105 with a generator producing 40 kHz, 0.5-microsecond pulses. It works fine and peak detection reports 40000 pulses/sec for me. If I set anything lower than a 4M record and 4M sample rate it does not work, so in my honest opinion a 4M record and 4M sample rate are the very minimum settings.
    At present, peak detection works for only 6 channels... When I connect more than 6 "peak detector.vi" instances to the diagram, I get the error "...out of memory...".
    Please advise me what should be done so that it all works fine.
    Attachments:
    consumer-producer7.vi 44 KB

    What you are running into is an out of memory error in LabVIEW.  You have enough onboard memory to capture 4M samples per channel on the digitizer.  The issue is with fetching and manipulating that data in your LabVIEW application.  You will want to step back and take a look at how you are handling your data to understand why that is happening.
    1) 4M samples/ch = 4M Samples x 2Bytes/sample/ch = 8M Bytes/ch
    2) Expanding to 8 channels creates 64M Bytes of data in the raw binary format
    3) You are scaling your data by fetching in a 1D WDT format.  This stores each sample in a 64-bit double, expanding the memory to 256M Bytes (in addition to timing information)
    4) By splitting up the array of waveforms and branching the data wire, you can easily create copies of this data, and if your consumer loop has not finished with the last data, you may be trying to capture a whole new set, creating yet another copy.
    So you can see that while you have 1.5GB of controller memory, when dealing with large arrays of data you can easily eat up that memory (a quick check of these numbers follows below).  There are several things you can try to make your application more efficient.  You could work with an unscaled binary data format, you could wire the array of waveforms directly to the peak detect VI (instead of creating 8 copies, you will have a single copy with arrays of outputs), or you could revisit the record size you have chosen (experimenting with your threshold and width settings might help you get the results you want with smaller record lengths).
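    A quick back-of-the-envelope check of those numbers, assuming the record length and channel count stated above:

    record_len = 4_000_000                       # samples per channel
    channels = 8
    raw_bytes = record_len * 2 * channels        # I16 samples held on the digitizer
    scaled_bytes = record_len * 8 * channels     # 64-bit doubles after a scaled fetch
    print(f"raw: {raw_bytes / 1e6:.0f} MB, scaled: {scaled_bytes / 1e6:.0f} MB")
    # raw: 64 MB, scaled: 256 MB -- before any extra copies from branched wires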
    -Jennifer O.

  • Peak detection (threshold detector) from spreadsheets.

    Hello!
    I am working on a VI that reads two signals, detects peaks that are above a specific threshold, and then estimates the time difference between these peaks in samples and in seconds.
    Recently I obtained two waveforms using an oscilloscope in a laboratory. I then saved these two graphs in a spreadsheet in Excel format (the first number is time, the second is amplitude). The time difference between them is really small, just a few nanoseconds according to the oscilloscope. I think the threshold should be 0.3.
    My question is how to make Threshold Detector.vi read values from two Excel files. Or maybe I need to use another instrument for this purpose. I am quite confused.
    I attached all materials I have.
    Any help is highly appreciated!
    Thank you.
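    For what it's worth, the same calculation outside LabVIEW might look roughly like the sketch below, assuming each file is a plain two-column text export (time, then amplitude); the file names, the tab delimiter and the 0.3 threshold are just placeholders taken from the description above:

    import numpy as np

    def first_crossing_time(filename, threshold=0.3, delimiter="\t"):
        data = np.loadtxt(filename, delimiter=delimiter)   # column 0 = time, column 1 = amplitude
        t, y = data[:, 0], data[:, 1]
        above = y > threshold
        if not above.any():
            raise ValueError("no sample exceeds the threshold")
        return t[np.argmax(above)]                          # time of the first crossing

    dt = first_crossing_time("S1.txt") - first_crossing_time("S2.txt")
    print(f"time difference: {dt:.3e} s")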
    Attachments:
    Inception 2.vi 55 KB
    position S1-S2a.jpg 3004 KB

    I tried Read From Spreadsheet File.vi, but have not had any positive result. It probably reads my spreadsheets, but the peak detection does not work properly. When I set any threshold in Threshold Detector.vi that is greater than zero (for example, 0.01), the final results turn to zero. The same occurs when the indexes in Index Array are not zero.
    I attached my vi saved for LV2009. Please, have a look.
    Thanks.
    Attachments:
    Inception 2.vi 49 KB
    Peak detection.jpg 108 KB

  • Need Help with peak detection

    Hello,
    I am in need of some help using peak detection. I have an array of values that I need to find the centroid (peak) of. I am only interested in finding the centroid of a large peak. The problem I keep having is that the peak detection VI finds every little peak above the threshold when I really want the overall average peak. Please look at the graph in the picture file to see what I mean. You can see that the main peak has jagged edges. The peak detection in LabVIEW will find every one of these jagged edges and report the location back as a peak. I am only interested in the overall shape of the peak. How can I filter out the multiple peaks and only report the centroid of the desired shape? There is an example program in the LabVIEW package called "advanced peak detection point by point" but I cannot figure out how to employ it in this application.
    Thanks in advance for any help.
    -Mark
    Attachments:
    Array values.PNG 9 KB

    If I were you, I wouldn't even use peak detection.
    The point of peak detection is to find multiple peaks, like in a sine wave, etc.
    If you just want the maximum:
    Just use 'Array Max & Min'
    The 'max value' equates to your y-value.
    The 'max index' can be used to find your x-value
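    Roughly the same thing in text form, as a quick sketch (the data is illustrative; x and y stand for the graph's axis arrays):

    import numpy as np

    def array_max(x, y):
        """Return (x, y) at the largest sample -- the 'Array Max & Min' idea."""
        i = int(np.argmax(y))          # 'max index'
        return x[i], y[i]              # x at the peak, 'max value'

    x = np.linspace(0.0, 1.0, 1000)
    y = np.exp(-((x - 0.4) ** 2) / 0.001)      # a single broad peak near x = 0.4
    print(array_max(x, y))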
    Message Edited by Cory K on 05-07-2009 11:52 AM
    Cory K
    Attachments:
    Peak.PNG 3 KB

  • WA Multiscale Peak Detection VI

    Dear users,
    using "WA Multiscale Peak Detection VI', I would expect that peaks detected for a waveform Y are the same as valleys detected for the waveform -Y (y-values multiplied by -1, i.e., waveform flipped upside-down), see the snippet, please. This is not the case, when doing the analysis on some data.
    Opening and examining the subVIs, I only see reversing the synthesis and analysis filter frequencies and changing "greater" for "lower", as for the difference between "peaks" and "valleys" setting.
    The documentation does not state any substantial difference between "peaks" and "valleys".
    Any thoughts on it?

    OK, so after some more analysis:
    One has to "y-flip" the threshold for the valleys. I.e., if one searches for peaks with a threshold T and want to search for similar looking valleys, then there are two solutions:
    a) flip the data (*(-1)) and keep the threshold
    b) flip the threshold (*(-1)) and keep the data.
    If this VI offered no option for searching valleys, I would most probably have come up with solution a) quickly -- it's hypothetical, right? Having this option and seeing the results is what confused me so much.
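    A small numerical check of that conclusion, using a plain local-extremum detector rather than the wavelet VI (the data and threshold are arbitrary): the valleys of y found against -T are exactly the peaks of -y found against +T, i.e. both the data and the threshold have to be flipped together.

    import numpy as np

    def peaks_above(y, thr):
        y = np.asarray(y, dtype=float)
        lm = np.r_[False, (y[1:-1] > y[:-2]) & (y[1:-1] >= y[2:]), False]
        return np.flatnonzero(lm & (y > thr))

    def valleys_below(y, thr):
        y = np.asarray(y, dtype=float)
        lm = np.r_[False, (y[1:-1] < y[:-2]) & (y[1:-1] <= y[2:]), False]
        return np.flatnonzero(lm & (y < thr))

    rng = np.random.default_rng(1)
    y = np.cumsum(rng.standard_normal(1000))   # arbitrary test waveform
    T = 3.0
    assert np.array_equal(valleys_below(y, -T), peaks_above(-y, T))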

  • Calculating BPM using either Peak Detection point by point or FFT

    Hi Guys
    I'm new to LabVIEW and have absolutely no idea about programming and stuff. I'm doing a project on a heart rate monitor.
    I'm using LabVIEW to read the analog input of an Arduino Mini. In my attached VI I'm using Peak Detection Point by Point to calculate the BPM, but it doesn't seem to work. I took references from several VIs to arrive at my VI.
    My instructor told me I could try using an FFT to calculate the BPM as well, but I'm not sure how to carry that out in LabVIEW.
    Hope you guys can help me with this.
    Thanks a lot!
    Attachments:
    heart signal.jpg 43 KB
    Heart Monitor.vi 24 KB

    Ok, we have some problems here.
    1.  The Data Bits property is the number of bits for a single character that is being transmitted.  You should not use that.  Since you are using an Arduino, it should be sending the termination character.  So just tell the VISA Read to read the maximum number of bytes you expect from a single message, or just some large number (like 25).  The VISA Read will stop reading the port when it encounters the termination character.
    2. The String Subset is not doing anything.  Just remove it.
    3.  You should move your Wait outside of the case structure.  As currently written, if you are not taking readings you will use up all of your CPU.
    4.  You should have labels for all of your controls and indicators.
    5.  Your time calculation is completely wrong.  You want to subtract the time of the previous peak from the time of the current peak.  I recommend you use a Feedback Node to do this (sketched below).
    Here's a slightly cleaned up version of your code.
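    The time calculation from point 5, sketched in Python (the Feedback Node's job is played here by the stored previous peak time; the peak times are made up):

    class BpmCounter:
        def __init__(self):
            self._last_peak_time = None          # what the Feedback Node would hold

        def on_peak(self, t_seconds):
            """Call once per detected peak; returns BPM, or None for the first peak."""
            bpm = None
            if self._last_peak_time is not None:
                period = t_seconds - self._last_peak_time   # current minus previous peak
                if period > 0:
                    bpm = 60.0 / period
            self._last_peak_time = t_seconds
            return bpm

    counter = BpmCounter()
    counter.on_peak(10.0)
    print(counter.on_peak(10.5))                 # peaks 0.5 s apart -> 120.0 BPM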
    Attachments:
    Heart Monitor_BD.png 42 KB

  • Election problem after repeated split-brains with two nodes

    Hi
    I'm using a customized source based on BDB-5.1.19 (excxx_repquote)
    with two sites, one MASTER and the other SLAVE...
    nsite=2
    ack=quorum
    - the master is writing to quotedb at a rate of 10 txn per sec
    - the test consists of isolating the client from the master (split brain) and reconnecting it after a random delay of between 1 sec and 10 sec
    the test runs fine about 10 times, but at some point the slave process receives DB_EVENT_REP_ELECTION_FAILED
    and the master enters election mode and never leaves CLIENT mode. I must say that to freeze the client I decided to have it kill itself (kill -9 its own pid) when it receives such an event...
    here is the verbose log on the master...
    [1307872770:871621][6510/47655809107168] MASTER: rep_send_function returned: 110
    [1307872770:973655][6510/47655809107168] MASTER: bulk_msg: Send buffer after copy due to PERM
    [1307872770:973667][6510/47655809107168] MASTER: send_bulk: Send 266 (0x10a) bulk buffer bytes
    [1307872770:973672][6510/47655809107168] MASTER: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 68 eid -1, type bulk_log, LSN [21][986648] perm
    [1307872770:973693][6510/47655809107168] MASTER: will await acknowledgement: need 1
    [1307872771:26623][6510/47655809107168] MASTER: rep_send_function returned: 110
    [1307872771:126380][6510/1162996032] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 70 eid 0, type log, LSN [21][946345]
    [1307872771:126407][6510/1162996032] MASTER: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 68 eid -1, type dupmaster, LSN [0][0] nobuf
    [1307872771:126695][6510/1162996032] MASTER: rep_start: Found old version log 17
    [1307872771:126753][6510/1162996032] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 68 eid -1, type newclient, LSN [0][0] nobuf
    [1307872771:126833][6510/1183975744] CLIENT: starting election thread
    [1307872771:126876][6510/1183975744] CLIENT: Start election nsites 2, ack 1, priority 100
    [1307872771:126890][6510/1183975744] CLIENT: Election thread owns egen 69
    [1307872771:127423][6510/1173485888] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 70 eid 0, type newclient, LSN [0][0]
    [1307872771:130079][6510/1183975744] CLIENT: Tallying VOTE1[0] (2147483647, 69)
    [1307872771:130113][6510/1183975744] CLIENT: Beginning an election
    [1307872771:130134][6510/1183975744] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 68 eid -1, type vote1, LSN [21][986728] nobuf
    [1307872771:130147][6510/1173485888] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 68 eid -1, type master_req, LSN [0][0] nobuf
    [1307872771:130438][6510/1152506176] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 70 eid 0, type vote1, LSN [21][946437]
    [1307872771:130460][6510/1162996032] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 70 eid 0, type alive, LSN [21][986728]
    [1307872771:130467][6510/1152506176] CLIENT: Updating gen from 68 to 70
    [1307872771:130482][6510/1162996032] CLIENT: Received ALIVE egen of 71, mine 69
    [1307872771:130503][6510/1162996032] CLIENT: Election finished in 0.003602000 sec
    [1307872771:130515][6510/1162996032] CLIENT: Election done; egen 70
    [1307872771:130534][6510/1152506176] CLIENT: Received vote1 egen 71, egen 71
    [1307872771:130581][6510/1152506176] CLIENT: Tallying VOTE1[0] (0, 71)
    [1307872771:130593][6510/1089075520] CLIENT: starting election thread
    [1307872771:130619][6510/1152506176] CLIENT: Incoming vote: (eid)0 (pri)100 ELECTABLE (gen)70 (egen)71 [21,946437]
    [1307872771:130642][6510/1152506176] CLIENT: Not in election, but received vote1 0x282c 0x8
    [1307872771:130674][6510/1089075520] CLIENT: Start election nsites 2, ack 1, priority 100
    [1307872771:130692][6510/1089075520] CLIENT: Election thread owns egen 71
    [1307872771:130704][6510/1194465600] CLIENT: starting election thread
    [1307872771:130733][6510/1194465600] CLIENT: Start election nsites 2, ack 1, priority 100
    [1307872771:132922][6510/1089075520] CLIENT: Tallying VOTE1[1] (2147483647, 71)
    [1307872771:132949][6510/1089075520] CLIENT: Accepting new vote
    [1307872771:132958][6510/1089075520] CLIENT: Beginning an election
    [1307872771:132973][6510/1089075520] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 70 eid -1, type vote1, LSN [21][986728] nobuf
    [1307872771:132985][6510/1194465600] CLIENT: election thread is exiting
    [1307872771:133012][6510/1089075520] CLIENT: Tallying VOTE2[0] (2147483647, 71)
    [1307872771:133037][6510/1089075520] CLIENT: Counted my vote 1
    [1307872771:133048][6510/1089075520] CLIENT: Skipping phase2 wait: already got 1 votes
    [1307872771:133060][6510/1089075520] CLIENT: Got enough votes to win; election done; (prev) gen 70
    [1307872771:133071][6510/1089075520] CLIENT: Election finished in 0.002367000 sec
    [1307872771:133084][6510/1089075520] CLIENT: Election done; egen 72
    [1307872771:133111][6510/1089075520] CLIENT: Ended election with 0, e_th 1, egen 72, flag 0x2a2c, e_fl 0x0, lo_fl 0x6
    [1307872771:133170][6510/1173485888] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 70 eid 0, type alive, LSN [0][0]
    [1307872771:133187][6510/1173485888] CLIENT: Racing replication msg lockout, ignore message.
    [1307872771:173744][6510/1162996032] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 70 eid 0, type vote2, LSN [0][0]
    [1307872771:173769][6510/1162996032] CLIENT: Racing replication msg lockout, ignore message.
    [1307872771:231593][6510/1183975744] CLIENT: Ended election with 0, e_th 0, egen 72, flag 0x2a2c, e_fl 0x0, lo_fl 0x1c
    [1307872771:231629][6510/1183975744] CLIENT: election thread is exiting
    [1307872777:443794][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    [1307872971:644194][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    [1307873165:844583][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    [1307873360:44955][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    [1307873554:245347][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    [1307873748:445736][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    [1307873942:646117][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    [1307874136:846509][6510/1131526464] CLIENT: init connection to site 2.0.0.210:12345 with result 115
    .... and it stays in this situation indefinitely
    My question is: why is the master suddenly turned into a CLIENT, and why does it never return to being the MASTER?
    Thanks in advance ...
    here is the log for the client
    [1307872315:455113][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type log, LSN [21][984396]
    [1307872315:455134][1282/1160603968] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type log, LSN [21][984483] perm
    [1307872315:609962][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][984733] perm
    [1307872315:764958][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][984986] perm
    [1307872315:919962][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][985238] perm
    [1307872316:75018][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][985491] perm
    [1307872316:229959][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][985741] perm
    [1307872316:384949][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][985993] perm
    [1307872316:499899][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][986141] perm
    [1307872316:539895][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type log, LSN [21][986221]
    [1307872316:540078][1282/1171093824] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type log, LSN [21][986307]
    [1307872316:540100][1282/1160603968] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type log, LSN [21][986394] perm
    [1307872316:694950][1282/1171093824] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type bulk_log, LSN [21][986648] perm
    [1307872316:847349][1282/1129134400] MASTER: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 70 eid -1, type log, LSN [21][946345]
    [1307872316:847698][1282/1171093824] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type dupmaster, LSN [0][0]
    [1307872316:847999][1282/1181583680] MASTER: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type newclient, LSN [0][0]
    [1307872316:848168][1282/1171093824] MASTER: rep_start: Found old version log 17
    [1307872316:848222][1282/1181583680] CLIENT: Racing replication msg lockout, ignore message.
    [1307872316:848398][1282/1171093824] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 70 eid -1, type newclient, LSN [0][0] nobuf
    [1307872316:848504][1282/1192073536] CLIENT: starting election thread
    [1307872316:848542][1282/1192073536] CLIENT: Start election nsites 2, ack 1, priority 100
    [1307872316:848566][1282/1192073536] CLIENT: Election thread owns egen 71
    [1307872316:849634][1282/1192073536] CLIENT: Tallying VOTE1[0] (2147483647, 71)
    [1307872316:849654][1282/1192073536] CLIENT: Beginning an election
    [1307872316:849680][1282/1192073536] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 70 eid -1, type vote1, LSN [21][946437] nobuf
    [1307872316:851403][1282/1160603968] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type vote1, LSN [21][986728]
    [1307872316:851448][1282/1160603968] CLIENT: Received vote1 egen 69, egen 71
    [1307872316:851470][1282/1160603968] CLIENT: Received old vote 69, egen 71, ignoring vote1
    [1307872316:851481][1282/1160603968] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 70 eid 0, type alive, LSN [21][986728] nobuf
    [1307872316:851538][1282/1171093824] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 68 eid 0, type master_req, LSN [0][0]
    [1307872316:851558][1282/1171093824] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 70 eid 0, type alive, LSN [0][0] nobuf
    [1307872316:854254][1282/1160603968] CLIENT: /opt/bdb/ rep_process_message: msgv = 5 logv 17 gen = 70 eid 0, type vote1, LSN [21][986728]
    [1307872316:854275][1282/1160603968] CLIENT: Received vote1 egen 71, egen 71
    [1307872316:854317][1282/1160603968] CLIENT: Tallying VOTE1[1] (0, 71)
    [1307872316:854339][1282/1160603968] CLIENT: Incoming vote: (eid)0 (pri)100 ELECTABLE (gen)70 (egen)71 [21,986728]
    [1307872316:854353][1282/1160603968] CLIENT: Existing vote: (eid)2147483647 (pri)100 (gen)70 (sites)2 [21,946437]
    [1307872316:854369][1282/1160603968] CLIENT: Accepting new vote
    [1307872316:854379][1282/1160603968] CLIENT: Phase1 election done
    [1307872316:854395][1282/1160603968] CLIENT: Voting for 0
    [1307872316:854407][1282/1160603968] CLIENT: /opt/bdb/ rep_send_message: msgv = 5 logv 17 gen = 70 eid 0, type vote2, LSN [0][0] nobuf
    [1307872317:960344][1282/1192073536] CLIENT: After phase 2: votes 0, nvotes 1, nsites 2
    [1307872317:960389][1282/1192073536] CLIENT: Election finished in 1.111809000 sec
    [1307872317:960401][1282/1192073536] CLIENT: Election done; egen 72
    [1307872317:960412][1282/1192073536] CLIENT: Ended election with -30974, e_th 0, egen 72, flag 0x282c, e_fl 0x0, lo_fl 0x0
    Kill me !!
    --- my source
    on the master I run manually :
    txn_rate 1
    loop_rate 10
    loop 1 20000
    * See the file LICENSE for redistribution information.
    * Copyright (c) 2001, 2010 Oracle and/or its affiliates. All rights reserved.
    * $Id$
    * In this application, we specify all communication via the command line. In
    * a real application, we would expect that information about the other sites
    * in the system would be maintained in some sort of configuration file. The
    * critical part of this interface is that we assume at startup that we can
    * find out
    *      1) what our Berkeley DB home environment is,
    *      2) what host/port we wish to listen on for connections; and
    *      3) an optional list of other sites we should attempt to connect to.
    * These pieces of information are expressed by the following flags.
    * -h home (required; h stands for home directory)
    * -l host:port (required; l stands for local)
    * -C or -M (optional; start up as client or master)
    * -r host:port (optional; r stands for remote; any number of these may be
    *     specified)
    * -R host:port (optional; R stands for remote peer; only one of these may
    * be specified)
    * -a all|quorum (optional; a stands for ack policy)
    * -b (optional; b stands for bulk)
    * -n nsites (optional; number of sites in replication group; defaults to 0
    *     to try to dynamically compute nsites)
    * -p priority (optional; defaults to 100)
    * -v (optional; v stands for verbose)
    #include <cstdlib>
    #include <cstring>
    #include <iostream>
    #include <string>
    #include <sstream>
    #include <sys/types.h>
    #include <signal.h>
    #include <db_cxx.h>
    #include "RepConfigInfo.h"
    #include "dbc_auto.h"
    using std::cout;
    using std::cin;
    using std::cerr;
    using std::endl;
    using std::ends;
    using std::flush;
    using std::istream;
    using std::istringstream;
    using std::ostringstream;
    using std::string;
    using std::getline;
    #include <stdio.h>
    #include <readline/readline.h>
    #include <readline/history.h>
    #define     CACHESIZE     (10 * 1024 * 1024)
    #define     DATABASE     "quote.db"
    #define     DATABASE2     "quote2.db"
    const char *progname = "excxx_repquote";
    #include <errno.h>
    #ifdef _WIN32
    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>
    #define     snprintf          _snprintf
    #define     sleep(s)          Sleep(1000 * (s))
    extern "C" {
    extern int getopt(int, char * const *, const char *);
    extern char *optarg;
    typedef HANDLE thread_t;
    typedef DWORD thread_exit_status_t;
    #define     thread_create(thrp, attr, func, arg)                    \
    (((*(thrp) = CreateThread(NULL, 0,                         \
         (LPTHREAD_START_ROUTINE)(func), (arg), 0, NULL)) == NULL) ? -1 : 0)
    #define     thread_join(thr, statusp)                         \
    ((WaitForSingleObject((thr), INFINITE) == WAIT_OBJECT_0) &&          \
    GetExitCodeThread((thr), (LPDWORD)(statusp)) ? 0 : -1)
    #else /* !_WIN32 */
    #include <pthread.h>
    typedef pthread_t thread_t;
    typedef void* thread_exit_status_t;
    #define     thread_create(thrp, attr, func, arg)                    \
    pthread_create((thrp), (attr), (func), (arg))
    #define     thread_join(thr, statusp) pthread_join((thr), (statusp))
    #endif
    // Struct used to store information in Db app_private field.
    typedef struct {
         bool app_finished;
         bool in_client_sync;
         bool is_master;
         bool no_dummy_wr;
    } APP_DATA;
    static void log(const char *);
    void checkpoint_thread (void );
    void log_archive_thread (void );
    void dummy_write_thread (void );
    class RepQuoteExample {
    public:
         RepQuoteExample();
         void init(RepConfigInfo* config);
         void doloop();
         int terminate();
         static void event_callback(DbEnv* dbenv, u_int32_t which, void *info);
         void print_stocks_size(Db *dbp);
    private:
         // disable copy constructor.
         RepQuoteExample(const RepQuoteExample &);
         void operator = (const RepQuoteExample &);
         // internal data members.
         APP_DATA          app_data;
         RepConfigInfo *app_config;
         DbEnv          cur_env;
         thread_t ckp_thr;
         thread_t lga_thr;
         thread_t dmy_thr;
         // private methods.
         void print_stocks(Db *dbp);
         void print_env(DbEnv *dbenv);
         void prompt();
    RepQuoteExample *g_runner=NULL;
    RepConfigInfo *g_config=NULL;
    class DbHolder {
    public:
         DbHolder(DbEnv env, const char _dbname) : env(env)
              dbp = 0;
              if (_dbname) dbname=_dbname;
              else dbname=DATABASE;
         ~DbHolder() {
         try {
              close();
         } catch (...) {
              // Ignore: this may mean another exception is pending
         bool ensure_open(bool creating) {
         if (dbp)
              return (true);
         dbp = new Db(env, 0);
         u_int32_t flags = DB_AUTO_COMMIT;
         if (creating)
              flags |= DB_CREATE;
         try {
              //dbp->open(NULL, DATABASE, NULL, DB_BTREE, flags, 0);
              //dbp->open(NULL, dbname, NULL, DB_BTREE, flags, 0);
              dbp->open(NULL, NULL, dbname, DB_BTREE, flags, 0);
              return (true);
         } catch (DbDeadlockException e) {
         } catch (DbRepHandleDeadException e) {
         } catch (DbException e) {
              if (e.get_errno() == DB_REP_LOCKOUT) {
              // Just fall through.
              } else if (e.get_errno() == ENOENT && !creating) {
              // Provide a bit of extra explanation.
              log("Stock DB does not yet exist");
              } else
              throw;
         // (All retryable errors fall through to here.)
         log("please retry the operation");
         close();
         return (false);
         void close() {
         if (dbp) {
              try {
              dbp->close(0);
              delete dbp;
              dbp = 0;
              } catch (...) {
              delete dbp;
              dbp = 0;
              throw;
         operator Db *() {
         return dbp;
         Db *operator->() {
         return dbp;
    private:
         Db *dbp;
         DbEnv *env;
         const char *dbname;
    class StringDbt : public Dbt {
    public:
    #define GET_STRING_OK 0
    #define GET_STRING_INVALID_PARAM 1
    #define GET_STRING_SMALL_BUFFER 2
    #define GET_STRING_EMPTY_DATA 3
         int get_string(char **buf, size_t buf_len)
              size_t copy_len;
              int ret = GET_STRING_OK;
              if (buf == NULL) {
                   cerr << "Invalid input buffer to get_string" << endl;
                   return GET_STRING_INVALID_PARAM;
              // make sure the string is null terminated.
              memset(*buf, 0, buf_len);
              // if there is no string, just return.
              if (get_data() == NULL || get_size() == 0)
                   return GET_STRING_OK;
              if (get_size() >= buf_len) {
                   ret = GET_STRING_SMALL_BUFFER;
                   copy_len = buf_len - 1; // save room for a terminator.
              } else
                   copy_len = get_size();
              memcpy(*buf, get_data(), copy_len);
              return ret;
         size_t get_string_length()
              if (get_size() == 0)
                   return 0;
              return strlen((char *)get_data());
         void set_string(char *string)
              set_data(string);
              set_size((u_int32_t)strlen(string));
         StringDbt(char *string) :
         Dbt(string, (u_int32_t)strlen(string)) {};
         StringDbt() : Dbt() {};
         ~StringDbt() {};
         // Don't add extra data to this sub-class since we want it to remain
         // compatible with Dbt objects created internally by Berkeley DB.
    Db *g_repquote=NULL;
    RepQuoteExample::RepQuoteExample() : app_config(0), cur_env(0) {
         app_data.app_finished = 0;
         app_data.in_client_sync = 0;
         app_data.is_master = 0; // assume I start out as client
         app_data.no_dummy_wr = 0 ; //prevent to run dummy write
    int (*old_rep_process_message)
              __P((DB_ENV *, DBT *, DBT *, int, DB_LSN *));
    int my_rep_process_message __P((DB_ENV arg1, DBT arg2, DBT arg3, int arg4, DB_LSN arg5))
         printf("EZ->>> my_rep_process_message:%p\n",arg5);
         old_rep_process_message(arg1,arg2,arg3,arg4,arg5);
    void RepQuoteExample::init(RepConfigInfo *config) {
         app_config = config;
         cur_env.set_app_private(&app_data);
         cur_env.set_errfile(stderr);
         app_data.no_dummy_wr=config->no_dummy_wr;
         if (app_data.no_dummy_wr)
              printf("No dummy !!!\n");
         //EZ->cur_env.set_errpfx(progname);
         cur_env.set_event_notify(event_callback);
         // Configure bulk transfer to send groups of records to clients
         // in a single network transfer. This is useful for master sites
         // and clients participating in client-to-client synchronization.
         if (app_config->bulk)
              cur_env.rep_set_config(DB_REP_CONF_BULK, 1);
         // Set the total number of sites in the replication group.
         // This is used by repmgr internal election processing.
         if (app_config->totalsites > 0)
              cur_env.rep_set_nsites(app_config->totalsites);
         // Turn on debugging and informational output if requested.
         if (app_config->verbose)
              cur_env.set_verbose(DB_VERB_REPLICATION, 1);
         cur_env.set_verbose(DB_VERB_REPMGR_MISC, 1);
         cur_env.set_verbose(DB_VERB_RECOVERY, 1);
         cur_env.set_verbose(DB_VERB_REPLICATION, 1);
         cur_env.set_verbose(DB_VERB_REP_ELECT, 1);
         cur_env.set_verbose(DB_VERB_REP_LEASE, 1);
         cur_env.set_verbose(DB_VERB_REP_SYNC, 1);
         cur_env.set_verbose(DB_VERB_REPMGR_MISC, 1);
         // Set replication group election priority for this environment.
         // An election first selects the site with the most recent log
         // records as the new master. If multiple sites have the most
         // recent log records, the site with the highest priority value
         // is selected as master.
         cur_env.rep_set_priority(app_config->priority);
         // Set the policy that determines how master and client sites
         // handle acknowledgement of replication messages needed for
         // permanent records. The default policy of "quorum" requires only
         // a quorum of electable peers sufficient to ensure a permanent
         // record remains durable if an election is held. The "all" option
         // requires all clients to acknowledge a permanent replication
         // message instead.
         cur_env.repmgr_set_ack_policy(app_config->ack_policy);
         // Set the threshold for the minimum and maximum time the client
         // waits before requesting retransmission of a missing message.
         // Base these values on the performance and load characteristics
         // of the master and client host platforms as well as the round
         // trip message time.
         cur_env.rep_set_request(20000, 500000);
         // Configure deadlock detection to ensure that any deadlocks
         // are broken by having one of the conflicting lock requests
         // rejected. DB_LOCK_DEFAULT uses the lock policy specified
         // at environment creation time or DB_LOCK_RANDOM if none was
         // specified.
         cur_env.set_lk_detect(DB_LOCK_DEFAULT);
         // The following base replication features may also be useful to your
         // application. See Berkeley DB documentation for more details.
         // - Master leases: Provide stricter consistency for data reads
         // on a master site.
         // - Timeouts: Customize the amount of time Berkeley DB waits
         // for such things as an election to be concluded or a master
         // lease to be granted.
         // - Delayed client synchronization: Manage the master site's
         // resources by spreading out resource-intensive client
         // synchronizations.
         // - Blocked client operations: Return immediately with an error
         // instead of waiting indefinitely if a client operation is
         // blocked by an ongoing client synchronization.
         cur_env.repmgr_set_local_site(app_config->this_host.host,
         app_config->this_host.port, 0);
         for ( REP_HOST_INFO *cur = app_config->other_hosts; cur != NULL;
              cur = cur->next) {
              cur_env.repmgr_add_remote_site(cur->host, cur->port,
              NULL, cur->peer ? DB_REPMGR_PEER : 0);
         // Configure heartbeat timeouts so that repmgr monitors the
         // health of the TCP connection. Master sites broadcast a heartbeat
         // at the frequency specified by the DB_REP_HEARTBEAT_SEND timeout.
         // Client sites wait for message activity the length of the
         // DB_REP_HEARTBEAT_MONITOR timeout before concluding that the
         // connection to the master is lost. The DB_REP_HEARTBEAT_MONITOR
         // timeout should be longer than the DB_REP_HEARTBEAT_SEND timeout.
         cur_env.rep_set_timeout(DB_REP_HEARTBEAT_SEND, 5000000);
         cur_env.rep_set_timeout(DB_REP_HEARTBEAT_MONITOR, 10000000);
         // The following repmgr features may also be useful to your
         // application. See Berkeley DB documentation for more details.
         // - Two-site strict majority rule - In a two-site replication
         // group, require both sites to be available to elect a new
         // master.
         // - Timeouts - Customize the amount of time repmgr waits
         // for such things as waiting for acknowledgements or attempting
         // to reconnect to other sites.
         // - Site list - return a list of sites currently known to repmgr.
         // We can now open our environment, although we're not ready to
         // begin replicating. However, we want to have a dbenv around
         // so that we can send it into any of our message handlers.
         cur_env.set_cachesize(0, CACHESIZE, 0);
         cur_env.set_flags(DB_REP_PERMANENT, 1);
         //cur_env.set_flags(DB_TXN_WRITE_NOSYNC, 1);
    /*     u_int32_t maxlocks=300000;
         if (maxlocks != 0)
              cur_env.set_lk_max_locks(maxlocks);
         u_int32_t maxlocks_o=300000;
         if (maxlocks_o != 0)
              cur_env.set_lk_max_objects(maxlocks_o);
         u_int32_t maxmutex=300000;
         if (maxmutex != 0)
              cur_env.mutex_set_max(maxmutex);
         DbEnv          *m_env=&cur_env;
         m_env->set_flags(DB_TXN_NOSYNC, 1);
         m_env->set_lk_max_lockers(60000);
         m_env->set_lk_max_objects(60000);
         m_env->set_lk_max_locks(60000);
         m_env->set_tx_max(60000);
         //m_env->repmgr_set_ack_policy(DB_REPMGR_ACKS_NONE);
         m_env->rep_set_timeout(DB_REP_ACK_TIMEOUT, 50 * 1000); //50ms
         m_env->rep_set_timeout(DB_REP_CHECKPOINT_DELAY, 0);
         //m_env->rep_set_timeout(DB_REP_CONNECTION_RETRY, 30 * 1000 * 1000); // 30 seconds
         m_env->rep_set_timeout(DB_REP_ELECTION_TIMEOUT, 1 * 1000 * 1000); // 5 seconds
         m_env->rep_set_timeout(DB_REP_FULL_ELECTION_TIMEOUT, 5 * 1000 * 1000); // 5 seconds
         m_env->rep_set_timeout(DB_REP_CONNECTION_RETRY, 5 * 1000 * 1000);
         //m_env->rep_set_timeout(DB_REP_ELECTION_RETRY, 10 * 1000 * 1000); //10 seconds
         //m_env->rep_set_timeout(DB_REP_HEARTBEAT_MONITOR, 80 * 1000 * 1000); //80 seconds
         //m_env->rep_set_timeout(DB_REP_HEARTBEAT_SEND, 500 * 1000); //500 milli seconds
         //The minimum number of microseconds a client waits before requesting retransmission
         u_int32_t rep_req_min = 40000; //40 000 microsec = 40 mili
         //The maximum number of microseconds a client waits before requesting retransmission
         u_int32_t rep_req_max = 1280000;// 1 280 000 microsec = 1.28 sec
         u_int32_t rep_limit_gbytes = 0;
         u_int32_t rep_limit_bytes = 100 * 1024 * 1024; // 100MB
         m_env->rep_set_request(rep_req_min, rep_req_max);
         m_env->rep_set_limit(rep_limit_gbytes, rep_limit_bytes);
         cur_env.open(app_config->home, DB_CREATE | DB_RECOVER |
         DB_THREAD | DB_INIT_REP | DB_INIT_LOCK | DB_INIT_LOG |
         DB_INIT_MPOOL | DB_INIT_TXN , 0);
         //keep old function for chain
         //old_rep_process_message=cur_env.get_DB_ENV()->rep_process_message;
         //derouting
         //cur_env.get_DB_ENV()->rep_process_message=my_rep_process_message;
         /*int _i;
         cur_env.log_get_config(DB_LOG_DIRECT, &_i);printf ("DB_LOG_DIRECT = %d\n",_i);
         cur_env.log_get_config(DB_LOG_DSYNC, &_i);printf ("DB_LOG_DSYNC = %d\n",_i);
         cur_env.log_get_config(DB_LOG_AUTO_REMOVE, &_i);printf ("DB_LOG_AUTO_REMOVE = %d\n",_i);
         cur_env.log_get_config(DB_LOG_IN_MEMORY, &_i);printf ("DB_LOG_IN_MEMORY = %d\n",_i);
         cur_env.log_get_config(DB_LOG_ZERO,&_i);printf ("DB_LOG_ZERO = %d\n",_i);
         // Start checkpoint and log archive support threads.
         (void)thread_create(&ckp_thr, NULL, checkpoint_thread, &cur_env);
         (void)thread_create(&lga_thr, NULL, log_archive_thread, &cur_env);
         (void)thread_create(&dmy_thr, NULL, dummy_write_thread, &cur_env);
         cur_env.repmgr_start(3, app_config->start_policy);
    }

    int RepQuoteExample::terminate() {
         try {
              // Wait for checkpoint and log archive threads to finish.
              // Windows does not allow NULL pointer for exit code variable.
              thread_exit_status_t exstat;
              (void)thread_join(lga_thr, &exstat);
              (void)thread_join(ckp_thr, &exstat);
              (void)thread_join(dmy_thr, &exstat);
              // We have used the DB_TXN_NOSYNC environment flag for
              // improved performance without the usual sacrifice of
              // transactional durability, as discussed in the
              // "Transactional guarantees" page of the Reference
              // Guide: if one replication site crashes, we can
              // expect the data to exist at another site. However,
              // in case we shut down all sites gracefully, we push
              // out the end of the log here so that the most
              // recent transactions don't mysteriously disappear.
              cur_env.log_flush(NULL);
              cur_env.close(0);
         } catch (DbException dbe) {
              cout << "error closing environment: " << dbe.what() << endl;
         return 0;
    void RepQuoteExample::prompt() {
         cout << "QUOTESERVER";
         if (!app_data.is_master)
              cout << "(read-only)";
         cout << "> " << flush;
    void log(const char *msg) {
    time_t currentTime;
    // get and print the current time
    time (&currentTime); // fill now with the current time
         char buff[255];
         strncpy(buff,ctime(&currentTime),sizeof(buff));
         char *p;
         for(p =buff ; *p != '\n'; p++);
         *p = '\0';
         cerr << buff << " - " << msg << endl;
    // Simple command-line user interface:
    // - enter "<stock symbol> <price>" to insert or update a record in the
    //     database;
    // - just press Return (i.e., blank input line) to print out the contents of
    //     the database;
    // - enter "quit" or "exit" to quit.
    void RepQuoteExample::doloop() {
         DbHolder dbh1(&cur_env,DATABASE);
         DbHolder dbh2(&cur_env,DATABASE2);
         DbHolder *dbh=&dbh1;
         DbTxn *txn;
         string input;
    bool truncate = false;
         char *c;
         using_history();
         g_repquote=*dbh;
         int loop_rate = 0;
         int txn_rate = 500;
         while (prompt(), /*getline(cin, input)*/c=readline(NULL)) {
              input=std::string(c);
              add_history(c);
              free(c);
              int start_loop = 0;
              int end_loop = 0;
              int start_loop_d = 0;
              int end_loop_d = 0;
              istringstream is(input);
              string token1, token2, token3;
    truncate = false;
    start_loop = 0;
    end_loop = 0;
              // Read 0, 1 or 2 tokens from the input.
              int count = 0;
              if (is >> token1) {
                   count++;
                   if (is >> token2)
                   count++;
                   if (is >> token3)
                   count++;
              if (count == 1) {
         if (token1 == "truncate" ) {
                        truncate = true;     
                   else if (token1 == "env" ){
                        print_env(&cur_env);
                        continue;
         else if (token1 == "verbose" ) {
                        app_config->verbose = !app_config->verbose;
                        if (app_config->verbose)
                             cur_env.set_verbose(DB_VERB_REPLICATION, 1);
                             cur_env.set_verbose(DB_VERB_REPMGR_MISC, 1);
                             cur_env.set_verbose(DB_VERB_RECOVERY, 1);
                             cur_env.set_verbose(DB_VERB_REP_ELECT, 1);
                             cur_env.set_verbose(DB_VERB_REP_LEASE, 1);
                             cur_env.set_verbose(DB_VERB_REP_SYNC, 1);
                             cur_env.set_verbose(DB_VERB_REPMGR_MISC, 1);
                             log("verbose is on");
                        else
                             cur_env.set_verbose(DB_VERB_REPLICATION, 0);
                             cur_env.set_verbose(DB_VERB_REPMGR_MISC, 0);
                             cur_env.set_verbose(DB_VERB_RECOVERY, 0);
                             cur_env.set_verbose(DB_VERB_REP_ELECT, 0);
                             cur_env.set_verbose(DB_VERB_REP_LEASE, 0);
                             cur_env.set_verbose(DB_VERB_REP_SYNC, 0);
                             cur_env.set_verbose(DB_VERB_REPMGR_MISC, 0);
                             log("verbose is off");
                        continue;
         else if (token1 == "print" ) {
                   print_stocks(*dbh);
                        count = 0;      
         else if (token1 == "db1" ) {
                        dbh=&dbh1;
                        g_repquote=*dbh;
                        log( "switch to Db1");
                        count = 0;      
         else if (token1 == "db2" ) {
                        dbh=&dbh2;
                        g_repquote=*dbh;
                        log( "switch to Db2");
                        count = 0;      
                   else if (token1 == "exit" || token1 == "quit") {
                        app_data.app_finished = 1;
                        break;
                   } else {
                        log("Format: <stock> <price>");
                        continue;
    else if (count == 2)
                   if (token1 == "loop_rate" ){
         loop_rate = atoi(token2.c_str());
                        continue;
                   if (token1 == "txn_rate" ){
         txn_rate = atoi(token2.c_str());
                        continue;
    else if (count == 3)
    if (token1 == "loop" ) {
    start_loop = atoi(token2.c_str());
    end_loop = start_loop + atoi(token3.c_str());
    if (token1 == "delete" ) {
    start_loop_d = atoi(token2.c_str());
    end_loop_d = start_loop_d + atoi(token3.c_str());
              // Here we know count is either 0 or 2, so we're about to try a
              // DB operation.
              // Open database with DB_CREATE only if this is a master
              // database. A client database uses polling to attempt
              // to open the database without DB_CREATE until it is
              // successful.
              // This DB_CREATE polling logic can be simplified under
              // some circumstances. For example, if the application can
              // be sure a database is already there, it would never need
              // to open it with DB_CREATE.
              if (!dbh->ensure_open(app_data.is_master))
                   continue;
              try {
                   if (count == 0)
                        if (app_data.in_client_sync)
                             log( "Cannot read data during client initialization - please try again.");
                        else
                             print_stocks_size(*dbh);
                   else if (!app_data.is_master)
                        log("Can't update at client");
                   else {
                        if (truncate)
    u_int32_t no_remove;
                        txn = NULL;
    cur_env.txn_begin(NULL, &txn, DB_TXN_NOWAIT);
                             try
              (*dbh)->truncate(txn, &no_remove, 0);
    // commit
    txn->commit(0);
    txn = NULL;
    } catch (DbException &e) {
    std::cout << "Error on txn commit: " << e.what() << std::endl;
                        //     } catch (DbDeadlockException &) {
                        if (txn != NULL)
                             (void)txn->abort();
    // std::cout << "Error on txn commit: " << std::endl;
    else if (start_loop)
    int j=0;
    for (int i=start_loop; i<=end_loop; i=i+txn_rate)
    //transaction begin
                   txn = NULL;
                   cur_env.txn_begin(NULL, &txn, 0);
    for (j=i; j<=end_loop && j<=(i+txn_rate); j++)
                                  Dbt key, value;
         std::string key1, value1;
         std::stringstream sstrm;
         sstrm << "key" << j << ends;
         key1 = sstrm.str();
                   key.set_data((void *)key1.c_str());
                   key.set_size((u_int32_t)strlen(key1.c_str()));
         sstrm.str("");
         int payload = rand() + j;
                                  sstrm << "price" << payload << ends;
         value1 = sstrm.str();
                   value.set_data((void *)value1.c_str());
                   value.set_size((u_int32_t)strlen(value1.c_str()));
         // Perform the database put
         (*dbh)->put(txn, &key, &value, 0);
                             printf("Kill me !!\n");
                             kill(getpid(),-9);
                             exit(0);
         try
                                  // commit
                        txn->commit(0);
                        txn = NULL;
                   } catch (DbException &e) {
                        std::cout << "Error on txn commit: " << e.what() << std::endl;
                             if (loop_rate>0)
                                  usleep(txn_rate * 1000 * 1000 / loop_rate);
                        else if (start_loop_d)
    int j=0;
    for (int i=start_loop_d; i<=end_loop_d; i=i+100)
    //transaction begin
                   txn = NULL;
                   cur_env.txn_begin(NULL, &txn, 0);
    for (j=i; j<=end_loop_d && j<=(i+100); j++)
                                  Dbt key, value;
         std::string key1, value1;
         std::stringstream sstrm;
         sstrm << "key" << j << ends;
         key1 = sstrm.str();
                   key.set_data((void *)key1.c_str());
                   key.set_size((u_int32_t)strlen(key1.c_str()));
         // Perform the database put
         (*dbh)->del(txn, &key, 0);
         try
                                  // commit
                        txn->commit(0);
                        txn = NULL;
                   } catch (DbException &e) {
                        std::cout << "Error on txn commit: " << e.what() << std::endl;
                        else
                             const char *symbol = token1.c_str();
                             StringDbt key(const_cast<char*>(symbol));
                             const char *price = token2.c_str();
                             StringDbt data(const_cast<char*>(price));
                             (*dbh)->put(NULL, &key, &data, 0);
              } catch (DbDeadlockException e) {
                   log("please retry the operation");
                   dbh->close();
              } catch (DbRepHandleDeadException e) {
                   log("please retry the operation");
                   dbh->close();
              } catch (DbException e) {
                   if (e.get_errno() == DB_REP_LOCKOUT) {
                   log("please retry the operation");
                   dbh->close();
                   } else
                   throw;
         dbh->close();
    void RepQuoteExample::event_callback(DbEnv* dbenv, u_int32_t which, void *info)
         static char buf[256];
         APP_DATA app = (APP_DATA)dbenv->get_app_private();
         info = NULL;          /* Currently unused. */
         switch (which) {
         case DB_EVENT_REP_CLIENT:
              app->is_master = 0;
              app->in_client_sync = 1;
              sprintf(buf,"%s - %s",progname,"CLIENT");
              //EZ->dbenv->set_errpfx(buf);
              log("DB_EVENT_REP_CLIENT.");
              break;
         case DB_EVENT_REP_MASTER:
              app->is_master = 1;
              app->in_client_sync = 0;
              sprintf(buf,"%s - %s",progname,"MASTER");
              //EZ->dbenv->set_errpfx(buf);
              log("DB_EVENT_REP_MASTER.");
              break;
         case DB_EVENT_REP_NEWMASTER:
              log("DB_EVENT_REP_NEWMASTER.");
              app->in_client_sync = 1;
              break;
         case DB_EVENT_REP_PERM_FAILED:
              // Did not get enough acks to guarantee transaction
              // durability based on the configured ack policy. This
              // transaction will be flushed to the master site's
              // local disk storage for durability.
              log("DB_EVENT_REP_PERM_FAILED.");
              log("Insufficient acknowledgements to guarantee transaction durability.");
              break;
         case DB_EVENT_REP_STARTUPDONE:
              app->in_client_sync = 0;
              log("DB_EVENT_REP_STARTUPDONE.");
              break;
         case DB_EVENT_REP_ELECTION_FAILED:
              log("DB_EVENT_REP_ELECTION_FAILED.");
              //g_runner->init(g_config);
              printf("Kill me !!\n");
              kill(getpid(),-9);
              exit(0);
              break;
         case DB_EVENT_REP_DUPMASTER:
              log("DB_EVENT_REP_DUPMASTER.");
              break;
         default:
              dbenv->errx("ignoring event %d", which);
    void RepQuoteExample::print_stocks_size(Db *dbp) {
         DB_BTREE_STAT *statp;
    dbp->stat(NULL, &statp, 0);
         log("db_stat");
    cout << "***************************************** >>>>>>>>>>> : database contains " << (u_long)statp->bt_ndata << " records\n";
    void RepQuoteExample::print_env(DbEnv *dbenv) {
         dbenv->stat_print(DB_STAT_ALL);
    void RepQuoteExample::print_stocks(Db *dbp) {
         StringDbt key, data;
    #define     MAXKEYSIZE     10
    #define     MAXDATASIZE     20
         char keybuf[MAXKEYSIZE + 1], databuf[MAXDATASIZE + 1];
         char kbuf, dbuf;
         memset(&key, 0, sizeof(key));
         memset(&data, 0, sizeof(data));
         kbuf = keybuf;
         dbuf = databuf;
         DbcAuto dbc(dbp, 0, 0);
         cout << "\tSymbol\tPrice" << endl
              << "\t======\t=====" << endl;
    int no_records =0;
         for (int ret = dbc->get(&key, &data, DB_FIRST);
              ret == 0;
              ret = dbc->get(&key, &data, DB_NEXT)) {
              key.get_string(&kbuf, MAXKEYSIZE);
              data.get_string(&dbuf, MAXDATASIZE);
    no_records++;
              cout << "\t" << keybuf << "\t" << databuf << endl;
    cout << "********************** NO Records " << no_records << endl;
         cout << endl << flush;
         dbc.close();
    static void usage() {
         cerr << "usage: " << progname << " -h home -l host:port [-CM]"
         << "[-r host:port][-R host:port]" << endl
         << " [-a all|quorum][-b][-n nsites][-p priority][-v]" << endl;
         cerr << "\t -h home (required; h stands for home directory)" << endl
         << "\t -l host:port (required; l stands for local)" << endl
         << "\t -C or -M (optional; start up as client or master)" << endl
         << "\t -r host:port (optional; r stands for remote; any "
         << "number of these" << endl
         << "\t may be specified)" << endl
         << "\t -R host:port (optional; R stands for remote peer; only "
         << "one of" << endl
         << "\t these may be specified)" << endl
         << "\t -a all|quorum (optional; a stands for ack policy)" << endl
         << "\t -b (optional; b stands for bulk)" << endl
         << "\t -n nsites (optional; number of sites in replication "
         << "group; defaults " << endl
         << "\t     to 0 to try to dynamically compute nsites)" << endl
         << "\t -p priority (optional; defaults to 100)" << endl
         << "\t -v (optional; v stands for verbose)" << endl;
         exit(EXIT_FAILURE);
    int main(int argc, char **argv) {
         RepConfigInfo config;
          int ch;
          char *portstr, *tmphost;
         int tmpport;
         bool tmppeer;
         config.no_dummy_wr = false;
         // Extract the command line parameters
         while ((ch = getopt(argc, argv, "E:a:bCh:l:Mn:p:R:r:vw")) != EOF) {
              tmppeer = false;
              switch (ch) {
              case 'a':
                   if (strncmp(optarg, "all", 3) == 0)
                        config.ack_policy = DB_REPMGR_ACKS_ALL;
                   else if (strncmp(optarg, "quorum", 6) != 0)
                        usage();
                   break;
              case 'b':
                   config.bulk = true;
                   break;
              case 'C':
                   config.start_policy = DB_REP_CLIENT;
                   break;
              case 'E':
    config.start_policy = DB_REP_ELECTION;
    break;
              case 'h':
                   config.home = optarg;
                   break;
              case 'l':
                   config.this_host.host = strtok(optarg, ":");
                   if ((portstr = strtok(NULL, ":")) == NULL) {
                        cerr << "Bad host specification." << endl;
                        usage();
                   config.this_host.port = (unsigned short)atoi(portstr);
                   config.got_listen_address = true;
                   break;
              case 'M':
                   config.start_policy = DB_REP_MASTER;
                   break;
              case 'n':
                   config.totalsites = atoi(optarg);
                   break;
              case 'p':
                   config.priority = atoi(optarg);
                   break;
              case 'R':
                   tmppeer = true; // FALLTHROUGH
              case 'r':
                   tmphost = strtok(optarg, ":");
                   if ((portstr = strtok(NULL, ":")) == NULL) {
                        cerr << "Bad host specification." << endl;
                        usage();
                   tmpport = (unsigned short)atoi(portstr);
                   config.addOtherHost(tmphost, tmpport, tmppeer);
                   break;
              case 'v':
                   config.verbose = true;
                   break;
              case 'w':
                   config.no_dummy_wr = true;
                   //config.priority = 2;
                   break;
              case '?':
              default:
                   usage();
         // Error check command line.
         if ((!config.got_listen_address) || config.home == NULL)
              usage();
         RepQuoteExample runner;
         g_runner=&runner;
         g_config=&config;
         try {
              runner.init(&config);
              runner.doloop();
          } catch (DbException &dbe) {
              cerr << "Caught an exception during initialization or"
                   << " processing: " << dbe.what() << endl;
         runner.terminate();
         return 0;
    // This is a very simple thread that performs checkpoints at a fixed
    // time interval. For a master site, the time interval is one minute
    // plus the duration of the checkpoint_delay timeout (30 seconds by
    // default.) For a client site, the time interval is one minute.
     void *checkpoint_thread(void *args)
         DbEnv *env;
         APP_DATA *app;
         int i, ret;
         env = (DbEnv *)args;
         app = (APP_DATA *)env->get_app_private();
         for (;;) {
              // Wait for one minute, polling once per second to see if
              // application has finished. When application has finished,
              // terminate this thread.
              for (i = 0; i < 60; i++) {
                   sleep(1);
                   if (app->app_finished == 1)
                        return ((void *)EXIT_SUCCESS);
              // Perform a checkpoint.
              // original line
              if ((ret = env->txn_checkpoint(0, 0, 0)) != 0) {
              //if ((ret = env->txn_checkpoint(0, 0, DB_FORCE)) != 0) {
                   env->err(ret, "Could not perform checkpoint.\n");
                   return ((void *)EXIT_FAILURE);
    // This is a simple log archive thread. Once per minute, it removes all but
    // the most recent 3 logs that are safe to remove according to a call to
    // DBENV->log_archive().
    // Log cleanup is needed to conserve disk space, but aggressive log cleanup
    // can cause more frequent client initializations if a client lags too far
    // behind the current master. This can happen in the event of a slow client,
    // a network partition, or a new master that has not kept as many logs as the
    // previous master.
    // The approach in this routine balances the need to mitigate against a
    // lagging client by keeping a few more of the most recent unneeded logs
    // with the need to conserve disk space by regularly cleaning up log files.
    // Use of automatic log removal (DBENV->log_set_config() DB_LOG_AUTO_REMOVE
    // flag) is not recommended for replication due to the risk of frequent
    // client initializations.
     void *log_archive_thread(void *args)
         DbEnv *env;
         APP_DATA *app;
         char **begin, **list;
         int i, listlen, logs_to_keep, minlog, ret;
         env = (DbEnv *)args;
         app = (APP_DATA *)env->get_app_private();
         logs_to_keep = 3;
         for (;;) {
              // Wait for one minute, polling once per second to see if
              // application has finished. When application has finished,
              // terminate this thread.
              for (i = 0; i < 60; i++) {
                   sleep(1);
                   if (app->app_finished == 1)
                        return ((void *)EXIT_SUCCESS);
              // Get the list of unneeded log files.
              if ((ret = env->log_archive(&list, DB_ARCH_ABS)) != 0) {
                   env->err(ret, "Could not get log archive list.");
                   return ((void *)EXIT_FAILURE);
              if (list != NULL) {
                   listlen = 0;
                   // Get the number of logs in the list.
                   for (begin = list; *begin != NULL; begin++, listlen++);
                   // Remove all but the logs_to_keep most recent
                   // unneeded log files.
                   minlog = listlen - logs_to_keep;
                   for (begin = list, i= 0; i < minlog; list++, i++) {
                        if ((ret = unlink(*list)) != 0) {
                             env->err(ret,
                             "logclean: remove %s", *list);
                             env->errx(
                             "logclean: Error remove %s", *list);
                             free(begin);
                             return ((void *)EXIT_FAILURE);
                   free(begin);
    #define DATABASE_DUMMY "dummy.db"
     void create_dummy_db(DB_ENV *env, DB **dbp)
    DB_ENV *dbenv=env;
    int ret;
    u_int32_t db_flags;
    if ((ret = db_create(dbp, dbenv, 0)) != 0)
    dbenv->err(dbenv, ret, "create_dummy_db: db_create");
    db_flags = DB_AUTO_COMMIT | DB_CREATE;
    //if ((ret = (*dbp)->open(*dbp,NULL, DATABASE, NULL, DB_BTREE, db_flags, 0)) != 0)
    if ((ret = (*dbp)->open(*dbp,NULL, NULL, DATABASE_DUMMY, DB_BTREE, db_flags, 0)) != 0)
    dbenv->err(dbenv, ret, "create_dummy_db: DB->open");
     void reopen_dummy_db(DB_ENV *env, DB **dbp)
    DB_ENV *dbenv=env;
    int ret;
    u_int32_t db_flags;
    if ((ret = db_create(dbp, dbenv, 0)) != 0)
    dbenv->err(dbenv, ret, "create_dummy_db: db_create");
    db_flags = DB_AUTO_COMMIT | DB_CREATE;
    //if ((ret = (*dbp)->open(*dbp,NULL, DATABASE, NULL, DB_BTREE, db_flags, 0)) != 0)
    if ((ret = (*dbp)->open(*dbp,NULL, NULL, DATABASE_DUMMY, DB_BTREE, db_flags, 0)) != 0)
    dbenv->err(dbenv, ret, "reopen_dummy_db: DB->open");
     void perform_db_operation(DB_ENV *env, DB **dbp, bool bRead)
    //main loop
    //DB *dbp=NULL;
    DB_ENV *dbenv=env;
    int ret;
    u_int32_t db_flags;
    DBT key, data;
    char buf[20]="dummy", *rbuf;
    rbuf=buf;
    if (*dbp == NULL)
    create_dummy_db(dbenv, dbp);
    if (! bRead)
         memset(&key, 0, sizeof(key));
         memset(&data, 0, sizeof(data));
         key.data = buf;
         key.size = (u_int32_t)strlen(buf);
         data.data = rbuf;
         data.size = (u_int32_t)strlen(rbuf);
         if ((ret = (*dbp)->put(*dbp, NULL, &key, &data, 0)) != 0)
              if (ret == DB_REP_HANDLE_DEAD)
                   //create_dummy_db(dbenv, dbp);
                   reopen_dummy_db(dbenv, dbp);
                   (*dbp)->err(*dbp, ret, "DB->put :");
              else
              if (ret != DB_KEYEXIST)
                   (*dbp)->err(*dbp, ret, "perform_db_operation: DB->put");
         else
              DB_BTREE_STAT *statp;
              (*dbp)->stat(*dbp,NULL, &statp, 0);
              std::cout<<"dbp read stats: key#"<< statp->bt_nkeys <<std::endl;
     void *dummy_write_thread(void *args)
         DbEnv *env;
         APP_DATA *app;
         char **begin, **list;
         int i, listlen, logs_to_keep, minlog, ret;
          DB *m_dbp = NULL; // database handle, created on first use by perform_db_operation
         env = (DbEnv *)args;
         app = (APP_DATA *)env->get_app_private();
         logs_to_keep = 3;
         for (;;) {
              if (! app->no_dummy_wr)
                   if (app->is_master)
                   perform_db_operation(env->get_DB_ENV(),&m_dbp,false);
                        //env->txn_checkpoint(0, 0, DB_FORCE);
              usleep(1 * 1000 * 1000);
              else
                   if (app->is_master)
                        //DB *db_quote=g_repquote->get_DB();
                        //perform_db_operation(env->get_DB_ENV(),&db_quote,true);
                        //if (g_repquote)
                        //     g_runner->print_stocks_size(g_repquote);
                        //env->txn_checkpoint(0, 0, DB_FORCE);
                        //perform_db_operation(env->get_DB_ENV(),&m_dbp,false);
                        env->rep_flush();
              usleep(4 * 1000 * 1000);
     Here is my script to simulate the split brain:
    #!/bin/sh
    [ -z "$node1" ] && node1=10.10.32.121
    [ -z "$node2" ] && node2=10.10.32.91
    trap myend 0 1 2 3 6 9 14 15
    myend()
         echo "Receive signal to stop test..."
         un_split_brain
         echo "done"
         exit 1
    split_brain()
         echo -n "Split-Brain at node $node..."
         snmpset -m ALL -v 2c -c svil 10.10.0.100 ifAdminStatus.41 i 2 >/dev/null 2>&1
         echo "done"
    un_split_brain()
         echo -n "Undo Split-Brain at node $node..."
         snmpset -m ALL -v 2c -c svil 10.10.0.100 ifAdminStatus.41 i 1 >/dev/null 2>&1
         echo "done"
    is_slave()
         local r=$(ssh root@$1 "tail -2 /tmp/BDB.log" | grep -c CLIENT)
         [ $r -gt 1 ] && ret=1 || ret=0
         return $ret
    is_master()
         local r=$(ssh root@$1 "tail -2 /tmp/BDB.log" | grep -c MASTER)
         [ $r -gt 1 ] && ret=1 || ret=0
         return $ret
    wait_for_master()
         echo -n "Waiting for MASTER at node $node ... "
         is_master $node
         r=$?
         while ( [ ! $r -eq 1 ] )
         do
         usleep 500000
         is_master $node
         r=$?
         echo -n "."
         done
         echo "done"
    wait_for_slave()
         local r
         local tm
         tm=0
         echo -n "Waiting for SLAVE at node $node ... "
         is_slave $node
         r=$?
         while ( [ ! $r -eq 1 ] )
         do
              usleep 500000
              is_slave $node
              r=$?
              echo -n "."
              tm=$((tm+1))
              [ $tm -gt 120 ] && break
         done
         [ $tm -gt 120 ] && ret=0 || ret=1
         echo "done"
         return $ret
    run_test_split_brain()
         local nt
         nt=1
         nfails=0
         x=4
         [ -z "$1" ] && node=$node2
         while ((1))
         do
              printf "*************** TEST [%02d] ********************\n" $nt
              split_brain
              wait_for_master
              x=$((RANDOM%9))
              echo -n " waiting $x sec ..."
              sleep $x
              echo "done"
              un_split_brain
              wait_for_slave
              r=$?
              [ ! $r -eq 1 ] && echo "`date` - test [$nt] - fails ..." || echo "`date` - test [$nt] - OK ."
              [ ! $r -eq 1 ] && nfails=$((nfails+1))
              perc_failure=$(echo "100.0 - $nfails / $nt * 100.0" | bc -l)
              echo "************************************************ [% Success test $perc_failure % ]"
              nt=$((nt+1))
              x=$((RANDOM%9))
              echo -n " waiting $x sec ..."
              sleep $x
         done
    run_test_split_brain
     Here is the Makefile used to run the two environments.
     I run:
     - make run
     and, in another window, sh test_split_brain.sh
    node1?=10.10.32.121
    node2?=10.10.32.91
    nsite?=2
    debug?=0
    all: RepQuoteExampleEric install
    RepConfigInfo.o: RepConfigInfo.cpp RepConfigInfo.h
         g++ -I/usr/local/BerkeleyDB.5.1/include/ -g -O0 -c RepConfigInfo.cpp -o RepConfigInfo.o
    RepQuoteExampleEric: RepQuoteExampleEric.cpp RepConfigInfo.o
         g++ -I/usr/local/BerkeleyDB.5.1/include/ -g -O0 RepQuoteExampleEric.cpp RepConfigInfo.o -o RepQuoteExampleEric -L /usr/local/BerkeleyDB.5.1/lib/ -lreadline -lcurses -ldb_cxx
    kill:
         -ssh -X root@$(node1) "killall -9 /root/RepQuoteExampleEric"
         -ssh -X root@$(node2) "killall -9 /root/RepQuoteExampleEric"
    run: RepQuoteExampleEric kill install clean_env
         ssh -X root@$(node1) "xterm -geom 100x20+100+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/RepQuoteExampleEric -h /opt/bdb/ -l 2.0.0.110:12345 -r 2.0.0.210:12345 -a quorum -b -n $(nsite) -v | tee /tmp/BDB.log\"" &
         ssh -X root@$(node2) "xterm -geom 100x20+800+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/RepQuoteExampleEric -h /opt/bdb/ -l 2.0.0.210:12345 -r 2.0.0.110:12345 -a quorum -b -n $(nsite) -v -w | tee /tmp/BDB.log\"" &
    run_node2: clean_env2
         ssh -X root@$(node2) "xterm -geom 100x20+800+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/RepQuoteExampleEric -h /opt/bdb/ -l 2.0.0.210:12345 -r 2.0.0.110:12345 -a quorum -b -n $(nsite) -v -w | tee /tmp/BDB.log\"" &
    debug_node2: clean_env2
         ssh -X root@$(node2) "xterm -geom 100x20+800+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/RepQuoteExampleEric -h /opt/bdb/ -l 2.0.0.210:12345 -r 2.0.0.110:12345 -a quorum -b -n $(nsite) -v -w | tee /tmp/BDB.log\"" &
         sleep 3
         ssh -X root@$(node2) /sbin/pidof RepQuoteExampleEric >/tmp/pid
         ssh -X root@$(node2) ~/kdbg /root/db-5.1.19/examples/cxx/excxx_repquote/RepQuoteExampleEric -p `cat /tmp/pid`
    run_debug_node1: RepQuoteExampleEric kill install clean_env
         ssh -X root@$(node1) "xterm -geom 100x20+100+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/kdbg /root/RepQuoteExampleEric\" " &
         ssh -X root@$(node2) "xterm -geom 100x20+800+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/RepQuoteExampleEric -h /opt/bdb/ -l 2.0.0.210:12345 -r 2.0.0.110:12345 -a quorum -b -n $(nsite) -v\"" &
    run_debug_node2: RepQuoteExampleEric kill install clean_env
         ssh -X root@$(node1) "xterm -geom 100x20+100+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/RepQuoteExampleEric -h /opt/bdb/ -l 2.0.0.110:12345 -r 2.0.0.210:12345 -a quorum -b -n $(nsite) -v\" " &
         ssh -X root@$(node2) "xterm -geom 100x20+800+100 -e \"LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib/ /root/kdbg /root/RepQuoteExampleEric\"" &
    install: RepQuoteExampleEric
         scp RepQuoteExampleEric root@$(node1):~
         scp RepQuoteExampleEric root@$(node2):~
    clean_env: clean_env1 clean_env2
    clean_env1:
         ssh -X root@$(node1) rm -rf /opt/bdb/*
    clean_env2:
         ssh -X root@$(node2) rm -rf /opt/bdb/*

  • Two Threshold Analog to Digital

     I was asked to develop some code that takes in a signal on an analog input, converts it to a digital signal, and then performs frequency, duty cycle, and signal integrity testing on it.  The built-in NI functions for these tasks were insufficient because we needed to be able to detect a single dropped cycle.  With a real-world signal I realize there may be noise, and a single threshold for the analog-to-digital conversion may show transitions that aren't there, so I planned on developing some kind of debounce code.
     Instead, someone suggested using two thresholds, one low and one high, and only considering the signal to have transitioned when it goes above the high threshold after having gone below the low one.
     Attached is my attempt at that method.  This VI simulates a sine wave with a lot of noise, then applies a single threshold to show how imperfect that can be.  Using the same signal, it then applies a two-level threshold, which works much better but introduces a slight shift in the time domain, and the beginning contains unknown values because neither transition has occurred by the first sample.
     Any pointers or suggestions to improve my implementation are appreciated.  Thanks.
    EDIT: This does use an OpenG function from the Array palette.
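     For readers following along in text form, here is a minimal C++ sketch of the two-threshold (hysteresis) conversion described above; the function name, the example threshold values, and the choice of a false initial state are illustrative assumptions, not part of the attached VI.

     #include <cstdio>
     #include <vector>

     // Convert an analog trace to a digital one using two thresholds (hysteresis).
     // The output only goes high after the signal exceeds 'high', and only goes low
     // after it drops below 'low'; samples in between keep the previous state.
     std::vector<bool> hysteresis_threshold(const std::vector<double>& x,
                                            double low, double high,
                                            bool initial_state = false)
     {
         std::vector<bool> out;
         out.reserve(x.size());
         bool state = initial_state;           // unknown until the first crossing
         for (double v : x) {
             if (v > high)      state = true;  // rising crossing of the upper threshold
             else if (v < low)  state = false; // falling crossing of the lower threshold
             out.push_back(state);             // otherwise hold the last state (debounce)
         }
         return out;
     }

     int main() {
         // Example values chosen only for illustration.
         std::vector<double> signal = {0.1, 0.4, 0.9, 0.8, 0.55, 0.3, 0.05, 0.45, 0.95};
         std::vector<bool> digital = hysteresis_threshold(signal, 0.3, 0.7);
         for (size_t i = 0; i < signal.size(); ++i)
             std::printf("%zu: %.2f -> %d\n", i, signal[i], digital[i] ? 1 : 0);
         return 0;
     }

     Frequency and duty cycle can then be measured from the rising edges of the resulting boolean array; the slight time shift mentioned above comes from the fact that the state only changes at the upper or lower crossing rather than at a nominal mid-level crossing.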
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.
    Solved!
    Go to Solution.
    Attachments:
    Test AI to Digital With 2 Levels.vi ‏72 KB

    Why so many loops when you just need one?
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    Test AI to Digital With 2 Levels.png ‏76 KB

  • Peak detection in a xy graph

    Hi,
     I have plotted the spectral data in an XY graph indicator using a .txt file. Now I need to detect the peaks present in this spectrum, along with their locations and amplitudes. I have used Peak Detector.vi in the code. However, it is not finding the relevant peaks; instead it returns approximated values with wrong locations (x values). Is there a better and more accurate way to detect the peaks? Please refer to the attachments for the code.
    Let me know.
    Thank you.
    Attachments:
    data1.txt ‏28 KB
    read_spectrum_1.vi ‏62 KB

    Have you read the LabVIEW Help file (when looking at the BD, type Ctrl-H, then go to the "Detailed help" at the bottom of the popup) about the peak detection function?
    "Locations contains the index locations of all peaks or valleys detected in the current block of data. Because the peak detection algorithm uses a quadratic fit to find the peaks, it actually interpolates between the data points. Therefore, the indexes are not integers. In other words, the peaks found are not necessarily actual points in the input data but may be at fractions of an index and at amplitudes not found in the input array.
    To view the locations in terms of time, use the following equation.
    Time Locations[i] = t0 + dt*Locations[i] ...
    ... This Peak Detector VI is based on an algorithm that fits a quadratic polynomial to sequential groups of data points. The number of data points used in the fit is specified by width."
     Your input data is so noisy that getting this function to return the actual points that visually look like "peaks" is virtually impossible. Not to mention that there is such an offset at the beginning of the data that you are getting gazillions of false positives here. You really need to clean things up before you let LabVIEW (or at least the Peak Detector VI) loose on it if you want meaningful answers.
    BTW, setting "the threshold slightly below [7379.18]" would not necessarily result in a peak depending on your definition of "slightly," because your data has no points below 180.041, making the highest possible peak 7199.139.
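     As a rough sketch of the "clean things up first" advice (this is not the LabVIEW Peak Detector's quadratic-fit algorithm), a moving average followed by a naive local-maximum search looks roughly like this in C++; the window half-width and the threshold are placeholder assumptions.

     #include <algorithm>
     #include <cstddef>
     #include <vector>

     // Simple centered moving average used to suppress noise before peak searching.
     // 'half' is the half-width of the window; edges use a shortened window.
     std::vector<double> moving_average(const std::vector<double>& x, std::size_t half)
     {
         std::vector<double> y(x.size());
         for (std::size_t i = 0; i < x.size(); ++i) {
             std::size_t lo = (i >= half) ? i - half : 0;
             std::size_t hi = std::min(x.size() - 1, i + half);
             double sum = 0.0;
             for (std::size_t j = lo; j <= hi; ++j) sum += x[j];
             y[i] = sum / static_cast<double>(hi - lo + 1);
         }
         return y;
     }

     // Naive local-maximum search on the smoothed data: a sample is a peak if it is
     // above 'threshold' and strictly greater than both of its neighbours.
     std::vector<std::size_t> find_peaks(const std::vector<double>& y, double threshold)
     {
         std::vector<std::size_t> peaks;
         for (std::size_t i = 1; i + 1 < y.size(); ++i)
             if (y[i] > threshold && y[i] > y[i - 1] && y[i] > y[i + 1])
                 peaks.push_back(i);
         return peaks;
     }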
    Cameron
    To err is human, but to really foul it up requires a computer.
    The optimist believes we are in the best of all possible worlds - the pessimist fears this is true.
    Profanity is the one language all programmers know best.
    An expert is someone who has made all the possible mistakes.
    To learn something about LabVIEW at no extra cost, work the online LabVIEW tutorial(s):
    LabVIEW Unit 1 - Getting Started
    Learn to Use LabVIEW with MyDAQ

  • Problem in peak detection

    Respected sir/madam,
     I am processing a human radial artery pulse signal. I acquired the signal and linearised it using a filter.
     There is a problem with peak amplitude detection and location: by manual inspection we can see there are many more peaks, but the peak detector reports only 25 peak amplitudes and their locations, and I don't understand the locations it shows. I attached the data (in .lvm format) and the VI.
     Please help me rectify the problem.
    thank you
    Attachments:
    Untitled 2.vi ‏100 KB

    Once again the Dynamic Data Type (DDT) generated by the Express VIs hides the real problem.
    The data segments are too short to reliably detect the peaks - not every segment has a peak and some peaks may be partly in one segment and partly in the next. By combining all the data into one waveform before filtering and peak detecting, it is possible to get reliable matching between the "eyeball" peak detector and the software peak detector.
    Note that the large transients at the beginning of the data set are detected as peaks. These transients are due to the real transient in the data and the transients of the filters. I did not attempt to remove them. You could use Array Subset with either manual selection of the end of the transient or some automated process based on the larger amplitude and lower frequency of the transient compared to the real signal.
     There is a lot of "stuff" in this VI - things I tried and did not remove, and multiple ways of doing things.  The enabled diagram of the Diagram Disable structure has a constant with the data from your file so that I did not need to read and process the file repeatedly while working on the filters and peak detector.
    Comments in no particular order regarding what I did and how the posted VI works.
     1. The DDT data from Read from Measurement File.vi is converted to an array of waveforms. The Waveform data type is well documented and the internal data structure is readily accessible.
    2. Each element of the array of waveforms from Read from Measurement File.vi is appended to the corresponding element from the previous iteration to form one array of waveforms containing all the data in the file. This is displayed on Array of Waveform, Array of Waveform 4, and Signals total. Note that these are graphs, not charts.  The data is also put into 2D arrays as Array of Waveform 2 and Array of Waveform 3.
    3. I do not have Advanced Peak Detector PtByPt.vi, but I think the data is not truly point by point so this may be a poor choice.
    4. I used the standard filter VIs (rather than Express VIs) to stay away from DDT. The outputs are slightly different, but the Express VIs do not give you complete control or knowledge of what the filter setup is. I used Butterworth filters for both filters and adjusted the cutoff frequencies slightly to get similar waveforms.
    5. I used the standard Peak Detector.vi from the Signal Processing >> Signal Operations palette. It reliably finds 104 peaks over a wide range of widths and some variation in thresholds.  That count includes the transients at the beginning as mentioned above.
    The .2 VI contains all the junk code I described above. The .3 VI is a cleaned up version.
    Lynn
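     As a plain-code illustration of Lynn's point about combining all the segments into one record before filtering and peak detection (the segment layout here is an assumption; any detector could then be run on the concatenated array):

     #include <vector>

     // Append per-read segments into one continuous record so that peaks falling on
     // segment boundaries are not lost and indexes are global rather than per-segment.
     std::vector<double> concatenate_segments(const std::vector<std::vector<double>>& segments)
     {
         std::vector<double> all;
         for (const std::vector<double>& seg : segments)
             all.insert(all.end(), seg.begin(), seg.end());
         return all;
     }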
    Attachments:
    Untitled 2-9.2.vi ‏760 KB
    Untitled 2-9.3.vi ‏183 KB

  • Peak Detect

    Hello,
    I am trying to make a heart rate monitor using LabVIEW and arduino. The basic hardware uses an LED and a photocell. You place your finger on the LED, with the photocell above, and the photocell detects changes in the amount of light in your finger whenever your heart beats.
     My design works okay and is able to detect changes in light. However, I am having trouble measuring those changes and converting the data into a heart rate. One of the problems is the peak detect, in which you set a threshold value and the detector looks for amplitudes above or below that value. The problem is that the amount of light coming through one's finger varies wildly, so even though a pulse is clearly observable, you can't just set a fixed threshold value to look for.
     I tried to solve this by making the threshold vary, so that it would be slightly below the average; however, my code still fails to detect a pulse. Does anyone have any idea how I could resolve this issue, or whether there is a better method than using the default peak detect?
    I have attached a copy of my code, and would be very grateful if anyone could provide input. Thank you very much!
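     A minimal C++ sketch of the "threshold slightly below the average" idea (written as text, not taken from the attached VI): the baseline tracks the signal with a slow running average, a beat is counted when the light level dips below the baseline by some margin, and BPM comes from the spacing between beats. The smoothing factor, dip margin, and sample rate are illustrative assumptions.

     #include <cstddef>
     #include <vector>

     // Detect beats as dips below an adaptive baseline and convert the spacing
     // between beats to BPM. 'alpha' controls how fast the baseline follows the
     // signal; 'delta' is how far below the baseline a dip must go to count.
     std::vector<double> beats_per_minute(const std::vector<double>& light,
                                          double sample_rate_hz,
                                          double alpha = 0.01, double delta = 0.05)
     {
         std::vector<double> bpm;
         if (light.empty()) return bpm;
         double baseline = light[0];
         bool in_dip = false;
         bool have_beat = false;
         std::size_t last_beat = 0;
         for (std::size_t i = 0; i < light.size(); ++i) {
             baseline = (1.0 - alpha) * baseline + alpha * light[i]; // slow running average
             if (!in_dip && light[i] < baseline - delta) {           // entering a dip: count a beat
                 if (have_beat)
                     bpm.push_back(60.0 * sample_rate_hz / static_cast<double>(i - last_beat));
                 last_beat = i;
                 have_beat = true;
                 in_dip = true;
             } else if (in_dip && light[i] > baseline) {             // dip over, re-arm the detector
                 in_dip = false;
             }
         }
         return bpm;
     }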
    Solved!
    Go to Solution.
    Attachments:
    A-LV HR Prototype.vi ‏982 KB

    For example, here it detected a significant drop in light intensity 6 times over a 6 second period, so BPM should be 60, but it doesn't display anything.
    Other times the BPM shoots up to 200 in a second then just stays there. I can't figure out why, or if there's a better way to do it.
    Attachments:
    HR Graph.png ‏122 KB

  • 2 Channel peak detection and value look up

    Hi all,
    I have 2 different sin waves going into channel 1 and channel 2 of an oscilloscope. What I'm trying to do is:
    1. From Channel 1's input determine the time when peak values occur and save them to an array
    2. Then find out what channel 2's voltage output is at times determined from step 1. 
    I started doing this project by only using 1 channel where I was able to extract the y data as well as plug it into peak detection.vi. 
     However, after I merge the two channels together and proceed as I did with one channel, LabVIEW shows broken wires due to data type incompatibility.
     Any help on how I can locate channel 1's peak times and use those time values to determine the output values for channel 2 would be greatly appreciated.
    Attached is
    1. The single channel peak detection (which works)
    2. 2 channel peak detection (the .vi I need help on)
    Attachments:
    Single Channel Peak Detection.vi ‏33 KB
    2 channel peak detection.vi ‏22 KB

     The VI is broken because you are connecting an array of waveforms to VIs that expect a single waveform.  You need to use Index Array to break out one or the other waveform to feed to the other VIs.
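     In text form, the approach (independent of the attached VIs) might look like the following C++ sketch: detect peaks on channel 1, keep the resulting locations, then index channel 2's sample array at those positions. Rounding the fractional peak locations to the nearest sample is an assumption made for simplicity.

     #include <cmath>
     #include <cstddef>
     #include <vector>

     // Given fractional peak locations found on channel 1 and the raw samples of
     // channel 2, return channel 2's value at (the nearest sample to) each peak time.
     std::vector<double> channel2_at_channel1_peaks(const std::vector<double>& peak_locations,
                                                    const std::vector<double>& ch2)
     {
         std::vector<double> values;
         values.reserve(peak_locations.size());
         for (double loc : peak_locations) {
             if (loc < 0.0) continue;                                      // ignore invalid locations
             std::size_t idx = static_cast<std::size_t>(std::lround(loc)); // nearest sample
             if (idx < ch2.size())
                 values.push_back(ch2[idx]);
         }
         return values;
     }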

  • Simple peak detect

    I have a waveform that I need to detect peaks & valleys on. I use the peak detector VI.
    Attached is my VI with an indicative array of values that I will
    encounter. The peak detector correctly finds the 3 peaks but doesn't
     find the 2 valleys in the middle - I don't understand why.
    Those are the 2 I'm most interested in  - the first & last are just tails of the waveform.
    Any help would be greatly appreciated,
    thanks
    ak
    Attachments:
    peak detect.vi ‏22 KB

    If I open your vi and set the peak threshold and valley threshold to 100, the peaks and valleys seem to be found.
    Randall Pursley

  • Peak detection( take 100 ms samples?)

    Hi...
    I have a problem using peak detection under Analysis=>waveform monitoring.
     I want to acquire real ECG data and detect the peaks in it. Then I want to store the index of each peak into an array so that I can measure the time difference between two peaks.
     My problem is that when I use peak detection, I cannot get the result described above. Before this I tested the peak detector and found that it seems to detect peaks over a 100 ms time interval only, so for a waveform with f = 100 Hz it detects only 10 peaks. Is it correct that this VI takes 100 ms of samples for detection? Correct me if I'm wrong.
     And since I want to get the real data from the ECG hardware, is there any method to measure the index of occurrence of all peaks and store them into an array (not only for 100 ms)?
    -I saw the formula in peak detector help :
    To view the locations in terms of time, use the following equation.
    Time Locations[i] = t0 + dt*Locations[i]
    What does [i] here represent?
    Thanks for the help...
    Regards,
    Rismi *newbie...:d
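     A minimal sketch of keeping peak indexes across blocks, assuming a fixed dt and a running count of samples already processed (both placeholders): each block's locations are relative to that block, so the running offset has to be added before converting to time with t0 + dt*location; the interval between consecutive peaks then follows directly.

     #include <cstddef>
     #include <vector>

     // Accumulate peak locations across data blocks. 'block_locations' are indexes
     // relative to the current block; 'samples_so_far' is the number of samples in
     // all previous blocks. Times follow t[i] = t0 + dt * global_location[i].
     void append_block_peaks(const std::vector<double>& block_locations,
                             double samples_so_far, double t0, double dt,
                             std::vector<double>& all_peak_times)
     {
         for (double loc : block_locations)
             all_peak_times.push_back(t0 + dt * (samples_so_far + loc));
     }

     // Interval between consecutive peaks (for ECG data, this is the beat-to-beat interval).
     std::vector<double> peak_intervals(const std::vector<double>& peak_times)
     {
         std::vector<double> intervals;
         for (std::size_t i = 1; i < peak_times.size(); ++i)
             intervals.push_back(peak_times[i] - peak_times[i - 1]);
         return intervals;
     }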

    Hi...
     Thanks, now I understand peak detection for a simulated signal.
     If I use this VI to detect peaks in ECG data (from the DAQ or recorded data), it detects the peak locations of the current block of data. What does "current block of data" represent in this case?
     I attach my simple program to detect the indexes of the peaks in my data, but I cannot get the desired result. When I read the number of peaks detected, the value is either 0 or 1. I don't know how to fix this problem. Can anyone give me advice?
    Attachments:
    retrieveSunday22.vi ‏161 KB
