Elapsed time between 2 points
Hello,
I'm trying to measure the time between 2 datapoints.
When data acquisition begins, the time should be saved, and again when the signal reaches 90% of its maximum.
Subtracting those two times gives the elapsed time.
But I'm not quite sure how to do this... I was thinking of using flat sequences.
Are you constantly capturing a signal? What is your sample rate?
What you need to do first is capture the signal. You can then find the maximum value with the Array Max & Min function. Calculate your 90% mark (0.9 * Max). Search the data until you find a value greater than or equal to that mark, and note its sample number. Your time is then the number of samples to the 90% mark divided by the sample rate (in Samples/Second).
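The search described above can be sketched in Python (names are illustrative; it assumes the captured signal is already in an array):

```python
def time_to_90_percent(samples, sample_rate_hz):
    """Return the time (s) from the start of acquisition until the
    signal first reaches 90% of its maximum value."""
    threshold = 0.9 * max(samples)      # 90% of Array Max
    for i, value in enumerate(samples):
        if value >= threshold:          # first sample at/above the mark
            return i / sample_rate_hz   # samples / (samples per second)
    return None                         # threshold never reached

# A ramp sampled at 1000 S/s: 90% of the max (99) is 89.1,
# first reached at sample index 90 => 90 / 1000 = 0.09 s.
print(time_to_90_percent(list(range(100)), 1000.0))  # 0.09
```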
Similar Messages
-
When my process stops, I am reading an array of tags (datapoints) and writing the max and average to memory tags for data logging. However, when viewing the data, the elapsed time between cycles spreads the data out unevenly. It could be 90 seconds between cycles or maybe two hours or longer. Is there a way to convert the time-axis data to be just consecutive datapoints? It would be like logging data based on a particular condition happening rather than time-based trending. Should I try to use the data set logger examples instead? I would prefer to use the built-in datalogging features rather than writing to databases.
You could export your data to a spreadsheet file and then write it again into a second database using this example program in the DevZone:
http://zone.ni.com/devzone/conceptd.nsf/webmain/5A921A403438390F86256B9700809A53?opendocument
Using this program (if you don't want to modify it, which would take a reasonable amount of time, especially if you are not familiar with VI-Based Server), you would have to generate a column in your spreadsheet file to be the timestamp; it would be an artificial timestamp.
What you could do in your application is first save the data to file, then read it back from file, substitute the timestamp column with the "artificial" one, and then write it to the database; that way you would not need to modify this program.
However, if you have the time and are willing to work with VI-Based Server, you could try to modify the example program to adapt it for your purposes.
I hope it helps
Good Luck
Andre Oliveira
Applications Engineer
National Instruments -
- Using LabVIEW version 6.1, is there any way to change the "Time Between Points" indicator from (HH.MM.SS) to only (mm.ss), or perhaps only (.ss)?
- Need to set the data sampling rate to capture every 5 milliseconds, but the default is always 20 or greater, even when the "Small Loop Delay" variable is adjusted down.
Thank you in advance.
I have no idea what the "Time Between Points" and "Small Loop Delay" tools are. If this is some code you downloaded, you should provide a link to it. And if you want to acquire analog data every 5 milliseconds from a DAQ board, that is possible with just about every DAQ board and is not related to the version of LabVIEW. You simply have to set the sample rate of the DAQ board to 200 samples/sec. If it's digital data, then there will be a problem getting consistent 5 msec data.
-
How to find the elapsed time between 2 events ?
I want to use the Robot class, and I have registered all events in a file, but when I use the Robot it isn't synchronous. So I need to find the elapsed time between two events to reproduce them with the delay method of the Robot class.
Thanks
It sounds like you want to reproduce the events with varying time between the events? This is a good idea for a couple of reasons: 1) it's possible to enqueue events so quickly that Robot gets ahead of the Java GUI, and 2) every human does their mouse/keyboard activity at varying (and relatively slow, relative to the computer that is) rates of speed. Varying the speed of event reproduction makes for a more realistic interaction.
Since you can't control the amount of time between the call to Robot and when the event arrives in the application, you can't be ultimately precise about this. You'll have to accept a teensy bit of slop in the process and just live with it.
That is ... you can do something like robot.method1() .. Thread.sleep(1) .. robot.method2() .. Thread.sleep(1) ... and that will give slight delays. If you vary the value for Thread.sleep you can vary the event reproduction speed. Early in the development of Robot I experimented with something like this - I set up an EventQueue listener to capture all mouse/keyboard events (remember that each comes with a timestamp) and then reproduced events using the time intervals from the captured events to control the Thread.sleep times between Robot calls. It worked pretty well, and the mouse would dance around in the same way I moved the mouse around.
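The capture-and-replay idea can be sketched like this (a stand-in `action` replaces the actual Robot call; the event format and names are illustrative):

```python
import time

def replay(events, action):
    """Replay captured events, preserving the recorded time between them.
    `events` is a list of (timestamp_seconds, payload) pairs; `action`
    stands in for the Robot call that reproduces one event."""
    prev_ts = None
    for ts, payload in events:
        if prev_ts is not None:
            time.sleep(ts - prev_ts)   # recorded gap drives the delay
        action(payload)
        prev_ts = ts

# Hypothetical captured stream: three events, 10 ms apart.
captured = [(0.00, "press"), (0.01, "drag"), (0.02, "release")]
replay(captured, lambda p: print(p))
```

Scaling the recorded gaps before sleeping would speed up or slow down the whole reproduction uniformly.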
- David -
To calculate elapsed time between two timestamp attributes
Hello,
I have two timestamp attributes (create_tmstmp & elapsed_tmstmp) in a table.
It has two rows and I need the difference in seconds between these two attributes.
The below query is returning negative value and incorrect value.
Any help is appreciated.
Row 1:
Create_tmstmp : 2/2/2010 9:53:15.832 PM
Elapsed_tmstmp : 2/3/2010 9:49:47.527 AM
Row 2:
Create_tmstmp : 2/3/2010 5:35:47.614 AM
Elapsed_tmstmp : 2/3/2010 11:03:15.937 AM
Select
( (extract(day from elapsed_tmstmp )-extract(day from create_tmstmp))*86400+
(extract(hour from elapsed_tmstmp )-extract(hour from create_tmstmp))*3600+
(extract(minute from elapsed_tmstmp)-extract(minute from create_tmstmp))*60+
(extract(second from elapsed_tmstmp)-extract(second from create_tmstmp))*1000) completed_tmstmp
From table_a;
The output is :
completed_tmstmp
74655
-11997
Thanks.
Edited by: solsam on Feb 4, 2010 11:57 AM
Hi,
The problem with cast to date is
SQL> select to_char(cast(to_timestamp('2/2/2010 9:53:15.832 PM','mm/dd/yyyy hh:mi:ss.ff pm') as date),'mm/dd/yyyy hh:mi:ss pm') from dual
2 ;
TO_CHAR(CAST(TO_TIMEST
02/02/2010 09:53:16 pm
so cast rounds it up, resulting in:
SELECT (
CAST (TO_TIMESTAMP ('2/2/2010 9:53:15.832 PM', 'mm/dd/yyyy hh:mi:ss.ff pm') AS DATE)
- CAST (TO_TIMESTAMP ('2/2/2010 9:53:14.432 PM', 'mm/dd/yyyy hh:mi:ss.ff pm') AS DATE)
) * 86400 x
FROM DUAL
X
2
A more exact answer would use TO_CHAR and TO_DATE to convert:
SELECT (
TO_DATE (TO_CHAR (TO_TIMESTAMP ('2/2/2010 9:53:15.832 PM', 'mm/dd/yyyy hh:mi:ss.ff pm'),
'mm/dd/yyyy hh:mi:ss pm'),
'mm/dd/yyyy hh:mi:ss pm')
- TO_DATE (TO_CHAR (TO_TIMESTAMP ('2/2/2010 9:53:14.432 PM', 'mm/dd/yyyy hh:mi:ss.ff pm'),
'mm/dd/yyyy hh:mi:ss pm'),
'mm/dd/yyyy hh:mi:ss pm')
) * 86400 x
FROM DUAL
X
1
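As a cross-check outside the database: subtracting the full timestamps (rather than EXTRACTing each component separately, which is what made the original query go negative) gives the expected values for both of the question's rows. A quick sketch:

```python
from datetime import datetime

fmt = "%m/%d/%Y %I:%M:%S.%f %p"
rows = [
    ("2/2/2010 9:53:15.832 PM", "2/3/2010 9:49:47.527 AM"),   # row 1
    ("2/3/2010 5:35:47.614 AM", "2/3/2010 11:03:15.937 AM"),  # row 2
]
for create, elapsed in rows:
    # Subtract complete timestamps; the fractional seconds survive.
    delta = datetime.strptime(elapsed, fmt) - datetime.strptime(create, fmt)
    print(delta.total_seconds())   # 42991.695 then 19648.323
```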
Edit:
Massimo Ruocchio's answer actually works better.
-AC
Edited by: user12026137 on Feb 9, 2010 12:45 PM -
I have a program that monitors a pressure reading from a transducer. The initial readings are logged at 10-second intervals until the pressure increases by a set value, at which point the logging interval increases to 100 points per second. Currently, when I graph the data, the elapsed time on the x-axis is inaccurate because the computer doesn't know when the 10-second logging stopped and the 100 points per second began. Is there a way to keep track of the time the points are being logged? Some sort of method of having the elapsed time between points logged so this can be used to create the x-axis?
Thanks in Advance!!
LabVIEW 2012 - Windows 7
CLAD
Hello,
It sounds like you'd like to have two plots on the same graph, but the x-axis scale and values are different for your two plots. You can control how data is plotted to a waveform graph by bundling the array of data you are plotting with an x0 (initial value) and a delta-x (distance between samples). The order for the bundle (into a cluster, of course) should be: x0 value, delta-x value, y-array of data.
I have attached an example to illustrate this by allowing you to change the x0 and delta-x for two plots dynamically (the Generate Sine Pattern.vi is used as a subVI, where the other is the top level VI); I think this is exactly what you are looking for!
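Numerically, the x-axis a waveform graph derives from that (x0, delta-x, y-array) cluster is just x0 plus multiples of delta-x; a minimal sketch:

```python
def x_axis(x0, dx, n):
    """The x values a waveform graph derives from an (x0, dx, y) bundle."""
    return [x0 + i * dx for i in range(n)]

# Segment starting at t = 0 s with samples every 0.5 s.
print(x_axis(0.0, 0.5, 4))  # [0.0, 0.5, 1.0, 1.5]
```

Changing dx between logging phases (10 s vs 0.01 s here) is exactly what re-bundling with a new delta-x accomplishes.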
I hope this helps!
Best Regards,
JLS
Sixclear
Attachments:
Example1.zip 28 KB -
Calculating elapsed time (in minutes) from a TIMESTAMP field in a table
I have a table of events that contains a field, CREATION_DATE, that is defined as NOT NULL TIMESTAMP(6), and the value is normally in the format 'DD-MON-YYYY HH:MM:SS.FFFFFF xM'. I want to SELECT only those rows where the elapsed time between the CREATION_DATE and the local time (US/EASTERN) is greater than 15 minutes. And, I'd like the output to show the values for the elapsed time in days, hours, minutes, and seconds.
Edited by: user12301147 on Dec 2, 2009 9:16 AM
Hi,
It looks like you have to adjust for time zones.
I'm not sure what you mean by "2009-DEC-02 13:59:34.316, which is in actual time 2009-DEC-02 08:59:34.316." Aren't both actual times, just in different time zones? I'm also a little confused about the results: even if it's treating 6 hours 18 minutes as 0 hours 18 minutes, either way it's over 15 minutes.
Convert either creation_date or SYSTIMESTAMP to the time zone of the other. (It looks like SYSTIMESTAMP is indeed Eastern Time, UTC - 5:00).
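A sketch of that first suggestion (the Eastern offset is hard-coded as UTC-5 for illustration; real code would need DST handling):

```python
from datetime import datetime, timedelta, timezone

eastern = timezone(timedelta(hours=-5))   # assumed fixed offset, no DST

def older_than_15_min(creation_date, now):
    """True when more than 15 minutes separate creation_date from now.
    Both values carry a time zone, so the comparison is unambiguous."""
    return now - creation_date > timedelta(minutes=15)

now = datetime(2009, 12, 2, 9, 16, 0, tzinfo=eastern)
created = datetime(2009, 12, 2, 13, 59, 34, tzinfo=timezone.utc)  # 08:59:34 Eastern
print(older_than_15_min(created, now))  # True (16 min 26 s elapsed)
```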
Or, if they are always a fixed time apart, build that difference into the comparison. For example:
WHERE SYSTIMESTAMP - creation_date > TO_DSINTERVAL ('-0 05:45:00') -
Date Utility - calc age, elapsed time...
I need to calculate elapsed time between 2 dates, for example calc age based on DOB and Today. There must be a utility out there somewhere (free!). I'm sure I'm not the first one with this requirement.
Try this:
import java.util.*;
public class DateSub {
    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.set(1970, 0, 1);                       // months are zero-based: January 1, 1970
        long age = cal.getTime().getTime();        // epoch millis of the DOB
        age = System.currentTimeMillis() - age;    // elapsed milliseconds
        System.out.println("age = " + age + " milliseconds");
        age /= 1000;
        System.out.println("age = " + age + " seconds");
        age /= (3600 * 24);
        System.out.println("age = " + age + " days");
        long years = age / 365;                    // approximate: ignores leap years
        long days = age % 365;
        System.out.println("age = " + years + " years " + days + " days");
    }
}
-
Hi,
Oracle version: 8.1.7.4
I ran DBMS_STATS.GATHER_SCHEMA_STATS (OwnName => 'TOM') and now I need the elapsed time between one table's LAST_ANALYZED and the next.
for example in dba_tables I have:
TABLE_NAME LAST_ANALYZED
TAB1 2012/10/01 18:00:00
TAB2 2012/10/01 19:00:00
TAB3 2012/10/01 19:30:00
TAB4 2012/10/01 19:40:20
I'd like to get this output:
TABLE_NAME_FIRST TABLE_NAME_SECOND ELAPSED_TIME
TAB1 TAB2 1 h
TAB2 TAB3 30 min
TAB3 TAB4 10 min 20 sec
Does anyone know how I can write this query?
Thanks in advance!
Oracle version: 8.1.7.4
You're in luck, the analytic function you need is available in your version:
SQL> select table_name as table1
2 , lead(table_name) over(order by last_analyzed) as table2
3 , numtodsinterval(
4 lead(last_analyzed) over(order by last_analyzed)
5 - last_analyzed
6 , 'DAY'
7 ) as elapsed
8 from user_tables
9 where table_name not like 'SYS%'
10 ;
TABLE1 TABLE2 ELAPSED
TEST_DATA XML_DOCUMENT_TABLE +000000001 00:00:03.000000000
XML_DOCUMENT_TABLE PKT_MASTER +000000005 23:59:56.000000000
PKT_MASTER TEST2 +000000000 00:00:00.000000000
TEST2 EVAL_IDX_TAB +000000000 11:28:04.000000000
EVAL_IDX_TAB STRUCTURED_INDEX_TEST +000000000 12:32:02.000000000
STRUCTURED_INDEX_TEST PATHS +000000000 00:00:00.000000000
PATHS XML_T2_PATH_TABLE +000000000 11:50:17.000000000
[...] -
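The pairing that LEAD performs can also be sketched outside the database (table names and times taken from the question; the tuple layout is illustrative):

```python
from datetime import datetime, timedelta

def elapsed_between(rows):
    """Pair each table with the next one (ordered by LAST_ANALYZED) and
    report the gap, mimicking LEAD(...) OVER (ORDER BY last_analyzed)."""
    rows = sorted(rows, key=lambda r: r[1])
    return [(a[0], b[0], b[1] - a[1]) for a, b in zip(rows, rows[1:])]

stats = [
    ("TAB1", datetime(2012, 10, 1, 18, 0, 0)),
    ("TAB2", datetime(2012, 10, 1, 19, 0, 0)),
    ("TAB3", datetime(2012, 10, 1, 19, 30, 0)),
    ("TAB4", datetime(2012, 10, 1, 19, 40, 20)),
]
for first, second, gap in elapsed_between(stats):
    print(first, second, gap)   # TAB1 TAB2 1:00:00 / TAB2 TAB3 0:30:00 / TAB3 TAB4 0:10:20
```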
Difference between CPU and elapsed time in tkprof
Hi all,
I found a huge difference between CPU and elapsed time in tkprof. Can you please advise me on this issue?
>call count cpu elapsed disk query current rows
==================================================
Parse 1 0.12 1.36 2 11 0 0
Execute 1 14.30 720.20 46548 190520 205 100
Fetch 0 0.00 0.00 0 0 0 0
======================================================
total 2 14.42 721.56 46550 190531 205 100
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times waited Max. Wait Total Waited
===========================================
db file sequential read 46544 0.49 632.12
db file scattered read 1 0.00 0.00
My select statement:
SELECT cst.customer_id
,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0) newlate
,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
FROM ar_receivable_applications_all ra
,ar_cash_receipts_all cr
,ar_payment_schedules_all ps
,zz_ar_customer_summary_all cst
WHERE ra.cash_receipt_id = cr.cash_receipt_id
AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
AND ra.status = 'APP'
AND ra.display = 'Y'
AND ra.applied_payment_schedule_id = ps.payment_schedule_id
AND ps.customer_id = cst.customer_id
AND NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
GROUP BY cst.customer_id;
Thanks,
Anu
user653066 wrote:
Hi All
i found huge diffrence between cpu and elapsed time in tkprof. can you please advice me on this issue.
call count cpu elapsed disk query current rows
================================================================================
Parse 1 0.12 1.36 2 11 0 0
Execute 1 14.30 720.20 46548 190520 205 100
Fetch 0 0.00 0.00 0 0 0 0
================================================================================
total 2 14.42 721.56 46550 190531 205 100
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times waited Max. Wait Total Waited
===========================================================================
db file sequential read 46544 0.49 632.12
db file scattered read 1 0.00 0.00
SELECT cst.customer_id
,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0) newlate
,NVL(SUM( DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
FROM ar_receivable_applications_all ra
,ar_cash_receipts_all cr
,ar_payment_schedules_all ps
,zz_ar_customer_summary_all cst
WHERE ra.cash_receipt_id = cr.cash_receipt_id
AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
AND ra.status = 'APP'
AND ra.display = 'Y'
AND ra.applied_payment_schedule_id = ps.payment_schedule_id
AND ps.customer_id = cst.customer_id
AND NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
group by cst.customer_id ;
Toon Koppelaars seems to have pinpointed the problem. Where are the 74 seconds of unaccounted-for time (I might have calculated it incorrectly, but I arrived at 88.08 seconds of unaccounted-for time: 721.56 total - 1.36 parse - 632.12 db file sequential reads)?
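The arithmetic behind those two candidate figures can be checked directly (a quick sketch; which components to subtract from the elapsed time is the judgment call):

```python
# Figures from the TKPROF summary quoted above.
total_elapsed  = 721.56   # total elapsed seconds
total_cpu      = 14.42    # total CPU seconds
parse_elapsed  = 1.36     # parse elapsed seconds
seq_read_waits = 632.12   # db file sequential read, total waited

# Elapsed minus parse and the single-block read waits:
print(round(total_elapsed - parse_elapsed - seq_read_waits, 2))   # 88.08
# Elapsed minus all CPU and the single-block read waits:
print(round(total_elapsed - total_cpu - seq_read_waits, 2))       # 75.02
```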
It is interesting that the maximum wait for a single block read reported by TKPROF is 0.49 seconds - this might be an indication of excessive competition for the server's CPU - processes are waiting in the CPU run queue, and therefore not on the CPU. As Toon indicated, 632.12 of the 721.56 seconds were spent waiting for single block reads to complete with 46,544 blocks read. Note also that the query executed at dep=1, and TKPROF may be providing misleading information about what actually happened during those executions. An example of misleading information:
CREATE TABLE T11 (
C1 NUMBER,
C2 VARCHAR2(30));
CREATE TABLE T12 (
C1 NUMBER,
C2 VARCHAR2(30));
CREATE TABLE T13 (
C1 NUMBER,
C2 VARCHAR2(30));
CREATE TABLE T14 (
C1 NUMBER,
C2 VARCHAR2(30));
CREATE OR REPLACE TRIGGER HPM_T11 AFTER
INSERT OR DELETE OR UPDATE OF C1 ON T11
REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
BEGIN
IF INSERTING THEN
INSERT INTO T12
SELECT
ROWNUM,
DBMS_RANDOM.STRING('A',25)
FROM
DUAL
CONNECT BY
LEVEL <= 100;
END IF;
END;
CREATE OR REPLACE TRIGGER HPM_T12 AFTER
INSERT OR DELETE OR UPDATE OF C1 ON T12
REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
BEGIN
IF INSERTING THEN
INSERT INTO T13
SELECT
ROWNUM,
DBMS_RANDOM.STRING('A',25)
FROM
DUAL
CONNECT BY
LEVEL <= 100;
END IF;
END;
CREATE OR REPLACE TRIGGER HPM_T13 AFTER
INSERT OR DELETE OR UPDATE OF C1 ON T13
REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
BEGIN
IF INSERTING THEN
INSERT INTO T14
SELECT
ROWNUM,
DBMS_RANDOM.STRING('A',25)
FROM
DUAL
CONNECT BY
LEVEL <= 100;
END IF;
END;
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_FIND_ME2';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
SET TIMING ON
INSERT INTO T11 VALUES (1,'MY LITTLE TEST CASE');
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
The partial TKPROF output:
INSERT INTO T11
VALUES
(1,'MY LITTLE TEST CASE')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 8 0 0
Execute 1 0.00 0.00 0 9788 29 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 9796 29 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56
Rows Row Source Operation
0 LOAD TABLE CONVENTIONAL (cr=9788 pr=7 pw=0 time=0 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID : 6asaf110fgaqg
INSERT INTO T12 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
BY LEVEL <= 100
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.04 0.09 0 2 130 100
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.04 0.09 0 2 130 100
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56 (recursive depth: 1)
Rows Row Source Operation
0 LOAD TABLE CONVENTIONAL (cr=9754 pr=7 pw=0 time=0 us)
100 COUNT (cr=0 pr=0 pw=0 time=0 us)
100 CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
1 FAST DUAL (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
SQL ID : db46bkvy509w4
INSERT INTO T13 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
BY LEVEL <= 100
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 100 1.31 1.27 0 93 10634 10000
Fetch 0 0.00 0.00 0 0 0 0
total 101 1.31 1.27 0 93 10634 10000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56 (recursive depth: 2)
Rows Row Source Operation
0 LOAD TABLE CONVENTIONAL (cr=164 pr=0 pw=0 time=0 us)
100 COUNT (cr=0 pr=0 pw=0 time=0 us)
100 CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
1 FAST DUAL (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
SQL ID : 6542yyk084rpu
INSERT INTO T14 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
BY LEVEL <= 100
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 10001 41.60 41.84 0 8961 52859 1000000
Fetch 0 0.00 0.00 0 0 0 0
total 10003 41.60 41.84 0 8961 52859 1000000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56 (recursive depth: 3)
Rows Row Source Operation
0 LOAD TABLE CONVENTIONAL (cr=2 pr=0 pw=0 time=0 us)
100 COUNT (cr=0 pr=0 pw=0 time=0 us)
100 CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
1 FAST DUAL (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
log file switch completion 2 0.07 0.07
********************************************************************************
In the above, note that the "INSERT INTO T11" is reported as completing in 0 seconds, but it actually required roughly 42 seconds - and that would be visible by manually reviewing the resulting trace file. Also note that the log file switch completion wait was not reported for the "INSERT INTO T11" even though it impacted the execution time.
Back to the possibility of CPU starvation causing lost time. Another test with an otherwise idle server, followed by a second test with the same server having 240 other processes fighting for CPU resources (a simulated load).
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_NO_LOAD';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
SET TIMING ON
SELECT
COUNT(*)
FROM
T14;
SELECT
SYSDATE
FROM
DUAL;
SQL> SELECT
2 COUNT(*)
3 FROM
4 T14;
COUNT(*)
1000000
Elapsed: 00:00:01.37
With no load the COUNT(*) completed in 1.37 seconds. The TKPROF output looks like this:
SQL ID : gy8nw9xzyg3bj
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
NVL(SUM(C2),:"SYS_B_1")
FROM
(SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
:"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
:"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.01 0.84 523 172 1 1
total 3 0.01 0.84 523 172 1 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56 (recursive depth: 1)
Rows Row Source Operation
1 SORT AGGREGATE (cr=172 pr=523 pw=0 time=0 us)
8733 TABLE ACCESS SAMPLE T14 (cr=172 pr=523 pw=0 time=0 us cost=2 size=12 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 3 0.02 0.04
db file parallel read 1 0.31 0.31
db file scattered read 52 0.03 0.47
SQL ID : 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
sample_size, minimum, maximum, distcnt, lowval, hival, density, col#,
spare1, spare2, avgcln
from
hist_head$ where obj#=:1 and intcol#=:2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.06 2 2 0 0
total 3 0.00 0.06 2 2 0 0
Misses in library cache during parse: 0
Optimizer mode: RULE
Parsing user id: SYS (recursive depth: 2)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=2 pr=2 pw=0 time=0 us)
0 INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=2 pw=0 time=0 us)(object id 413)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 2 0.02 0.04
SELECT
COUNT(*)
FROM
T14
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 1 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.03 0.43 6558 6983 0 1
total 4 0.03 0.44 6559 6984 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56
Rows Row Source Operation
1 SORT AGGREGATE (cr=6983 pr=6558 pw=0 time=0 us)
1000000 TABLE ACCESS FULL T14 (cr=6983 pr=6558 pw=0 time=0 us cost=1916 size=0 card=976987)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1 0.02 0.02
SQL*Net message to client 2 0.00 0.00
db file scattered read 111 0.02 0.38
SQL*Net message from client 2 0.00 0.00
Note that TKPROF reported that it only required 0.44 seconds for the query to execute, while the SQL*Plus timing indicates that the SQL statement required 1.37 seconds. The SQL optimization (parse) with the dynamic sampling query accounted for the remaining time, yet TKPROF provided no indication that this was the case.
Now the query with 240 other processes competing for CPU time:
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_WITH_LOAD';
SELECT COUNT(*) FROM T14;
SELECT
SYSDATE
FROM
DUAL;
SQL> SELECT COUNT(*) FROM T14;
COUNT(*)
1000000
Elapsed: 00:00:59.03
The query this time required just over 59 seconds. The TKPROF output:
SQL ID : gy8nw9xzyg3bj
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
NVL(SUM(C2),:"SYS_B_1")
FROM
(SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
:"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
:"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.28 423 69 0 1
total 3 0.00 0.28 423 69 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 56 (recursive depth: 1)
Rows Row Source Operation
1 SORT AGGREGATE (cr=69 pr=423 pw=0 time=0 us)
8733 TABLE ACCESS SAMPLE T14 (cr=69 pr=423 pw=0 time=0 us cost=2 size=12 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 54 0.01 0.27
db file sequential read 2 0.00 0.00
SQL ID : 7h04kxpa13w1x
SELECT COUNT(*)
FROM
T14
call count cpu elapsed disk query current rows
Parse 1 0.00 0.03 1 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.06 58.71 6551 6983 0 1
total 4 0.06 58.74 6552 6984 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56
Rows Row Source Operation
1 SORT AGGREGATE (cr=6983 pr=6551 pw=0 time=0 us)
1000000 TABLE ACCESS FULL T14 (cr=6983 pr=6551 pw=0 time=0 us cost=1916 size=0 card=976987)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1 0.02 0.02
SQL*Net message to client 2 0.00 0.00
db file scattered read 110 1.54 58.59
SQL*Net message from client 1 0.00 0.00
Note in the above that the max wait for the db file scattered read is 1.54 seconds due to the extra CPU competition - about 3 times longer than your max wait for a single block read. On your database platform with single block reads, it might be possible that the time in the CPU run queue is not always counted in the db file sequential read wait time or the CPU wait time - what if your operating system is slow at returning timing information to the database instance due to CPU saturation? This might explain the 74 (or 88) lost seconds.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
Edited by: Charles Hooper on Aug 28, 2009 10:26 AM
Fixing formatting problems -
Measuring time between activation of two boolean
Hello
I want to measure the time elapsed between the activation of two boolean variables/indicators. In my code I have a DAQ acquiring 2 digital signals, which trigger 2 corresponding boolean indicators. Now I am trying to measure the time between the event when my first boolean indicator gets the signal and the event when my second boolean gets the signal. I am attaching my code. I checked the DAQ and my indicators and everything works fine, but when I try running the code with the 'elapsed time' part, it's not running. Any help will be appreciated.
Thank You.
Attachments:
Test Code.vi 45 KB
So, feel free to give more detail. I'll just give you a couple thoughts:
1) The primary issue with your code is that it probably doesn't do what you think it does. As this VI runs, it will get one data point from the DAQmx routine, then pretty much stay in the inner while loop, since the event structure has an indefinite timeout. Worse, I don't think changing the value of the boolean controls would trigger the value-changed event anyway. In the end, you don't even need the Event Structure to do what you want, nor do you need the inner while loop. Just check the values on each iteration of the outer while loop instead and do what you need to do with those.
2) You are acquiring only one data point at a time. That is OK if your events are particularly slow with respect to the effective sampling rate of the loop (which might be able to pick up signals that change on the order of milliseconds or perhaps a bit less). If your signals are faster, you need to think about reading lots of samples at a time (arrays of booleans) and properly triggering your acquisition close to the event it needs to pick up.
3) If your signal is a pulse, make sure that the pulse width of the signal is much longer than 1 / sampling rate. If it isn't, you could miss the pulse entirely with this approach.
4) For faster signals (as long as you have a quick enough sampling rate), the best thing is always to use a counter approach. Many of the DAQ cards and peripherals that NI offers have counters that can be driven with the sample clock (one of the fastest clocks on the device) or an even faster time base, and the 'events' can then be used to trigger acquisitions of the counters. Meaning you get exactly what you want: data points that correspond to times when events occur. -
Comparing two elapsed timer readings
Hi everyone,
I'm a new user of LabVIEW, so I'm teaching myself as I go along, really. I've come up against a problem though. I'm trying to compare two timer readings from elapsed timers. Eventually I am hoping to swap these timers for a pair of light beams input through a DAQ chassis. Initially, though, I wanted to get the content of my program correct without having to have the equipment connected to my laptop.
I have attached my progress so far; apologies if my query has an obvious solution. As I say, I'm a bit of a beginner! How would I take the two elapsed timers, log the times found off the timers, compare which one has the greater value, and consequently start a new timer depending on this? I would then like to log this time in data form.
I've tried to compare the two elapsed timers with greater-than booleans etc., but I can't seem to get it to work.
As I say, I hope to use this idea in a time-gate sort of application, where which light beam is broken first determines which timer is started. Hope my explanation is clear.
Any help is much appreciated,
Thanks very much!
Attachments:
FYP.vi 15 KB
PT-
Looking at your code, you have a few problems that you will need to address. Read the LabVIEW help section titled "Caveats and Recommendations when Using Events in LabVIEW." You have some latching boolean controls that are incorrectly placed. The terminals should be in the event case that handles the value change event.
Why two event structures? One structure is a better idea: you can handle all your events with one loop (and avoid stopping the loops independently - this can lead to a condition where the VI becomes unresponsive).
The lower loop compares the start times to two different timestamps. This introduces potential errors, since there is a timing relationship between the calls to the system timer. And the lower case structures are rather insane - in English they resolve to "if started, return how long it's been since started, else return 0." Check the NI website; there is a CLD practice exam solution (Car Wash) that has a pretty good example and discussion of using the Elapsed Time Express VI. It might point you in a better direction than where you are going.
Why not use the time terminals in the event structure? That's the time the event fired. But you might want the time the event was handled - I don't know. (Hint: comment your code and write SOMETHING in the VI documentation. Eventually you'll be writing more complex code, so start developing good habits now.)
Come on back when you've looked at some of the examples, and I'd be glad to help you more.
Jeff -
LabVIEW/SignalExpress: How can I automate measuring the time between two pulses?
Hi everyone, bit of a newbie here so please bear with me.
I'm a student at a university conducting a muon decay experiment with an oscilloscope connected to some photomultipliers. To summarize, if a muon enters the detector it will create a very small width pulse (a few ns). Within a period of 10µs it may decay, creating a second pulse. The oscilloscope triggers on the main pulse 5-15 times per second, and a decay event happens roughly 1-2 times per minute. I am trying to collect 10 hours of data (roughly 1500-2000 decay events) and measure the time it takes for each decay.
I've been able to set recording conditions in SignalExpress that starts recording on the first pulse and stops recording on the last. The Tektronix TDS 1012 oscilloscope however feeds 2500 points of data from this snapshot into a text file (for use in excel or other software). Even if I perfectly collected the data, I would have 100,000+ data points and it would be too much to handle. I don't know how (or if it's possible) to reduce the sample size.
To conclude, using Labview or SignalExpress, I would like to be able to have the software
1. Differentiate between the single pulse detections and double pulse decay events
2. Record only when two pulses appear on the oscilloscope
3. Measure the time between these two pulses and ONLY that to minimize the amount of data recorded.
Any help would be GREATLY appreciated, thanks!
Hi wdavis8,
I am not that familiar with Tektronix, but there should be a place in the dialog you go through when you create the acquisition step where you can specify a sampling rate. That would reduce the number of data points you are seeing, but may also reduce the quality of the data.
If it’s just a matter of that much data being hard to dig through when you have that many points, you could do some analysis on the data after the fact, and then create a new file with only the data you want to look at. For example, you could identify the peaks in the data, and based on the distance between them or the difference in magnitude, selectively write data to a new file.
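As a rough sketch of that idea in plain Python (not SignalExpress code; the threshold, sample rate, and toy trace below are illustrative assumptions): scan the trace for rising threshold crossings, keep only traces with exactly two pulses, and record just the time between them.

```python
def pulse_times(samples, threshold, sample_rate):
    """Return the times (s) at which the signal rises through the threshold."""
    times = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            times.append(i / sample_rate)
    return times

# Toy 2500-point trace at an assumed 1 GS/s: two narrow pulses.
rate = 1e9
trace = [0.0] * 2500
trace[100] = 1.0   # main muon pulse
trace[2100] = 0.8  # decay pulse
crossings = pulse_times(trace, 0.5, rate)
if len(crossings) == 2:                    # keep only double-pulse events
    decay_time = crossings[1] - crossings[0]
    print(decay_time)                      # ~2 µs between the two pulses
```

Writing only `decay_time` per event to the output file turns 2500 points per snapshot into a single number per decay, which keeps 10 hours of data manageable.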
Here is some information about peak detection in LabVIEW:
http://www.ni.com/white-paper/3770/en/
You could also do some downsampling on the data to get fewer data points:
https://decibel.ni.com/content/docs/DOC-23952
https://decibel.ni.com/content/docs/DOC-28976
Those are just a few quick ideas.
Kelsey J
Applications Engineer -
ORA-1555 ORA-3136 errors:: elapsed time vs Query Duration
Dear all,
- My Database version is 11.2.0.2, Solaris.
- We have been having a problem in the production database where the front-end nodes start going up and down for a couple of hours at a time. When node flapping is going on, we get connection-timed-out alerts.
WARNING: inbound connection timed out (ORA-3136)
opiodr aborting process unknown ospid (4342) as a result of ORA-609
opiodr aborting process unknown ospid (4532) as a result of ORA-609
opiodr aborting process unknown ospid (4534) as a result of ORA-609
....
The node flapping has been happening every day this week. For the past 2 days, during or after node flapping, we have also been getting ORA-1555 errors.
Extract from alert log error:
ORA-01555 caused by SQL statement below (SQL ID: g8804k5pkmtyt, Query Duration=19443 sec, SCN: 0x0001.07bd90ed):
SELECT d.devId, d.vendor, d.model, d.productClass, d.oui, d.parentDeviceId, d.created, d.lastModified AS devLastMod, d.customerId, d.userKey1, d.userKey2, d.userKey4, d
.userKey5, d.firmwareFamily, d.softwareVer, d.serialNum, d.ip, d.mac, d.userKey3, d.userKey6, d.provisioningId, d.status, d.classification, d.population, d.name, d.ipRe
solver, d.ipExpirationTime, d.geoLocationId,contact.firstContactTime, ifaces.id, ifaces.type AS ifaceType, ifaces.lastModified AS ifaceLastMod, ifaces.timeoutname, ifac
es.username1, ifaces.password1, ifaces.username2, ifaces.password2, ifaces.connReqUrl, ifaces.connReqScheme, ifaces.srvNonce, ifaces.deviceNonce, ifaces.phoneNumber,ifa
ces.bootstrapSecMethod, ifaces.srvAuthentication, ifaces.deviceAuthentication, ifaces.userPIN, ifaces.networkID, ifaces.omaSessionID, ifaces.portNum, ifaces.mgtIp, ifac
es.cmtsIp, ifaces.mgtReadCommunity, ifaces.mgtWriteCommunity, ifaces.cmtsReadCommunity, ifaces.cmtsWriteCommunity, devto.name AS devtoName, devto.rebootTimeout, devto.sessionInitiation
I ran a Statspack report covering the whole day, and the largest elapsed time it shows is no more than 3739.61 sec (far lower than the Query Duration of 19443 sec in the alert log file). So I would like to know whether there is any correlation between the ORA-3136 errors and the ORA-1555 errors.
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
tTime <= :3 ) AND (endTime IS NULL OR endTime >= :4 )
2773.77 7,787,914 0.00 3.4 3739.61 112,671,645 1909376826
Module: JDBC Thin Client
SELECT d.devId, d.vendor, d.model, d.productClass, d.oui, d.pare
ntDeviceId, d.created, d.lastModified AS devLastMod, d.customerI
d, d.userKey1, d.userKey2, d.userKey4, d.userKey5, d.firmwareFam
ily, d.softwareVer, d.serialNum, d.ip, d.mac, d.userKey3, d.user
SQL> show parameter UNDO_MANAGEMENT
NAME TYPE VALUE
undo_management string AUTO
SQL> show parameter UNDO_RETENTION
NAME TYPE VALUE
undo_retention integer 10800
BR,
Diego
Thank you. Please let me know if this is enough or you need more information.
SQL ordered by Gets DB/Inst: DB01/db01 Snaps: 14835-14846
-> End Buffer Gets Threshold: 100000 Total Buffer Gets: 677,689,568
-> Captured SQL accounts for 73.6% of Total Buffer Gets
-> SQL reported below exceeded 1.0% of Total Buffer Gets
CPU Elapsd Old
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
21,286,248 2,632,793 8.1 3.4 666.73 666.76 3610154549
Module: JDBC Thin Client
SELECT d.devId, d.vendor, d.model, d.productClass, d.oui, d.pare
ntDeviceId, d.created, d.lastModified AS devLastMod, d.customerI
d, d.userKey1, d.userKey2, d.userKey4, d.userKey5, d.firmwareFam
ily, d.softwareVer, d.serialNum, d.ip, d.mac, d.userKey3, d.user
17,029,561 1,176,849 14.5 2.7 417.32 416.73 1909376826
Module: JDBC Thin Client
SELECT d.devId, d.vendor, d.model, d.productClass, d.oui, d.pare
ntDeviceId, d.created, d.lastModified AS devLastMod, d.customerI
d, d.userKey1, d.userKey2, d.userKey4, d.userKey5, d.firmwareFam
ily, d.softwareVer, d.serialNum, d.ip, d.mac, d.userKey3, d.user
17,006,795 37 459,643.1 2.7 367.61 368.95 4045552861
Module: JDBC Thin Client
SELECT d.devId, d.vendor, d.model, d.productClass, d.oui, d.pare
ntDeviceId, d.created, d.lastModified AS devLastMod, d.customerI
d, d.userKey1, d.userKey2, d.userKey4, d.userKey5, d.firmwareFam
ily, d.softwareVer, d.serialNum, d.ip, d.mac, d.userKey3, d.user
Another Statspack report for the whole day shows:
SQL ordered by CPU DB/Inst: DB01/db01 Snaps: 14822-14847
-> Total DB CPU (s): 82,134
-> Captured SQL accounts for 40.9% of Total DB CPU
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
tTime <= :3 ) AND (endTime IS NULL OR endTime >= :4 )
2773.77 7,787,914 0.00 3.4 3739.61 112,671,645 1909376826
Module: JDBC Thin Client
SELECT d.devId, d.vendor, d.model, d.productClass, d.oui, d.pare
ntDeviceId, d.created, d.lastModified AS devLastMod, d.customerI
d, d.userKey1, d.userKey2, d.userKey4, d.userKey5, d.firmwareFam
ily, d.softwareVer, d.serialNum, d.ip, d.mac, d.userKey3, d.user
SQL ordered by Gets DB/Inst: DB01/db01 Snaps: 14822-14847
-> End Buffer Gets Threshold: 100000 Total Buffer Gets: 1,416,456,340
-> Captured SQL accounts for 55.8% of Total Buffer Gets
-> SQL reported below exceeded 1.0% of Total Buffer Gets
CPU Elapsd Old
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
86,354,963 7,834,326 11.0 6.3 2557.34 2604.08 906944860
Module: JDBC Thin Client
SELECT d.devId, d.vendor, d.model, d.productClass, d.oui, d.pare
ntDeviceId, d.created, d.lastModified AS devLastMod, d.customerI
d, d.userKey1, d.userKey2, d.userKey4, d.userKey5, d.firmwareFam
ily, d.softwareVer, d.serialNum, d.ip, d.mac, d.userKey3, d.user
.....
BR,
Diego
Edited by: 899660 on 27-ene-2012 7:43
Edited by: 899660 on 27-ene-2012 7:45 -
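One thing the thread's own numbers already suggest (a back-of-the-envelope check, not a full diagnosis): the failing query ran far longer than the configured undo retention, which is the classic setup for ORA-1555 regardless of the ORA-3136 connection timeouts.

```python
# Figures quoted in the alert log and parameter output above.
query_duration_s = 19443   # ORA-01555 "Query Duration" from the alert log
undo_retention_s = 10800   # configured UNDO_RETENTION

# Undo needed for a consistent read is only kept for roughly
# UNDO_RETENTION seconds (absent guaranteed retention), so a query
# running longer than that is a candidate for ORA-1555.
shortfall = query_duration_s - undo_retention_s
print(shortfall)  # 8643 s -- the query outlived retention by ~2.4 hours
```

So a first step would be either shortening that long-running query or raising UNDO_RETENTION (with undo tablespace sized to match), before chasing a link to the ORA-3136 alerts.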
How can I display the elapsed time of the course using Advanced Actions in Captivate?
I have a Captivate course which is approximately 35 minutes in length. On each slide I would like to display to the user, the current elapsed time.
EXAMPLE:
25/35 minutes complete
The 35 would remain static, so I have been working with the elapsed time system variable in CP: elapsed:$$cpInfoElapsedTimeMS$$
I can't seem to get the variable to properly display the elapsed time in minutes rather than milliseconds. Attached is a screenshot of my advanced action.
Can anyone provide guidance on how I should structure this differently?
I talked about that Timer widget in that blog post and pointed to another one:
http://blog.lilybiri.com/timer-widget-to-stress-your-learners
If you are on CP7, you'll have this widget also as an interaction, which means it is compatible with HTML5 output. And there is also an hourglass interaction with similar functionality, but... I did not blog about that one.
PS: Check Gallery\Widgets to find all widgets. The default path is set to Interactions.
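Widgets aside, the arithmetic behind the "25/35 minutes complete" label is just integer division of the millisecond counter by 60,000. Sketched in Python (cpInfoElapsedTimeMS is the Captivate system variable mentioned above, simulated here as a plain number; the function name is illustrative):

```python
def elapsed_label(elapsed_ms, total_minutes=35):
    """Format the elapsed-time counter as 'NN/35 minutes complete'."""
    minutes = elapsed_ms // 60000  # whole minutes from milliseconds
    return f"{minutes}/{total_minutes} minutes complete"

print(elapsed_label(1_500_000))  # 25 minutes in -> "25/35 minutes complete"
```

In a Captivate advanced action, the same division would be done with the Expression operator into a user variable, which is then inserted into the caption instead of the raw millisecond variable.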