Speeding up execution
I am working on a MIMO channel emulator. The system has a large number of blocks which have to finish execution within the coherence time of the channel, which is a fraction of a millisecond. However, due to the complexity of some of the operations, I have had to use Matlab scripts to execute them. Now the entire block takes around half a second to run, which is far from my goal. Is the invocation of Matlab slowing down the operation? How do I speed up the execution to finish within the coherence time?
Thanks
There are quite a few things you can do to speed up your program. A factor of 3 to 5 is probably achievable. Who knows how much more?
1. You have some duplicated code and a lot of unnecessary code.
2. You do a lot of things the hard and slow way.
3. I do not have Matlab so I cannot test the speed of the scripts. Since you call out to Matlab to run the scripts, I suspect the invocation overhead makes the process slower overall, even if Matlab can do the calculation itself faster.
4. Wires running behind other objects and right to left make a LV program hard to read.
5. At least one part probably does not work. The code executes but may not produce the result you expect.
OK. Specifics.
1. You have constants 6 and 20 several places in the code. If your system ever changes you will need to find and change all of them. It is usually better to have one of each and wire them to all the places used. No speed penalty but a lot of work troubleshooting if something needs to be changed.
2a. Using the Continuous Random.vi to generate 1 sample, as an array, then indexing the sample out takes about 3 times longer than using the Random Number (the Dice) from the Numeric palette. It also produces uniformly distributed random numbers between 0 and 1. There are 8 or 9 of these constructions in your program. Add about 200 ns each time one is called.
2b. Express VIs are almost always slower than LV primitive functions. The Angle Spread calculation (multiply, add, power of 10) is 260 ns slower on my computer. The Random Delays (ln(x)) is about 250 ns slower.
2c. Expression nodes are slower than LV primitive functions. The simple i+1 in a for loop takes 4.3 ns compared to 1.6 ns for the +1 primitive in the Numeric palette. Calculating the subpath offsets in a Formula Node takes 1 us compared to 280 ns with a case structure. Since it never changes, it should be calculated once, outside the loop. Then the time does not matter, and you save a microsecond on every iteration.
5. The AoD sorting: step 6 involves an equality comparison of floating point values. This will often produce false for inputs which are very close but not bit-identical, due to the finite binary representation of numbers. Using the > 0 test or In Range and Coerce, depending on exactly what you are trying to do, is probably a better method.
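To illustrate point 5 outside LabVIEW, here is a small Java sketch (Java purely for illustration) showing why an exact equality test on floating point values fails where a tolerance test succeeds:

```java
public class FloatCompare {
    // Compare two doubles within an absolute tolerance instead of with ==.
    static boolean nearlyEqual(double a, double b, double tol) {
        return Math.abs(a - b) <= tol;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.2;                          // accumulates binary representation error
        System.out.println(sum == 0.3);                  // false
        System.out.println(nearlyEqual(sum, 0.3, 1e-9)); // true
    }
}
```

The same idea applies on a LabVIEW diagram: subtract, take the absolute value, and compare against a small threshold rather than wiring Equal? directly.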
Lynn
Similar Messages
-
Measuring speed of execution of loops
Hi,
I wanted to know how to measure the speed of execution of each loop in a stacked sequence structure. The loop basically has an input for the steps of a Capacitance-Voltage characteristics measurement. I was keen to know how to measure the speed of operation during any particular step. I found the Profile VIs, but is there a better way to measure the speed of operation? I am attaching the VI that is being used. It has several sub-VIs, but you can get an idea of what I need by browsing through it.
Attachments:
CVWAIT33.VI 85 KB
In general, a good timing device is a 3 frame sequence. The first and third frames have a ms counter (from the Time & Dialog palette) and the second has the code. Then, you subtract the value in the first frame from the value in the third and you know how many ms it took. The problem is that many things take less than a ms to execute, and that is something you can only measure statistically, by running your code many times in a for loop (say, a million times).
If you want iteration times, you can accomplish the same by using a shift register to keep the value of the ms counter from the last iteration and subtract it from the current counter value.
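The same statistical trick works in any language; below is a hedged Java sketch (names are illustrative) that averages a trivial operation over a million iterations, because a single call is far below the resolution of the system timer:

```java
public class LoopTimer {
    // The operation we want to time; trivial on purpose, so a single
    // call is far below the resolution of the system timer.
    static double work(double x) {
        return Math.sqrt(x) * 1.0000001;
    }

    public static void main(String[] args) {
        final int runs = 1_000_000;
        double sink = 0.0;                 // keeps the JIT from eliminating the loop
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            sink += work(i);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("average per call: " + (elapsed / (double) runs) + " ns");
        System.out.println("sink = " + sink); // side effect so work() is not dead code
    }
}
```

Note the accumulator: without a visible side effect, an optimizing runtime may remove the loop entirely and you end up timing nothing.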
Try to take over the world! -
Query's Execution time VS Cost.
hello,
I have tuned a query for speed of execution. The result is that the new query executes in less than 1 sec whereas the older one took more than 50 sec.
But the problem is that the new query is much more costly (the cost as taken from the execution plan is 54K).
Please tell me, is this 54K a very high value? Is it taxing on the server resources when nearly 1000 users run the same query at the same time?
Thanks,
Aswin.
Edited by: ice_cold_aswin on Sep 9, 2008 4:26 PM
Execution plan #1:
Time: 52 Sec
Plan:
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 68192 | 4661K| | 4863 (2)|
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_LOC | 1 | 38 | | 4 (0)|
| 2 | INDEX RANGE SCAN | PK_LOCS | 1 | | | 3 (0)|
| 3 | SORT AGGREGATE | | 1 | 14 | | |
| 4 | TABLE ACCESS FULL | TABLE_PAT | 10 | 140 | | 1815 (3)|
| 5 | SORT AGGREGATE | | 1 | 14 | | |
| 6 | TABLE ACCESS FULL | TABLE_PAT | 18 | 252 | | 1815 (3)|
| 7 | SORT UNIQUE | | 68192 | 4661K| 10M| 3729 (2)|
| 8 | HASH JOIN | | 68192 | 4661K| | 2596 (3)|
| 9 | TABLE ACCESS FULL | TABLE_PHC | 1628 | 26048 | | 9 (0)|
| 10 | HASH JOIN | | 68192 | 3596K| 2072K| 2585 (3)|
| 11 | TABLE ACCESS FULL | TABLE_PAT | 68192 | 1265K| | 1818 (3)|
| 12 | TABLE ACCESS FULL | TABLE_LOC | 103K| 3547K| | 430 (1)|
--------------------------------------------------------------------------------------------
Execution Plan #2
Time taken to exec: 0.8 Sec
Plan:
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 3990K| 799M| | 5678K (1)|
| 1 | SORT ORDER BY | | 3990K| 799M| 1685M| 5678K (1)|
| 2 | HASH JOIN RIGHT OUTER | | 3990K| 799M| | 5495K (1)|
| 3 | TABLE ACCESS FULL | TABLE_PHC | 1628 | 37444 | | 9 (0)|
| 4 | HASH JOIN OUTER | | 3990K| 711M| | 5495K (1)|
| 5 | TABLE ACCESS FULL | TABLE_LOC | 17299 | 641K| | 434 (2)|
| 6 | VIEW | | 23M| 3402M| | 5495K (1)|
| 7 | SORT UNIQUE | | 23M| 17G| 40G| 5495K (51)|
| 8 | UNION-ALL | | | | | |
| 9 | HASH JOIN OUTER | | 11M| 9178M| 13M| 6166 (5)|
| 10 | VIEW | | 34599 | 13M| | 2337 (3)|
| 11 | HASH JOIN RIGHT OUTER| | 34599 | 2905K| | 2337 (3)|
| 12 | VIEW | | 11789 | 552K| | 1902 (3)|
| 13 | HASH GROUP BY | | 11789 | 276K| 936K| 1902 (3)|
| 14 | TABLE ACCESS FULL | TABLE_PAT | 11789 | 276K| | 1816 (3)|
| 15 | TABLE ACCESS FULL | TABLE_LOC | 34599 | 1283K| | 434 (2)|
| 16 | VIEW | | 34599 | 13M| | 2300 (3)|
| 17 | HASH JOIN RIGHT OUTER| | 34599 | 2905K| | 2300 (3)|
| 18 | VIEW | | 6430 | 301K| | 1865 (3)|
| 19 | HASH GROUP BY | | 6430 | 150K| 520K| 1865 (3)|
| 20 | TABLE ACCESS FULL | TABLE_PAT | 6430 | 150K| | 1815 (3)|
| 21 | TABLE ACCESS FULL | TABLE_LOC | 34599 | 1283K| | 434 (2)|
| 22 | HASH JOIN OUTER | | 11M| 9178M| 13M| 6166 (5)|
| 23 | VIEW | | 34599 | 13M| | 2300 (3)|
| 24 | HASH JOIN RIGHT OUTER| | 34599 | 2905K| | 2300 (3)|
| 25 | VIEW | | 6430 | 301K| | 1865 (3)|
| 26 | HASH GROUP BY | | 6430 | 150K| 520K| 1865 (3)|
| 27 | TABLE ACCESS FULL | TABLE_PAT | 6430 | 150K| | 1815 (3)|
| 28 | TABLE ACCESS FULL | TABLE_LOC | 34599 | 1283K| | 434 (2)|
| 29 | VIEW | | 34599 | 13M| | 2337 (3)|
| 30 | HASH JOIN RIGHT OUTER| | 34599 | 2905K| | 2337 (3)|
| 31 | VIEW | | 11789 | 552K| | 1902 (3)|
| 32 | HASH GROUP BY | | 11789 | 276K| 936K| 1902 (3)|
| 33 | TABLE ACCESS FULL | TABLE_PAT | 11789 | 276K| | 1816 (3)|
| 34 | TABLE ACCESS FULL | TABLE_LOC | 34599 | 1283K| | 434 (2)|
----------------------------------------------------------------------------------------------
Thanks in Advance,
Aswin.
Edited by: ice_cold_aswin on Sep 9, 2008 4:32 PM -
How to find data compression and speed
1. What's the command/way to view how much space the data has taken in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
2. The time taken for execution, as seen from executing the same SQL on HANA, varies (I see that when I am F8-ing the same query repeatedly), so it is not given in terms of pure CPU cycles, which would have been more absolute.
I always thought that there must be a better way of checking the speed of execution, like checking a log which gives all data regarding executions, rather than just watching the query executions in the output window.
Rajarshi Muhuri wrote:
> 1. What's the command/way to view how much space the data has taken in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
The data is stored the same way in memory as it is on disk. In fact, scans, joins etc. are performed on compressed data.
To calculate the compression factor, we check the required storage after compression and compare it to what would be required to store the same amount of data uncompressed (that is, the length of the data times the number of occurrences for each distinct value of a column).
One thing to note here: compression factors must always be seen for one column at a time. There is no such measure as a "table compression factor".
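That per-column arithmetic can be sketched in code. The following Java sketch (the column sizes and the 2-byte dictionary index are made-up numbers for illustration, not HANA internals) estimates a compression factor for one dictionary-encoded column:

```java
public class CompressionFactor {
    // Rough per-column compression estimate for dictionary encoding.
    // lengths[i] = byte length of distinct value i, counts[i] = its occurrences.
    static double factor(int[] lengths, int[] counts, int bytesPerRowIndex) {
        long uncompressed = 0, dictionary = 0, rows = 0;
        for (int i = 0; i < lengths.length; i++) {
            uncompressed += (long) lengths[i] * counts[i]; // length x occurrences
            dictionary   += lengths[i];                    // each distinct value stored once
            rows         += counts[i];
        }
        long compressed = dictionary + rows * (long) bytesPerRowIndex;
        return (double) uncompressed / compressed;
    }

    public static void main(String[] args) {
        // Hypothetical column: two distinct 40-byte strings over 1000 rows,
        // with a 2-byte dictionary index stored per row.
        double f = factor(new int[] {40, 40}, new int[] {900, 100}, 2);
        System.out.println("compression factor ~ " + f);
    }
}
```

With these invented numbers the factor comes out near 19:1; a column with few distinct values compresses very well under dictionary encoding, which is the intuition behind the 10:1 claims.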
> 2. The time taken for execution, as seen from executing the same SQL on HANA, varies (I see that when I am F8-ing the same query repeatedly), so it is not given in terms of pure CPU cycles, which would have been more absolute.
>
> I always thought that there must be a better way of checking the speed of execution, like checking a log which gives all data regarding executions, rather than just watching the query executions in the output window.
Well, CPU cycles wouldn't be an absolute measure either.
Think about the time that is not spent on the CPU.
Wait time for locks, for example.
Or time lost because other processes used the CPU.
In reality you're usually not interested so much in the perfect execution of one query that has all the resources of the system bound to it; instead you strive to get the best performance when the system has its typical workload.
In the end, the actual response time is what means money to business processes.
So that's what we're looking at.
And there are some tools available for that. The performance trace for example.
And yes, query runtimes will always differ and never be totally stable all the time.
That is why performance benchmarks take averages for multiple runs.
regards,
Lars -
Onboard Wait On High Speed Capture
I would like for an onboard program to wait for a high speed capture signal from a trigger input. Unfortunately, I have not had success with the flex_wait_on_condition function; it has always timed out before detecting the event. However, calls to the function flex_read_hs_cap_status identify that the high speed capture line is indeed toggling faster than the 3 second timeout. I use the following sequence of functions to configure the high speed capture:
flex_configure_hs_capture(m_BoardID, NIMC_AXIS2, NIMC_HS_LOW_TO_HIGH_EDGE, 0);
flex_begin_store(m_BoardID, ProgramNumber);
flex_enable_hs_capture(m_BoardID, NIMC_AXIS2, NIMC_TRUE);
flex_wait_on_condition(m_BoardID, NIMC_AXIS2, NIMC_WAIT, NIMC_CONDITION_HIGH_SPEED_CAPTURE, 0, 0,
NIMC_MATCH_ANY, 30, 0);
flex_end_store(m_BoardID, ProgramNumber);
Axis 2 is configured as an open-loop stepper axis with encoder resource 2 mapped to it.
Any thoughts as to why this wouldn't work?
Thanks!
Thanks for the suggestion. It seems to work fairly well, although there is some delay between the trigger event and the execution of the critical section of code.
Are you aware of a method to speed up execution of an on-board program? The critical section of code in the attached program fragment takes about 4 ms to execute. With the added delay of the polled high speed capture line, I am limited to a ~150 Hz loop. I would like to roughly double the execution speed.
Also, a command from the host computer seems to preempt the on-board program, causing it to take up to ten times as long to complete. Is there a way to set the priority of the on-board program task above host communication?
Thanks for your assistance,
Mike
flex_insert_program_label(m_BoardID, LABEL_LOOP_START); // main program loop
flex_read_hs_cap_status(m_BoardID, NIMC_AXIS3, DATA_HS_CAP_STATUS); // check if high speed capture triggered
flex_and_vars(m_BoardID, DATA_HS_CAP_STATUS, DATA_HS_CAP_STATUS_MASK, DATA_HS_CAP_STATUS_MASKED); // AND high speed capture with trigger 3 mask
flex_jump_label_on_condition(m_BoardID, NIMC_AXIS3, NIMC_CONDITION_EQUAL, NIMC_FALSE, NIMC_FALSE, NIMC_MATCH_ANY, LABEL_LOOP_START); // if trigger 3 not triggered, jump to main program loop
// Critical Section Code >>>
flex_set_breakpoint_momo(m_BoardID, NIMC_AXIS3, 0x08, 0x00, 0xFF); // set digital output high
flex_enable_hs_capture(m_BoardID, NIMC_AXIS3, NIMC_TRUE); // re-enable the high-speed capture
flex_read_adc(m_BoardID, NIMC_ADC1, DATA_ANALOG_INPUT_1); // read the analog input
flex_write_buffer(m_BoardID, ANALOG_INPUT_BUFFER, 1, 0, &UselessLong, DATA_WRITE_TO_BUFFER_NUM_PTS); // write the analog input to the buffer
flex_read_buffer(m_BoardID, VELOCITY_PROFILE_BUFFER, 1, DATA_VELOCITY_CMD); // read the next velocity profile point
flex_load_velocity(m_BoardID, NIMC_AXIS3, UselessLong, DATA_VELOCITY_CMD); // set the axis velocity
flex_start(m_BoardID, NIMC_AXIS3, 0); // update the velocity by calling start
flex_set_breakpoint_momo(m_BoardID, NIMC_AXIS3, 0x00, 0x08, 0xFF); // set digital output low
// <<< Critical Section Code
flex_jump_label_on_condition(m_BoardID, NIMC_AXIS3, NIMC_CONDITION_TRUE, NIMC_FALSE, NIMC_FALSE, NIMC_MATCH_ANY, LABEL_LOOP_START); // jump to main program loop
flex_end_store(m_BoardID, ProgramNumber); // stop program store -
I am working on a 2200 row table with 20 columns. Datatypes a mixture of smallish varchar, integer and date.
Oracle 8.1.5.2 on Redhat Linux 6.1. Pentium III 500, 256MB.
JRE and JDK 1.1.6v5
I have output the table to an XML document (java OracleXML getXML ....). This takes about 25 seconds, which is fine (although spooling to a table takes only 3 seconds). The resulting file size is 1.2 MB.
However, when I re-load the table (java OracleXML putXML ...) it takes 35 minutes, which is way slower than I want or expected...
As a comparison, I have also output the table to a tab-delimited file, which I have loaded using SQL*Loader. This load takes about 10 seconds.
So my questions are:
1. Why is it so slow on the XML load? Am I doing something wrong; can I tune somehow?
2. I could implement an XML loader in PRO*C (presumably the C XML parser will help here), but is there a better solution?
Hi,
We have improved the speed of execution of the generation component recently, after doing some deep analysis. But there are going to be overheads with regard to creating Strings.
With respect to insert, 35 minutes is completely unacceptable. The reason is that internally, we parse the document into a DOM object and then bind it row by row to an insert statement. We are working next to improve this performance by using direct load APIs and using SAX instead of DOM.
It would also help if you can send the SQL script (with all sensitive info blocked), which would be useful for us to take a look at and do some improvements on the current engine.
But currently, if you want to increase the speed, the only way is to transform the XML data (using XSL or otherwise) into SQL*Loader format and load it directly.
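As an illustration of that last suggestion, a minimal SAX-based flattener might look like the sketch below. The element names ROWSET/ROW and the depth-based column detection are assumptions based on the default OracleXML layout, not code from the original poster:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class XmlToDelimited extends DefaultHandler {
    private final List<String> row = new ArrayList<>();
    private final StringBuilder text = new StringBuilder();
    private final StringBuilder out = new StringBuilder();
    private int depth = 0;

    @Override public void startElement(String uri, String local, String qName, Attributes a) {
        depth++;
        text.setLength(0);                        // a new element starts: reset text buffer
    }

    @Override public void characters(char[] ch, int start, int len) {
        text.append(ch, start, len);
    }

    @Override public void endElement(String uri, String local, String qName) {
        if (depth == 3) row.add(text.toString()); // depth 3 = a column inside ROWSET/ROW
        if (depth == 2) {                         // depth 2 = end of a ROW
            out.append(String.join("\t", row)).append('\n');
            row.clear();
        }
        depth--;
    }

    public static String convert(String xml) {
        try {
            XmlToDelimited handler = new XmlToDelimited();
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new InputSource(new StringReader(xml)), handler);
            return handler.out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<ROWSET><ROW><ID>1</ID><NAME>abc</NAME></ROW>"
                   + "<ROW><ID>2</ID><NAME>def</NAME></ROW></ROWSET>";
        System.out.print(convert(xml));  // two tab-delimited lines
    }
}
```

Because SAX streams the document instead of building a DOM, this avoids exactly the per-row binding overhead described above; the resulting tab-delimited file can then be bulk-loaded with SQL*Loader.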
Thx
Murali -
Serious performance issue with LV 7.1 Development Environment
I'm posting this issue to the forums prior to submitting a bug report to ensure that the problems I'm having are reproducible. To reproduce this bug, you're going to have to be an advanced user with a large project (hundreds to thousands of VIs) as the performance problem is related to the number of VIs loaded.
The issue is the increasingly poor performance of the LabVIEW 7.1 Development Environment as the number of VIs in memory increases. The actions affected include switching between front panel and diagram, saving a VI, copy and paste, clicking a menu, and the mysterious time spent (compiling? editing the runtime menu? changing the toolbar state?) between pressing the run button and when the code actually starts executing. Scrolling and, interestingly, copying via a control-drag are not affected. Running time of entirely on-diagram code does not seem to be affected.
These problems are quite severe and significantly decrease my productivity as a programmer. Since they are development environment UI issues, it's been difficult for me to find a good example VI; the best I can do is the attached "LV Speed Test.vi". It doesn't test the issues that affect me most, but it seems to suffer from the same problem.
This simple VI just shows and hides the menu bar 100 times in a tight for loop. When it is the only VI loaded, it executes in about 350 msec on my machine. (2.4 GHz P-IV/640 MB RAM/Win2k). However, when I load a single project-encompassing VI (let's call it the "giant") that references a total of about 900 user and VI-lib subVIs, the test routine takes almost a minute and half to run...about 240 times slower! I've tried this on my laptop with similar results.
The problem appears to be related to the *number* of VIs loaded and not the memory utilization. For example, if I close the "giant", and create a new VI ("memhog") that does nothing but initialize a length 20,000,000 array of doubles and stores it in an uninitialized shift register, LabView's overall memory usage (as shown in the task manager) jumps enormously, but LV Speed Test executes in about 450 msec...only slightly slower than with a fresh copy of Labview.
The problem seems to be related to excessive context switching. The Windows task manager shows over thirteen hundred page faults occur when "LV Speed Test" is run with the "giant" in the background, versus zero or none when run by itself or when "memhog" has used up 160+ MB of space.
The problem only seems to affect the frontmost window. (Unfortunately, that's where we LV programmers spend all of our time!) If you start "LV Speed Test" and then put "giant" in the foreground "LV Speed Test" runs much faster. In fact, if you use the VI server to put the "giant" VI in the foreground programmatically at the start of "LV Speed Test", execution time drops back to 450 msec, and there are no page faults!
These results show the issue is not related to video drivers or the Windows virtual memory system. My suspicion is that there is a faulty routine in LV 7.1 that is traversing the entire VI hierarchy every time certain events are thrown in the foreground window. It could be due to a problem with the Windows event tracking system, but this seems less likely.
I have been programming LV for about 7 years and switched from LV 6.1 to 7.1 about four months ago. I know NI engineers have put thousands of hours developing and testing LV 7.1, but honestly I find myself wishing I had never upgraded from using LV 6.1. (To whomever thought "hide trailing zeros" should be the default for floating point controls...what were you thinking?!)
I know each new version of LabView causes old-timers like me to grouse that things were better back in the days when we etched our block diagrams on stone tablets, etc., and honestly I'm not going to go back. I am committed to LabView 7.1. I just wish it were not so slow on my big projects!
Attachments:
LV_Speed_Test.vi 22 KB
Hi,
I can confirm this behavior. Setting the execution system to "user interface" helps a bit, but there is still a big difference.
I get a feeling it has something to do with window messages, perhaps WM_PAINT or something, that is handled differently if a VI is not frontmost... But what do I know...
Don't know if it should be called a bug, but it sure is something that could be optimized.
Regards,
Wiebe.
"Rob Calhoun" wrote in message
news:[email protected]... -
How do I control the quality of JPEG images?
I've written a program that scales a set of JPEG images down to various dimensions. I'm happy with the speed of execution, but quality of the images could be better. How do I specify the quality of the JPEG images I create? In graphics editors, I'm given the option of controlling the lossy-ness (?) of JPEGs when I save them, either reducing image qualify to shrink the file size or vice versa. How can I do this programmatically?
Thanks
Hi Jhm,
leaving aside the scaling algorithm, to save an arbitrary image with 100% quality you'd use something like the following code snippet:
regards,
Owen
// Imports
import java.awt.*;
import java.awt.image.*;
import java.io.*;
import com.sun.image.codec.jpeg.*;

public boolean saveJPEG ( Image yourImage, String filename ) {
    boolean saved = false;
    BufferedImage bi = new BufferedImage ( yourImage.getWidth(null),
                                           yourImage.getHeight(null),
                                           BufferedImage.TYPE_INT_RGB );
    Graphics2D g2 = bi.createGraphics();
    g2.drawImage ( yourImage, null, null );
    FileOutputStream out = null;
    try {
        out = new FileOutputStream ( filename );
        JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder ( out );
        JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam ( bi );
        param.setQuality ( 1.0f, false ); // 1.0f = highest quality (still lossy JPEG, but minimal loss)
        encoder.setJPEGEncodeParam ( param );
        encoder.encode ( bi );
        out.close();
        saved = true;
    } catch ( Exception ex ) {
        System.out.println ( "Error saving JPEG : " + ex.getMessage() );
    }
    return ( saved );
}
Performance of an UDF v/s standard graphical mapping functions
Hello Experts,
I would like to get your opinion/comments on the performance issues with respect to speed of execution when using graphical functions for doing the date conversion requirement given below:
Requirement is to convert the input date '2008-12-03' from the source side to '20081203' on the target side.
We have used the standard graphical mapping functions 'substring' and 'replaceString' for doing this conversion, as explained here: the 'substring' function is used to capture part of the string from the source. A 'constant' with value '-' is replaced by a 'constant' with an empty value using the standard text function 'replaceString' on the target side.
We did the same using the following UDF too:
import java.text.ParsePosition;
import java.text.SimpleDateFormat;
import java.util.Date;

public String convertDate(String dateStringInOriginalFormat) {
    SimpleDateFormat originalFormatter = new SimpleDateFormat("yyyy-MM-dd");
    SimpleDateFormat newFormatter = new SimpleDateFormat("yyyyMMdd");
    ParsePosition pos = new ParsePosition(0);
    Date dateFromString = originalFormatter.parse(dateStringInOriginalFormat, pos);
    String dateStringInNewFormat = newFormatter.format(dateFromString);
    return dateStringInNewFormat;
}
From a critical performance point of view, which approach will fare better?
Thanks in Advance,
Earnest A Thomas
Edited by: Earnest Thomas on Dec 4, 2008 6:54 AM
Hi,
Not only in this case but in general, it is always better to use the functions available in message mapping, and only if your requirement is not satisfied by the standard mapping functions should you go for a UDF.
Also, for your requirement there is no need for substring; you can directly use the DateTransform function available.
Source --> DateTransform --> Target
Regards,
Abhishek.
Edited by: abhishek salvi on Dec 4, 2008 11:25 AM -
Skipped seconds in time stamp loop
Hi all,
I have two 6024E cards which I am synchronizing via a RTSI cable. I have an 860 MHz Dell Dimension with 512 MB of RAM. I am simply logging analog signals from the cards to a text file on a 1 s timebase. The problem is that the output data file shows a time stamp is missed every once in a while. I have tried adjusting the sampling rate and the number of samples, but there always appears to be a hiccup in the timestamping. I even tried hardware timing with a timed loop and one of the on-board counters; the hiccups were even more frequent then. Maybe the code needs to be more efficient, but I am stumped. I attached the data logger VI. I appreciate any input.
Thanks,
dewey
Attachments:
acquire_ver5a.zip 1433 KB
Hello Dewey,
A couple recommendations for your program:
You are manually generating a timestamp for your signals using an elaborate algorithm that I haven't really taken the time to understand. It might be easier, more efficient, and more accurate to just acquire your data as a Waveform data type from the DAQmx Read Analog VI instead of acquiring data as an array of doubles. The Waveform data type includes a built-in timestamp that you could parse out and log to file instead of calculating your own timestamp using the Tick Count (ms) VI. Take a look at the LabVIEW Help; there is a section under File I/O about Writing Waveform Data to a File Using Storage VIs.
I would also highly recommend you remove any Wait or Wait Until Next ms Multiple VIs from your program. You are already configuring the sampling rate of your continuous acquisition earlier in your program using the DAQmx Timing (Sample Clock) VI. This configures the DAQ tasks to use the onboard sample clock of Dev1 for your acquisition. With the sample clock timing configured, the NI-DAQmx driver will automatically take care of the speed of execution of your while loop based on the rate you specify in the DAQmx Timing VI. By manually including waits in your while loop, you are conflicting with the timing parameters you set up in your DAQ task. This could lead to buffer overflows, missed samples, and otherwise non-deterministic sample timing. I would suggest taking a look at the LabVIEW DAQmx shipping example Multi-Device Synch-Analog Input-Continuous Acquisition.VI, which can be found in the NI Example Finder (Help >> Find Examples) under: Hardware Input and Output >> DAQmx >> Synchronization >> Multi-Device.
I hope these suggestions help!
Regards,
Travis G.
Applications Engineering
National Instruments
www.ni.com/support -
I am having a problem with my program as I am not getting the desired frame rate due to all the code that is getting executed per tick. So I have some questions about Director and Lingo as to which way actually executes faster.
1. Multiple ExitFrame calls vs a single ExitFrame call.
I have a lot of sprites in my app. Almost all of them have an ExitFrame handler.
Question: is it faster to have each sprite handle its own ExitFrame routine and do code specific to that sprite, or is it faster to have one generic ExitFrame loop through and execute code for each sprite?
2. Puppeted sprites vs non-puppeted sprites.
I have a lot of sprites in my program. To make life a lot easier, I simply allocated a good chunk of sprite channels to the sole use of "dynamically created sprites". My program can have hundreds of puppeted sprites from any given moment to the next.
Question: Does Director progress faster or slower depending on whether a sprite is puppeted or not? Or is there any difference at all?
3. Checking to see if a variable is set before setting it.
I have only recently come into the Director/Lingo world of programming. I am originally a VB programmer of almost a decade. In Visual Basic, I have noticed that the code executes faster if you don't do unneeded variable assignments, by checking whether the value was already set.
Example: in Visual Basic, let's say you have an array of 1000 elements, some elements already set, some not.
for i = 1 to 1000
var(i) = i
next
The above code executes fast, but if you are doing that very often, it can be a bottleneck. The code below, while doing the exact same thing, is actually faster:
for i = 1 to 1000
if var(i) <> i then var(i) = i
next
In VB, it's faster to do a check of a variable than it is to do the assignment when it's not needed. Now granted, this is a poor example; usually I am dealing with much more complex routines, but the basic principle of what I am trying to get across is the same.
Question: in Director/Lingo, would it speed up the execution of code to do a variable check before the assignment, or is the very act of adding the check going to slow down the execution?
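One way to settle this empirically is a micro-benchmark. The sketch below is Java rather than Lingo or VB, purely to illustrate the measurement pattern of timing both variants over many repetitions; the relative ordering on any given runtime should be measured, not assumed:

```java
public class CheckBeforeSet {
    // Time many repetitions of filling the array, either assigning
    // unconditionally or checking the current value first.
    static long timeLoop(int[] arr, boolean checkFirst) {
        long start = System.nanoTime();
        for (int rep = 0; rep < 1000; rep++) {
            for (int i = 0; i < arr.length; i++) {
                if (checkFirst) {
                    if (arr[i] != i) arr[i] = i;   // check before assigning
                } else {
                    arr[i] = i;                    // always assign
                }
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int[] a = new int[1000];
        int[] b = new int[1000];
        System.out.println("always assign : " + timeLoop(a, false) + " ns");
        System.out.println("check first   : " + timeLoop(b, true) + " ns");
    }
}
```

Either way, both loops leave the array identically filled, so the comparison isolates the cost of the extra check.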
Anyone have any ideas about these? Or anyone have any other
tips about stupid little things to speed up execution of
code?>
> 1. Multiple ExitFrame calls vs a single ExitFrame call.
You should consider dropping the exitFrame approach in favor of an OOP model. OOP is not faster, just as a dual-core processor is not faster than a single-core one running at double the speed; in fact, the latter should be faster, since there is no synchronization penalty. But it is much smoother. Same with OOP: you pay a penalty for using more objects, but the objects can be smart enough to adjust the number of instructions they execute as required.
If, for example, you have objects whose coordinates can be calculated and stored inside the object, you don't have to update the stage each time an object moves. You can do that once, for all objects, at set intervals. As long as the interval is large enough to cover all the intermediate processing plus the updateStage cost, you'll have a very smooth movie.
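The "store coordinates in the object, redraw only at set intervals" idea can be sketched as follows. This is Python, not Lingo, and the names are made up for illustration; the redraw counter stands in for a single updateStage call per interval:

```python
# Objects update their own stored coordinates every tick (cheap),
# while the stage is redrawn only once per interval (expensive).
class MovingObject:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

    def step(self):
        # Cheap per-tick bookkeeping: just update stored coordinates.
        self.x += self.vx
        self.y += self.vy

def run(objects, ticks, redraw_every):
    redraws = 0
    for tick in range(1, ticks + 1):
        for obj in objects:
            obj.step()
        # One stage update per interval instead of one per object move.
        if tick % redraw_every == 0:
            redraws += 1  # stand-in for a single updateStage call
    return redraws

objs = [MovingObject(0, 0, 1, 2) for _ in range(100)]
print(run(objs, ticks=60, redraw_every=10))  # → 6 redraws for 6000 moves
```

With 100 objects over 60 ticks there are 6000 coordinate updates but only 6 redraws, which is where the smoothness comes from.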
> 2. Puppeted sprites vs Non-Puppeted sprites.
Puppeting does not affect performance, or at least it shouldn't. The number of sprites, and the number of behaviors attached to each sprite, does. However, even with a very large number of active sprites, that bookkeeping should be a joke for any modern CPU. What does cost is redrawing the sprites. So, if it's image sprites we are talking about, you should perhaps consider a single bitmap member used as a buffer, and imaging Lingo for drawing each frame. Mouse click events can then be evaluated by keeping a list of virtual sprite positions. Even if you are not familiar with the above, the time you'll invest in learning what is required will be rewarded with a significant performance increase, up to several times faster.
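The virtual-sprite click handling mentioned above can be sketched like this. It is Python rather than Lingo, and the names are illustrative only: each "sprite" is just a named rectangle in a list, and a click is resolved by walking the list instead of asking real sprite channels:

```python
# Resolve mouse clicks against "virtual" sprites kept in a plain list.
# Each entry is (name, left, top, right, bottom) in stage pixels.
virtual_sprites = [
    ("ball", 10, 10, 42, 42),
    ("paddle", 0, 180, 64, 196),
]

def sprite_at(x, y):
    # Walk the list last-to-first so the topmost sprite wins on overlap.
    for name, left, top, right, bottom in reversed(virtual_sprites):
        if left <= x < right and top <= y < bottom:
            return name
    return None  # click landed on empty stage

print(sprite_at(20, 20))  # → ball
print(sprite_at(5, 5))    # → None
```

With the buffer-bitmap approach, this lookup replaces per-sprite mouse events: one hit test against a list is cheap compared to maintaining hundreds of live sprite channels.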
> 3. Checking to see if a variable is set before setting it.
You can create a simple Lingo benchmarking script to get your answers. As a general principle, the fewer commands, the faster. Though I'm not really into VB (I find C++ and Lingo to be a killer combination), I can guess why this happens: when setting a variable, VB executes some code to evaluate what the old value was and what, if anything, has to be released. Though it is not documented anywhere, it seems that several years ago someone on the Director dev team was smart enough to take this into account when creating the object known as a Lingo variable (64-bit internally, by the way). So Director doesn't suffer from slow variable release, that is, from releasing what shouldn't be released.
> Anyone have any ideas about these? Or anyone have any other tips about stupid little things to speed up execution of code?
You know, a few years ago, Lingo performance and speeding up Director were regular discussion topics on this list. That is not the case anymore, and though I can guess a couple of reasons why, I found none that qualifies as an explanation, not in my book at least. In case you have any more questions, I'd be happy to answer. Building a site with Director performance hints and Lingo optimization tips is high on my to-do list.
"DaveGallant" <[email protected]> wrote in message news:[email protected]...
-
Oracle 10g and parallel query question
Hi Oracle on SAP Gurus!
We are currently thinking of activating parallel query for certain segments (large application tables and indexes). We searched in SAPNet and SDN and have also studied SAP Note 651060, but we did not find a complete answer to the following question, which is very important for us:
Which kinds of queries (apart from full table scans and index scans on partitioned indexes) support parallel query, and which ones do not?
This is important for us to find out whether we have candidates for parallel queries or not.
Thanks for any hint!
Regards,
Volker
But why do you not propose to use parallel query in OLTP systems?
If the queries run very frequently you will just run out of CPU and I/O resources. OLTP systems are (historically) typically multi-user systems. You can of course use PQ for 'single user' activities, like index rebuilds and some batch jobs, but you shouldn't for frequent user queries.
If you have time, look at this interesting article: [Suck It Dry - Tuning Parallel Execution|http://doug.burns.tripod.com/px.html]
It is quite old, and you don't have to read all the technical details, but I recommend having a look at the conclusions at the end.
Might it make sense to use partitioning of these tables in conjunction with parallel query?
I know some guys who do partitioning on OLTP systems, even SAP systems. But they don't use PQ then. They use partitioning to work on a smaller set of data: in your case the range scans would need to scan only one partition, saving buffer cache and effectively speeding up execution. So you don't need PQ to scan all partitions at all; that would be a typical OLAP approach.
Best regards
Michael -
How much time does CATPATCH.sql take
Hi,
I am upgrading my db from 9.2.0.1 to 9.2.0.4. It's been 2 hrs since the catpatch.sql script started running. When I queried dba_registry, the status shows as loading for package & body.
Can anyone tell me how long this script takes?
Thanks in advance.
Bhavesh
The speed of execution depends heavily on the memory and CPU of the machine on which the patch is being applied. I have seen times ranging from 30 minutes to 4 hours; the four hours was with a database in a virtual machine with 512MB of RAM.
-
Hi Experts
I am calculating the rebate percentage for a delivery, for which I am writing the query below; it is taking a long time to execute.
select single knumh
  from konh
  into l_cond_rec
  where knuma_bo = IS_BIL_INVOICE-hd_gen-KNUMA.
Can I get KNUMH using any other function module, or from any other table, which would increase the speed of execution? FM Condition_Record_Read is not applicable for me. Please reply.
With Regards
Venkat, IBM
Hi Venkat,
I think you should create a secondary index on the KONH table for the keys you are using to retrieve the data.
regards,
Lokesh -
What is the "EventLoggingEnumLogEntry" function in the run-time engine?
I have some code which isn't utilizing all of the processors on my system, but it should be, so I profiled the execution of the compiled executable using "Intel VTune Amplifier". That program told me that my code is spending a lot of time in synchronization or threading overhead, and that the main function it is constantly executing (>50%) is called "EventLoggingEnumLogEntry", contained in lvrt.dll. It is also spending a fair amount of time executing a function called "LvVariantCStrSetUI8Attr".
Can anyone tell me what these functions are and what in my LV code would activate them? I can't even start to find where the bottleneck is in my code with the information I currently have...
Thanks!
Joey,
Thanks for the reply!
I've already parallelized the loops, and the problem is that each core is only using about 25-40% of its capacity. I can't figure out what is limiting the speed of execution on each core. One of the papers I read suggested that I could find the bottleneck using the Intel VTune program. I used that instead of the native LabVIEW profiler because the LV profiler will tell me where my code is spending most of its time, but not whether there is some other bottleneck (like one thread waiting for another). I had already taken the profiler information as far as I know how, by finding and optimizing the most frequently executed code in my application.
There is an event structure in my application, but once I press the "GO" button, which kicks off the intensive computing, the event structure is disabled: I placed it within a case structure and use a boolean to keep LV from executing it. Under certain conditions (like completion of the computing), the boolean is flipped to re-enable the event structure, but that only happens AFTER the computation is complete. I've confirmed with breakpoints that LV is not executing the event structure.
Does the mere presence of the event structure, even without execution, cause LV to spend extra resources checking for events? I also tried separating the application into two while loops: one containing the event structure, and one containing the processing code. When the first while loop terminates (again, on pressing the "GO" button), the second executes with no case structure in the loop. That did not seem to relieve the bottleneck.
This thread is the first thread that I posted regarding this topic. One of the last replies includes some of my code, if you are curious to look at it.
Thanks!