NIO Performance Test Result
Dear forum users,
I wonder why "New I/O" (java.nio.*) is useful, so I tested its performance.
Please see the code below.
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ByteBufferPerformanceTest {
    public static void main(String[] args) {
        File fileName = new File("c:\\kandroid_book_3rd_edition[1].pdf"); // 20MB file

        // ByteBuffer usage
        long start1 = System.nanoTime();
        try {
            FileInputStream fis = new FileInputStream(fileName);
            FileChannel fc = fis.getChannel();
            ByteBuffer bf = ByteBuffer.allocateDirect(1024);
            while (fc.read(bf) != -1) {
                bf.clear(); // reset position/limit so the next read refills the buffer
            }
            fis.close();
        } catch (FileNotFoundException ffe) {
            ffe.printStackTrace();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        long duration1 = System.nanoTime() - start1;

        // BufferedInputStream usage
        BufferedInputStream bin = null;
        long start2 = System.nanoTime();
        try {
            bin = new BufferedInputStream(new FileInputStream(fileName));
            byte[] contents = new byte[1024];
            int bytesRead;
            while ((bytesRead = bin.read(contents)) != -1) {
                // consume contents[0..bytesRead)
            }
        } catch (FileNotFoundException ffe) {
            ffe.printStackTrace();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        } finally {
            try {
                if (bin != null)
                    bin.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        long duration2 = System.nanoTime() - start2;

        // FileReader usage
        long start3 = System.nanoTime();
        try {
            FileReader fr = new FileReader(fileName);
            BufferedReader br = new BufferedReader(fr);
            String line;
            while ((line = br.readLine()) != null) {
                // consume line
            }
            br.close();
        } catch (FileNotFoundException ffe) {
            ffe.printStackTrace();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        long duration3 = System.nanoTime() - start3;

        System.out.println(String.format("%20s : %12d", "ByteBuffer", duration1));
        System.out.println(String.format("%20s : %12d", "BufferedInputStream", duration2));
        System.out.println(String.format("%20s : %12d", "FileReader", duration3));
    }
}

Result (nanoTime):
ByteBuffer : 60107360
BufferedInputStream : 22748701
FileReader : 597288203
As the result shows, the best class for file I/O appears to be BufferedInputStream.
So what I mean is: why would anyone need ByteBuffer?
Did I test it the wrong way?
Thanks for reading. Thank you very much. :)
First of all: your test is very, very flawed, in multiple ways:
1.) You read the same file three times. The first pass takes the cache hit while the OS actually loads the file from disk; the other two just measure how fast the OS cache can be accessed.
2.) You do only a single timed read of the file, and you didn't tell us whether you repeated the experiment (to keep small timing differences from skewing the result).
3.) Your three methods do different things. In particular, the last one converts the bytes to Strings, which is meaningless for a binary file and takes additional time.
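To address points 1 and 2, a fairer comparison would warm up first and then time several runs, reporting the median. A minimal sketch of such a harness (the empty Runnable is a placeholder; you would plug in each of your three read loops):

```java
import java.util.Arrays;

public class MedianTimer {
    // Run the task a few times untimed (to warm the OS cache and the JIT),
    // then time several runs and report the median, which damps outliers.
    static long medianNanos(Runnable task, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) task.run();
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            task.run();
            samples[i] = System.nanoTime() - t0;
        }
        Arrays.sort(samples);
        return samples[runs / 2];
    }

    public static void main(String[] args) {
        // Placeholder task; substitute e.g. the BufferedInputStream read loop.
        long median = medianNanos(() -> { }, 5, 11);
        System.out.println(median >= 0);
    }
}
```

Even with a harness like this, reading the same file in all three tests still favors whichever test runs after the OS cache is warm, so the warm-up pass matters as much as the repetition.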
All that being said: NIO isn't simply "faster". It provides ways to implement non-blocking I/O for tasks such as servers supporting a massive number of connections and similar high-performance scenarios. If you simply want to read a file once, then "normal" I/O will be perfectly fine for you.
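For what it's worth, one place where NIO can pay off even for plain file reading is memory-mapped I/O, which lets the OS page the file in without copying through a heap array. A small self-contained sketch (it uses a throwaway temp file instead of your PDF):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedReadSketch {
    // Sum every byte of a file through a read-only memory mapping.
    // The OS pages the file in directly; no read() call copies into a Java array.
    static long sumBytes(Path path) throws IOException {
        try (FileChannel fc = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer map = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
            long sum = 0;
            while (map.hasRemaining()) {
                sum += map.get();
            }
            return sum;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nio-demo", ".bin");
        Files.write(tmp, new byte[]{1, 2, 3, 4});
        System.out.println(sumBytes(tmp)); // prints 10
        Files.deleteIfExists(tmp);
    }
}
```

Whether this beats a BufferedInputStream for a single sequential pass depends on the OS and file size; its real advantage shows up with random access or repeated scans of large files.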
Similar Messages
-
Performance Test resulting in more EJB bean instances
Hi Guys,
I am trying to profile my application using OptimizeIT.
I conduct a load test using LoadRunner, but for the test I am using only one
virtual client continuously repeating the same operation for a period of an hour
or so. I expect only one entity bean instance to cater to the needs. What I observe
from OptimizeIT is that the number of instances of the entity bean continuously increases.
My question is: when the same thread is doing the operation, the entity bean instance
which catered to the need during the first round should be able to process the client
request the second time. Why should the bean instance count continuously increase?
Thanks in advance,
Kumar
Kumar Raman wrote:
Hi Rob,
I am unable to send the .snp file, as the file size is coming out to be 6 MB, which
our mail server is not allowing through (we have a corporate limit of 3 MB). If you
have any other way across, please let me know.
Did you try compressing it? Or just split it into multiple files and
send them separately. If none of that works, send me a private email,
and I can get you a FTP upload.
>
As regards to 2 questions
1) I know why two instances are getting created, as I can see the code here.
But I really wanted to know when these instances will be released from memory.
They'll be kept in the cache at least until the transaction ends. Since
you're deleting them, they'll be removed from the cache and sent to the
pool when the tx completes.
Is this going to be there till the defined pool size is filled? I haven't defined
any pool size in our configuration. I believe the default size is 1000.
Yes, they will be in the pool, and the default pool size is 1000.
2) As regards the 2nd question, the add/delete are running in different transactions.
I wanted to know whether the instances created during add will be used for
the delete operation as well.
They can/should be the same instance. What is your concurrency-strategy
setting for this bean? I know in the past that exclusive concurrency
was not reusing bean instances as well as some of the other concurrency
strategies (eg database / optimistic).
3) Also, for each of the bean instances, will there be a corresponding home instance
floating in memory as well? I feel the home instances should be reusable.
There's just 1 home instance for the deployment, not 1 per bean.
In the case of simple entity bean creation in WebLogic, how many objects will be
created vis-a-vis home object, remote object, and so on?
You'll need a bean interface (local and/or remote) and a bean
implementation class.
The number of instances which OptimizeIT shows is beyond my understanding.
I wanted to know whether there is any configuration to help me optimize these creations.
Ok, let's try to get the snapshot to me so I can help you out.
-- Rob
>
Thanks,
Kumar
Rob Woollen <[email protected]> wrote:
Kumar Raman wrote:
Hi,
Actually we are running a scenario using the LoadRunner tool to add a row onto a
DB using a container-managed entity bean. This bean is getting instantiated
by a session bean. In the workflow, after creation we are deleting the row in
the table by using the remove method of the same entity bean.
If we analyze using the profiler, the number of EJB instances increases by 2 during
add and increases by another 2 after delete.
Is your session bean only creating one bean?
There seems to be 2 questions:
1) Why are you getting 2 beans on add/delete? I'm not sure if you
expect this or not.
2) Why are the beans used for the creation not being used again when you issue the delete?
For #2, my first question is if the create and remove are both running
in the same transaction?
I am sending the OptimizeIT (ver5.5) snapshots to you by mail.
Haven't received them yet, but they would be very helpful.
-- Rob
Please let me know why the instances are increasing in spite of explicitly calling
the remove method in the code.
Thanks,
Kumar
Rob Woollen <[email protected]> wrote:
We'd need a little more information to diagnose this one.
First off, if you have an OptimizeIt snapshot file (the .snp extension,
not the HTML output file), I'd be willing to take a look at it and give
you some ideas. If you're interested, send me an email at rwoollen at
bea dot com.
If you're using a custom primary key class (i.e. not something like
java.lang.String), make sure its hashCode and equals methods are correct.
Otherwise, it'd be helpful if you gave us some more info about your test
and what you're doing with the entity bean(s).
-- Rob
Kumar Raman wrote:
Hi Guys,
I am trying to profile my application using OptimizeIT.
If I conduct a load test using Load Runner, but for the test I am using only one
virtual client continuously repeating the same operation for a period of an hour
or so. I expect only one entity bean instance to cater to the needs. What I observe
from OptimizeIT is that the number of instances of the entity bean continuously increases.
My question is when the same thread is doing the operation the entity bean instance
which catered to the need during the first round should be able to process the client
request the second time. Why should the bean instance continuously increase?
Thanks in advance,
Kumar -
ActiveX Control recording but not playing back in a VS 2012 Web Performance Test
I am testing an application that loads an ActiveX control for entering some login information. While recording, this control works fine and I am able to enter information and it is recorded. However, on playback in the playback window it has the error "An
add-on for this website failed to run. Check the security settings in Internet Options for potential conflicts."
Windows 7 OS, 64-bit
IE 8, recorded on the 32-bit version
I see no obvious security conflicts. This runs fine when navigating through manually and recording. It is only during playback that this error occurs.
Hi IndyJason,
Thank you for posting in MSDN forum.
As you said, you could not play back the ActiveX control successfully in a web performance test. The ActiveX controls in your Web application will fall into one of three categories, depending on how they work at the HTTP level.
Reference:
https://msdn.microsoft.com/en-us/library/ms404678%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
I found that this confusion may come from the browser preview in the Web test result viewer. The Web Performance Test Results Viewer does not allow script or ActiveX controls to run, because the Web performance test engine does not run them, and for security reasons.
For more information, please refer to the following blog (the "Web Tests Can Succeed Even Though It Appears They Failed" part):
http://blogs.msdn.com/edglas/archive/2010/03/24/web-test-authoring-and-debugging-techniques-for-visual-studio-2010.aspx
Best Regards,
-
Hi there,
I'm working on testing an AJAX and JavaScript project which has several pages, but all at the same URL. I need to test some attributes on the page, or parameters passed by AJAX or JavaScript. Can Web Performance Test get what I want?
Thanks,
Hello,
Thank you for your post.
A web performance test is used to test whether a server responds correctly and whether the response is consistent with what we expected, as well as the response speed, stability, and scalability.
The Web Performance Test Recorder records both AJAX requests and requests that were submitted from JavaScript, but
a web test does not execute JavaScript. I am afraid that you can't use a web test to test parameters passed by AJAX or JavaScript.
Please see:
Web Performance Test Engine Overview
About JavaScript and ActiveX Controls in Web Performance Tests
From the first link, “Client-side scripting that sets parameter values or results in additional HTTP requests, such as AJAX, does affect the load on the server and might require you to manually modify the Web Performance Test to simulate the scripting.”
If you want to execute the function typically performed by script in web test, you need to accomplish it in coded web performance test or a web performance test plugin. Please see:
How to: Create a Coded Web Performance Test
How to: Create a Web Performance Test Plug-In
I am not sure what the 'some attribute on the page' is. If you mean that you want to test those controls on the page, you can do a coded UI test, which can test that the user interface for an application functions correctly. The coded UI test performs actions
on the user interface controls for an application and verifies that the correct controls are displayed with the correct values. You can refer to this article for detailed information about coded UI tests:
Verifying Code by Using Coded User Interface Tests
Best regards,
Amanda Zhu [MSFT]
-
Log file sync top event during performance test -av 36ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top, with Avg wait (ms) being 36, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configurat
Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" is having such slow response.
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
^LBackground Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time
(ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6
35.6 46.57
CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7
5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4
12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3
N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 1
84.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7
0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6
0.0 0.41
{code}
To sum it up:
1. Why is the IO response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the number of CPUs on the host is only 4.
{code}
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
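To put that commit-frequency point in perspective, here is a back-of-the-envelope sketch (the row counts are hypothetical; in JDBC terms, batching means setAutoCommit(false) and calling commit() only every N rows):

```java
public class CommitBatchSketch {
    // Each commit forces one "log file sync" wait while LGWR flushes the redo.
    // Count how many such waits a job of `rows` inserts pays for a given batch size.
    static int commitsFor(int rows, int batchSize) {
        return rows / batchSize + (rows % batchSize == 0 ? 0 : 1);
    }

    public static void main(String[] args) {
        // Committing every row: 100,000 syncs; at ~36 ms each that is an hour of pure waiting.
        System.out.println(commitsFor(100_000, 1));
        // Committing every 1,000 rows: 100 syncs, under 4 seconds of waiting.
        System.out.println(commitsFor(100_000, 1_000));
    }
}
```

The arithmetic is the point: the per-sync wait matters far less than how many syncs the application requests.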
Thanks for the explanation. Actually my question is WHY it is so slow (avg wait of 91 ms).
3. Your IO subsystem hosting the online redo log files can be a limiting factor.
We don't know anything about your online redo log configuration
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM -
LabVIEW Embedded - Performance Testing - Different Platforms
Hi all,
I've done some performance testing of LabVIEW on various microcontroller development boards (LabVIEW Embedded for ARM) as well as on a cRIO 9122 Real-time Controller (LabVIEW Real-time) and a Dell Optiplex 790 (LabVIEW desktop). You may find the results interesting. The full report is attached and the final page of the report is reproduced below.
Test Summary
µC             MIPS    Single Loop       Single Loop   Dual Loop         Dual Loop
                       Effective MIPS    Efficiency    Effective MIPS    Efficiency
MCB2300        65      31.8              49%           4.1               6%
LM3S8962       60      50.0              83%           9.5               16%
LPC1788        120     80.9              56%           12.0              8%
cRIO 9122      760     152.4             20%           223.0             29%
Optiplex 790   6114    5533.7            91%           5655.0            92%
Analysis
For microcontrollers, single loop programming can retain almost 100% of the processing power. Such programming would require that all I/O is non-blocking as well as use of interrupts. Multiple loop programming is not recommended, except for simple applications running at loop rates less than 200 Hz, since the vast majority of the processing power is taken by LabVIEW/OS overhead.
For cRIO, there is much more processing power available; however, approximately 70 to 80% of it is lost to LabVIEW/OS overhead. The end result is that what can be achieved is limited.
For the Desktop, we get the best of both worlds; extraordinary processing power and high efficiency.
Speculation on why LabVIEW Embedded for ARM and LabVIEW Real-time performance is so poor puts the blame on excessive context switch. Each context switch typically takes 150 to 200 machine cycles and these appear to be inserted for each loop iteration. This means that tight loops (fast with not much computation) consume enormous amounts of processing power. If this is the case, an option to force a context switch every Nth loop iteration would be useful.
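That speculation can be sanity-checked with simple arithmetic (the cycle counts below are the report's estimates, not measurements):

```java
public class OverheadSketch {
    // Fraction of CPU lost if every loop iteration pays a fixed context-switch cost.
    static double overheadFraction(int workCycles, int switchCycles) {
        return (double) switchCycles / (workCycles + switchCycles);
    }

    public static void main(String[] args) {
        // A tight loop doing ~20 cycles of real work, paying a ~200-cycle switch per iteration:
        System.out.printf("%.0f%%%n", 100 * overheadFraction(20, 200)); // prints 91%
    }
}
```

This matches the pattern in the table above: the tighter the loop relative to the fixed per-iteration overhead, the worse the measured efficiency, which is why the dual-loop figures on the small ARM targets collapse to single digits.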
Conclusion
                              LabVIEW Embedded      LabVIEW Real-time      LabVIEW Desktop
                              for ARM               for cRIO/sbRIO         for Windows
Development Environment Cost  High                  Reasonable             Reasonable
Execution Platform Cost       Very low              Very High / High       Low
Processing Power              Low (current Tier 1)  Medium                 Enormous
LabVIEW/OS efficiency         Low                   Low                    High
OEM friendly                  Yes+                  No                     Yes
LabVIEW Desktop has many attractive features. This explains why LabVIEW Desktop is so successful and makes up the vast majority of National Instruments’ software sales (and consequently results in the vast majority of hardware sales). It is National Instruments’ flagship product and is the precursor to the other LabVIEW offerings. The execution platform is powerful, available in various form factors from various sources, and is competitively priced.
LabVIEW Real-time on a cRIO/sb-RIO is a lot less attractive. To make this platform attractive the execution platform cost needs to be vastly decreased while increasing the raw processing power. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. A single plug-in board no larger than 75 x 50 mm (3” x 2”) with a single unit price under $180 would certainly make the sb-RIO a viable execution platform. The peripheral connectors would not be part of the board and would be accessible via a connector. A developer mother board could house the various connectors, but these are not needed when incorporated into the final product. The recently released Xilinx Zynq would be a great chip to use ($15 in volume, 2 x ARM Cortex A9 at 800 MHz (4,000 MIPS), FPGA fabric and lots more).
LabVIEW Embedded for ARM is very OEM friendly with development boards that are open source with circuit diagrams available. To make this platform attractive, new more capable Tier 1 boards will need to be introduced, mainly to counter the large LabVIEW/OS overhead. As before, these target boards would come from microcontroller manufacturers, thereby making them inexpensive and open source. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. What is required now is another Tier 1 boards (eg. DK-LM3S9D96 (ARM Cortex M3 80 MHz/96 MIPS)). Further Tier 1 boards should be targeted every two years (eg. BeagleBoard-xM (ARM Cortex A8 1000 MHz/2000 MIPS board)) to keep LabVIEW Embedded for ARM relevant.
Attachments:
LabVIEW Embedded - Performance Testing - Different Platforms.pdf 307 KB
I've got to say though, it would really be good if NI could further develop the ARM embedded toolkit.
In the industry I'm in, and probably many others, control algorithm development and testing occurs in LabVIEW. If you have a good LV developer or team, you'll end up with fairly solid, stable, and tested code. But what happens now, once the concept is validated, is that all this is thrown away and the C programmers create the embedded code that will go into the real product.
The development cycle starts from scratch.
It would be amazing if you could strip down that code and deploy it onto ARM and expect it to not be too inefficient. Development costs and time to market go way down. BUT, especially in the industry I presently work in, the final product's COST is extremely important. (These being consumer products: cheaper micro, cheaper product.)
These concerns weigh HEAVILY. I didn't get a warm fuzzy about the ARM toolkit for my application. I'm sure it's got its niches, but just imagine what could happen if some more work went into it to make it truly appealing to a wider market... -
[Ann] FirstACT 2.2 released for SOAP performance testing
Empirix Releases FirstACT 2.2 for Performance Testing of SOAP-based Web Services
FirstACT 2.2 is available for free evaluation immediately at http://www.empirix.com/TryFirstACT
Waltham, MA -- June 5, 2002 -- Empirix Inc., the leading provider of test and monitoring
solutions for Web, voice and network applications, today announced FirstACT™ 2.2,
the fifth release of the industry's first and most comprehensive automated performance
testing tool for Web Services.
As enterprise organizations are beginning to adopt Web Services, the types of Web
Services being developed and their testing needs is in a state of change. Major
software testing solution vendor, Empirix is committed to ensuring that organizations
developing enterprise software using Web Services can continue to verify the performance
of their enterprise as quickly and cost effectively as possible regardless of the
architecture they are built upon.
Working with organizations developing Web Services, we have observed several emerging
trends. First, organizations are tending to develop Web Services that transfer a
sizable amount of data within each transaction by passing in user-defined XML data
types as part of the SOAP request. As a result, they require a solution that automatically
generates SOAP requests using XML data types and allows them to be quickly customized.
Second, organizations require highly scalable test solutions. Many organizations
are using Web Services to exchange information between business partners and have
Service Level Agreements (SLAs) in place specifying guaranteed performance metrics.
Organizations need to performance test to these SLAs to avoid financial and business
penalties. Finally, many organizations just beginning to use automated testing tools
for Web Services have already made significant investments in making SOAP scripts
by hand. They would like to import SOAP requests into an automated testing tool
for regression testing.
Empirix FirstACT 2.2 meets or exceeds the testing needs of these emerging trends
in Web Services testing by offering the following new functionality:
1. Automatic and customizable test script generation for XML data types – FirstACT
2.2 will generate complete test scripts and allow the user to graphically customize
test data without requiring programming. FirstACT now includes a simple-to-use XML
editor for data entry or more advanced SOAP request customization.
2. Scalability Guarantee – FirstACT 2.2 has been designed to be highly scalable to
performance test Web Services. Customers using FirstACT today regularly simulate
between several hundred to several thousand users. Empirix will guarantee to
performance test the numbers of users an organization needs to test to meet its business
needs.
3. Importing Existing Test Scripts – FirstACT 2.2 can now import existing SOAP request
directly into the tool on a user-by-user basis. As a result, some users simulated
can import SOAP requests; others can be automatically generated by FirstACT.
Web Services facilitates the easy exchange of business-critical data and information
across heterogeneous network systems. Gartner estimates that 75% of all businesses
with more than $100 million in sales will have begun to develop Web Services applications
or will have deployed a production system using Web Services technology by the end
of 2002. As part of this move to Web Services, "vendors are moving forward with
the technology and architecture elements underlying a Web Services application model,"
Gartner reports. While this model holds exciting potential, the added protocol layers
necessary to implement it can have a serious impact on application performance, causing
delays in development and in the retrieval of information for end users.
"Today Web Services play an increasingly prominent but changing role in the success
of enterprise software projects, but they can only deliver on their promise if they
perform reliably," said Steven Kolak, FirstACT product manager at Empirix. "With
its graphical user interface and extensive test-case generation capability, FirstACT
is the first Web Services testing tool that can be used by software developers or
QA test engineers. FirstACT tests the performance and functionality of Web Services
whether they are built upon J2EE, .NET, or other technologies. FirstACT 2.2 provides
the most comprehensive Web Services testing solution that meets or exceeds the changing
demands of organizations testing Web Services for performance, functionality, and
functionality under load.”
Learn more?
Read about Empirix FirstACT at http://www.empirix.com/FirstACT. FirstACT 2.2 is
available for free evaluation immediately at http://www.empirix.com/TryFirstACT.
Pricing starts at $4,995. For additional information, call (781) 993-8500.
Simon,
I will admit, I almost never use SQL Developer. I have been a long time Toad user, but for this tool, I fumbled around a bit and got everything up and running quickly.
That said, I tried the new GeoRaptor tool using this tutorial (which I think is close enough to get the gist): http://sourceforge.net/apps/mediawiki/georaptor/index.php?title=A_Gentle_Introduction:_Create_Table,_Metadata_Registration,_Indexing_and_Mapping
As I stumble around it, I'll try and leave some feedback, and probably ask some rather stupid questions.
Thanks for the effort,
Bryan -
Build model - view test result - I have no ROC tab
Dear all,
When I build e.g. a classification SVM (Lin. Reg., Naive Bayes) model and view the test results, in that window I have only these tabs: Performance, Performance Matrix, Lift, Profit. I have no ROC tab. Do you know why? I had this problem before with SQL Developer 3.2.09.30_x64, and I still have it now with SQL Developer 3.2.20.09.87_x64. When I try to tune the SVM algorithm (so not leaving it automatic), there are only these tabs: Cost, Benefit, Lift, Profit. But no ROC tab :(((
When I use the old Oracle Data Miner software from 2011, version 11.1.0.3.0 (build 11705), connecting to the same database server (using the same account), and build a classification SVM model, I do get a ROC curve.
Can anyone help me to solve this misterious problem?
Than you!!!We don't have "preferred target value" during model building in the new data miner. However, you can use the Transform node to transform your target into 2 classes (preferred target value and others). You then use the output from the Transform node as input source for your model build.
Here is a process to transform your target into 2 classes:
- Create a Transform node
- In Transform node, select the target column, click "Add Transformation" icon on the toolbar
- In the Add Transform dialog, select "Custom" Binning Type, click "Generate Default Bins" button (accept default settings)
- In the "Custom bin values" listbox, remove the non-preferred target values (select the values and click the "Remove Transformation" icon on the toolbar)
- Now, you should have one preferred target value in the "Custom bin values" listbox, click OK to finish
You can now connect the Transform node to a Build node. In the Build node, select the transformed target (it should have the "_BIN" suffix in the name) as the Target for model build.
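The recode those steps produce can be sketched in plain Java; this is only an illustration of the two-class mapping, not Data Miner's actual output (the "OTHER" label and method names here are hypothetical):

```java
import java.util.List;

public class TargetBinner {
    // Hypothetical two-class recode: the preferred value keeps its label,
    // every other target value collapses into a single "OTHER" bin.
    static String binTarget(String target, String preferredValue) {
        return target.equals(preferredValue) ? preferredValue : "OTHER";
    }

    public static void main(String[] args) {
        List<String> targets = List.of("GOLD", "SILVER", "BRONZE", "GOLD");
        for (String t : targets) {
            System.out.println(t + " -> " + binTarget(t, "GOLD"));
        }
    }
}
```

A binary target like this is also what makes a ROC curve well-defined, since ROC needs one positive class.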
Hope this helps!
Denny -
RMS performance testing using HP Loadrunner
Hi,
We are currently planning how to do our performance testing of Oracle Retail. We plan to use HP LoadRunner with different virtual users for Java, GUI, web services and database requests. Has anyone here done performance testing of RMS using HP LoadRunner, and what kind of setup did you use?
Any tips would be greatly appreciated.
Best regards,
Gustav
Hi Gustav,
How is your performance testing of Oracle Retail going? Did you get good results?
I need to start an RMS/RPM performance testing project and I would like to know how to implement an appropriate structure. Any information about servers, protocols, and tools used to simulate a real production environment would be much appreciated.
Thanks & Regards,
Roberto -
Hi, I have a customer who wants to performance test OID.
Their actual installed data will be 600,000 users; however, they want to query using a sample of only 10-20 different usernames. My question is: will caching within the database and/or LDAP server make the results erroneous?
Regards
Kevin
Kevin,
what do you mean by '.. make the results erroneous'? If you're talking about a performance test you want to achieve the best possible result, right? So why don't you want to use either the DB cache or the OID server cache to achieve maximum performance?
What is the use case scenario that you only want to have a very small subset of entries to be used?
Please take a look at Tuning Considerations for the Directory in http://download-west.oracle.com/docs/cd/B14099_14/idmanage.1012/b14082/tuning.htm#i1004959 for some details.
You might want to take a look at the http://www.mindcraft.com benchmark to get some other info.
regards,
--Olaf -
I was wondering what program is best for getting precise info on the performance of my system, and also for getting true reads of, say, how fast my RAM is really working or how much cache my HDD really has.
I'm starting to wonder if this kind of general posting is allowed in this forum; I post here because it's the place for the mobo I have.
emilio
Hi,
You've got SiSoft Sandra, but she lies a lot.
Then you've got PerformanceTest, which runs a test on the entire computer:
gfx, hard drive, CD, memory and so on, and the results can be saved to a file.
If you install that game I can do that too and save the result in a file, so you have something to compare with.
I believe this is a question that should be in aus.
bye -
Test1 comprises three tests:
1. Best Effort Test: -provides background information.
Download Speed
11747 Kbps
Max Achievable Speed
Download speed achieved during the test was - 11747 Kbps
For your connection, the acceptable range of speeds is 4000-21000 Kbps.
Additional Information:
Your DSL Connection Rate: 13086 Kbps (DOWN-STREAM), 1152 Kbps (UP-STREAM)
IP Profile for your line is - 9400 Kbps
The throughput of Best Efforts (BE) classes achieved during the test is - 14.76:22.75:62.5 (SBE:NBE:PBE)
These figures represent the ratio while simultaneously passing Sub BE, Normal BE and Priority BE marked traffic.
The results of this test will vary depending on the way your ISP has decided to use these traffic classes.
2. Assured Rate Test: -provides background information.
Download Speed
9526 Kbps
Max Achievable Speed
Download speed achieved during the test was - 9526 Kbps
For your connection, the acceptable range of speeds is 1536-1600 Kbps.
Additional Information:
Assured Rate IP profile on your line is - 1600 Kbps
3. Upstream Test: -provides background information.
Upload Speed
954 Kbps
Max Achievable Speed
Upload speed achieved during the test was - 954 Kbps
Additional Information:
Upstream Rate IP profile on your line is - 1152 Kbps
We were unable to identify any performance problem with your service at this time.
It is possible that any problem you are currently, or had previously experienced may have been caused by traffic congestion on the Internet or by the server you were accessing responding slowly.
If you continue to encounter a problem with a specific server, please contact the administrator of that server in the first instance.
Please visit the FAQ section if you are unable to understand the test results. -
How to have continuous performance testing during the development phase?
I understand that for corporate projects there are always requirements like roughly how long a certain process can take.
Is there any rough guideline as to how much time a certain process should take?
And is there any way I can have something like JMeter constantly monitoring performance as I develop?
It could go down to the method level, but should also be able to show the total time taken for a certain module or action, etc.
I think it is something like continuous integration with CruiseControl, but for continuous performance evaluation.
Any advice, anyone?
Just a thought: how useful would continuous performance testing be? First off, I wouldn't have the main build include performance tests. What if the build fails on performance? It isn't necessarily something you'll fix quickly, so you could be stuck with a broken build for quite some time, which means either your devs won't be committing code, or they'll be committing code on a broken build, which kind of negates the point of CI. So you'd have a nightly build for performance, or something. Then what? Someone comes in in the morning, sees the performance build failed, and fixes it? Hmmm, maybe your corporate culture is different, but we've got a nightly metrics build that sits broken for weeks on end before someone looks at it. As long as the master builds are OK, nobody cares. Given that performance problems might well take several weeks of dedicated time to fix, I reckon they're far more likely to be fixed as a result of failing acceptance tests, rather than the CI environment reporting them.
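If you do want rough method-level numbers while developing, a minimal timing wrapper is easy to sketch. This is only an illustration (the class and label names are made up); a profiler or JMeter will give you far richer data:

```java
import java.util.function.Supplier;

public class MethodTimer {
    // Runs the supplied body once and prints the elapsed wall-clock time.
    static <T> T timed(String label, Supplier<T> body) {
        long start = System.nanoTime();
        T result = body.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + " took " + elapsedMs + " ms");
        return result;
    }

    public static void main(String[] args) {
        long sum = timed("sumLoop", () -> {
            long s = 0;
            for (int i = 0; i < 1_000_000; i++) s += i;
            return s;
        });
        System.out.println("sum = " + sum);  // 499999500000
    }
}
```

Note that single-shot wall-clock timings like this are noisy on the JVM (JIT warm-up, GC), which is another reason they belong in a separate run rather than a gating CI build.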
Just my opinions, of course -
Object performance - need details of Oracle's internal performance testing
hi Geoff,
from your previous answer:
From Oracle's comprehensive internal performance testing,
the object implementation of an application consistently matches
the relational implementation.
For some operations, using objects performed better. Can you tell us more about the results of that testing?
We need a comparison between the two implementations,
in order to know in which cases we should use the object implementation
and in which cases we should not.
thanks
Ray
Ray,
Before I name some cases where objects perform better, I still
want to emphasize that your application object model comes
first. Here are the cases:
1. If you have nested objects (e.g., customer with address
attribute), querying the nested objects would work faster than
relational access.
2. If you have containment objects (e.g., customer with VARRAY
of phone numbers), querying these contained objects would also
work faster than relational.
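As a loose illustration of why case 1 can win (this is plain Java modelling the two shapes, not Oracle's actual object storage), fetching a nested attribute is a single dereference, while the relational shape needs a second lookup - the join:

```java
import java.util.Map;

public class NestedVsRelational {
    record Address(String city) {}
    record Customer(String name, Address address) {}  // nested object shape

    public static void main(String[] args) {
        // Object shape: the address travels inside the customer.
        Customer c = new Customer("Ray", new Address("Jakarta"));
        String cityNested = c.address().city();        // one dereference

        // Relational shape: the customer row holds a key into an address table.
        Map<String, String> customerToAddrId = Map.of("Ray", "A1");
        Map<String, String> addrIdToCity = Map.of("A1", "Jakarta");
        String cityJoined = addrIdToCity.get(customerToAddrId.get("Ray"));  // the "join"

        System.out.println(cityNested + " == " + cityJoined);
    }
}
```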
What is your application? What does your object model look like?
The more specific your question is, the better I can answer it.
Regards,
Geoff
-
Hi
When I do the performance test (further diagnostics) at http://speedtest.btwholesale.com/ I get the following:
Your speed test has completed and the results are shown above, however during the test an error occurred while trying to retrieve additional details regarding your service. As a result we are unable to determine if the speed you received during the test is acceptable for your service. Please re-run the test if you require this additional information.
I'm using an HH5 (which seems to drop the connection at times) and Infinity Option 1.
Any ideas?
Thanks
I think the extra step of looking up the profile is a lookup to some management database. It does seem to go wrong for people sometimes, usually just for a few hours or so, but sometimes for extended periods.
As far as I know (but ???) it doesn't have anything to do with the equipment in use, and there is not much you can do about it. At least with an HH5 you can look at the sync speed in your stats; the profile should be just a fraction below the sync speed (96.79%?).
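Taking that 96.79% figure at face value (it is only the poster's own estimate, not a documented BT constant), the expected IP profile for a given sync speed is just sync × 0.9679:

```java
public class ProfileEstimate {
    // The IP profile is said to sit at roughly 96.79% of the DSL sync speed.
    static long expectedProfileKbps(long syncKbps) {
        return Math.round(syncKbps * 0.9679);
    }

    public static void main(String[] args) {
        // e.g. a 40000 Kbps sync would give an expected profile of about 38716 Kbps.
        System.out.println(expectedProfileKbps(40000));
    }
}
```

If the profile in your stats sits well below that estimate, it is worth querying; a small gap is normal.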