Performance Testing While Caching?
Fellas,
I am doing performance optimization tasks on an Oracle 10g R2 database running on Red Hat Linux.
The problem is that whenever I run a query twice or more, Oracle caches it, and I can no longer see the delays in execution times.
I tried to clear the cache (this is a dev environment), but query execution times behaved as if the execution plans were still cached.
Any insight, please?
Charlov wrote:
Any insight, please?
If you are doing performance optimization, why do you not want caching? Don't you want to optimize access to the data, whether it is cached or not? User-perceived execution time is a reason to investigate performance, but it is difficult to map a development execution time to real-world performance. That's why people say things like "minimize consistent gets" or "concentrate on logical I/O".
So if you are investigating how to maximize getting data from a disk to the SGA, because you have infrequent unique queries, do that. Multiblock reads or not bothering with the SGA may be your friend. If you have data that is getting flushed out of the SGA even though it is being accessed moderately frequently, check the SGA advisor and consider the KEEP pool. If you don't know where the caching is occurring - it could be the SGA, OS user buffers, controller cache, SAN cache - you need to either figure it out or ignore it.
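As a hedged sketch of the KEEP-pool suggestion above (the table name and cache size are assumptions; check the SGA advisor before resizing anything):

```sql
-- Reserve a separate KEEP buffer pool (size is an assumption):
ALTER SYSTEM SET db_keep_cache_size = 256M;

-- Pin a moderately-hot table's blocks in it (hypothetical table name):
ALTER TABLE hot_lookup STORAGE (BUFFER_POOL KEEP);
```

Blocks of a table assigned to the KEEP pool are cached separately from the default pool, so they are not flushed out by large scans of other objects.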
Perhaps if you told us how exactly you are "doing performance optimization tasks" we could give better advice.
Oracle purposefully reuses execution plans; this avoids the much worse performance problems of hard parsing to create new ones. There are situations where this is bad (google the bind-peeking problem). If you want a new execution plan, give Oracle a different query text. Comments are useful for that, as well as being something to look for in the SQL area.
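A minimal sketch of both points: a comment changes the query text, so Oracle must hard parse a new cursor, and in a dev environment the caches can be flushed explicitly. The table name and comment text below are assumptions, and the ALTER SYSTEM commands are for dev/test only:

```sql
-- Any change to the query text, even a comment, produces a new cursor
-- and a fresh hard parse (table name is hypothetical):
SELECT /* perf_run_42 */ * FROM orders WHERE customer_id = 100;

-- Dev/test only: throw away all cached cursors and plans.
ALTER SYSTEM FLUSH SHARED_POOL;

-- 10g+, dev/test only: empty the buffer cache so blocks are re-read from disk.
ALTER SYSTEM FLUSH BUFFER_CACHE;
```

Note that even after flushing the buffer cache, blocks may still be cached below Oracle (OS buffers, controller, SAN), which is exactly the earlier point about figuring out where the caching occurs.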
Similar Messages
-
Performance issue while opening the report
HI,
I am working on BO XI R3.1. There is a performance issue while opening the report on the BO Solaris server, but on the Windows server it is comparatively fast.
We have a few reports which contain 5 fixed prompts and 7 optional prompts.
Out of the 5 fixed prompts, 3 are static (containing only 3-4 records each), coming from a materialized view.
We have already used many things to improve report performance, such as:
1) Index awareness
2) Aggregate awareness
3) Array fetch size - 250
4) Array bind time - 32767
5) Login timeout - 600
The issue is that, before refresh, just opening the report takes 1.30 min on the BO Solaris server, but the same report takes 45 sec on the BO Windows server. Even when we import it onto other BO Solaris servers, it takes the same time as on the old Solaris server (1.30 min).
When we close the trace on the Solaris server, it takes 1.15 sec. In the initial phase it is not hitting the database much, so why is it taking that much time while opening the report?
Could you please guide us as to where exactly the problem is and how we can improve performance for opening the report? In case the problem is related to the Solaris server, what would it be and how can we rectify it?
In case any further input is required, feel free to ask me.
Hi Kumar,
If this is happening with all the reports, then this issue seems to be due to the firewall or security settings of the Solaris OS.
Please try lowering the security level in Solaris and test for the issue.
Regards,
Chaitanya Deshpande -
Log file sync top event during performance test - avg 36 ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top, with an avg wait of 36 ms, which I feel is too high.
{code}
                                                   Avg
                                                  wait   % DB
Event                     Waits     Time(s)       (ms)   time  Wait Class
log file sync             208,327     7,406         36   46.6  Commit
direct path write         646,833     3,604          6   22.7  User I/O
DB CPU                                1,599               10.1
direct path read temp   1,321,596       619          0    3.9  User I/O
log buffer space            4,161       558        134    3.5  Configuration
{code}
Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" has such a slow response.
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
Background Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time
(ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6
35.6 46.57
CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7
5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4
12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3
N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 1
84.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7
0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6
0.0 0.41
{code}
To sum it up:
1. Why is the IO response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the host has only 4 CPUs.
{code}
select *from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{code}
                                                  Avg
                                                 wait   % DB
Event                    Waits     Time(s)       (ms)   time  Wait Class
log file sync            73,761      6,740         91   80.0  Commit
log buffer space          3,581        541        151    6.4  Configuration
DB CPU                                 348                4.1
direct path write       238,962        241          1    2.9  User I/O
direct path read temp   487,874        174          0    2.1  User I/O
{code}
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The log file sync is a result of commit --> you are committing too often, maybe even for every individual record.
Thanks for the explanation. Actually my question is WHY is it so slow (avg wait of 91 ms)?
3. Your IO subsystem hosting the online redo log files can be a limiting factor.
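The commit-frequency point can be sketched in PL/SQL; the table name, row count, and batch size below are assumptions:

```sql
-- Hypothetical sketch: instead of committing every row (one "log file sync"
-- wait per commit), commit once per batch.
BEGIN
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO load_test_t (id) VALUES (i);
    IF MOD(i, 1000) = 0 THEN
      COMMIT;   -- 100 commits instead of 100,000
    END IF;
  END LOOP;
  COMMIT;       -- pick up the remainder
END;
/
```

Each COMMIT makes the session wait for LGWR to flush redo to disk, so reducing the commit count directly reduces the number of "log file sync" waits; the per-wait latency, however, is governed by the redo log IO path.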
We don't know anything about your online redo log configuration
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
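To see whether raw redo volume is pressuring LGWR during the test window, a common follow-up is to count log switches per hour (a sketch; what counts as "too many" is a judgment call):

```sql
-- Log switches per hour; many switches per hour with 500 MB logs suggests
-- heavy redo generation during the test window.
SELECT TRUNC(first_time, 'HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
GROUP  BY TRUNC(first_time, 'HH24')
ORDER  BY hour;
```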
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select *from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM -
ActiveX Control recording but not playing back in a VS 2012 Web Performance Test
I am testing an application that loads an ActiveX control for entering some login information. While recording, this control works fine; I am able to enter information and it is recorded. However, on playback, the playback window shows the error "An add-on for this website failed to run. Check the security settings in Internet Options for potential conflicts."
Window 7 OS 64 bit
IE 8 recorded on 32 bit version
I see no obvious security conflicts. This runs fine when navigating through manually and recording. It is only during playback that this error occurs.
Hi IndyJason,
Thank you for posting in MSDN forum.
As you said that you could not playback the Active X control successfully in web performance test. I know that the ActiveX controls in your Web application will fall into three categories, depending on how they work at the HTTP level.
Reference:
https://msdn.microsoft.com/en-us/library/ms404678%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
I found that this confusion may come from the browser preview in the Web test results viewer. The Web Performance Test Results Viewer does not allow scripts or ActiveX controls to run, because the Web performance test engine does not run them, for security reasons.
For more information, please refer to the following blog (Web Tests Can Succeed Even Though It Appears They Failed):
http://blogs.msdn.com/edglas/archive/2010/03/24/web-test-authoring-and-debugging-techniques-for-visual-studio-2010.aspx
Best Regards,
-
LabVIEW Embedded - Performance Testing - Different Platforms
Hi all,
I've done some performance testing of LabVIEW on various microcontroller development boards (LabVIEW Embedded for ARM) as well as on a cRIO 9122 Real-time Controller (LabVIEW Real-time) and a Dell Optiplex 790 (LabVIEW desktop). You may find the results interesting. The full report is attached and the final page of the report is reproduced below.
Test Summary
{code}
Platform       µC MIPS   Single Loop       Single Loop   Dual Loop         Dual Loop
                         Effective MIPS    Efficiency    Effective MIPS    Efficiency
MCB2300             65         31.8            49%             4.1              6%
LM3S8962            60         50.0            83%             9.5             16%
LPC1788            120         80.9            56%            12.0              8%
cRIO 9122          760        152.4            20%           223.0             29%
Optiplex 790      6114       5533.7            91%          5655.0             92%
{code}
Analysis
For microcontrollers, single loop programming can retain almost 100% of the processing power. Such programming would require that all I/O is non-blocking as well as use of interrupts. Multiple loop programming is not recommended, except for simple applications running at loop rates less than 200 Hz, since the vast majority of the processing power is taken by LabVIEW/OS overhead.
For cRIO, there is much more processing power available; however, approximately 70 to 80% of it is lost to LabVIEW/OS overhead. The end result is that what can be achieved is limited.
For the Desktop, we get the best of both worlds; extraordinary processing power and high efficiency.
Speculation on why LabVIEW Embedded for ARM and LabVIEW Real-time performance is so poor puts the blame on excessive context switching. Each context switch typically takes 150 to 200 machine cycles, and these appear to be inserted for each loop iteration. This means that tight loops (fast, with not much computation) consume enormous amounts of processing power. If this is the case, an option to force a context switch only every Nth loop iteration would be useful.
Conclusion
{code}
                              LabVIEW Embedded       LabVIEW Real-time      LabVIEW Desktop
                              for ARM                for cRIO/sbRIO         for Windows
Development Environment Cost  High                   Reasonable             Reasonable
Execution Platform Cost       Very low               Very High / High       Low
Processing Power              Low (current Tier 1)   Medium                 Enormous
LabVIEW/OS efficiency         Low                    Low                    High
OEM friendly                  Yes+                   No                     Yes
{code}
LabVIEW Desktop has many attractive features. This explains why LabVIEW Desktop is so successful and accounts for the vast majority of National Instruments' software sales (and consequently results in the vast majority of hardware sales). It is National Instruments' flagship product and is the precursor to the other LabVIEW offerings. The execution platform is powerful, available in various form factors from various sources, and is competitively priced.
LabVIEW Real-time on a cRIO/sb-RIO is a lot less attractive. To make this platform attractive the execution platform cost needs to be vastly decreased while increasing the raw processing power. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. A single plug-in board no larger than 75 x 50 mm (3” x 2”) with a single unit price under $180 would certainly make the sb-RIO a viable execution platform. The peripheral connectors would not be part of the board and would be accessible via a connector. A developer mother board could house the various connectors, but these are not needed when incorporated into the final product. The recently released Xilinx Zynq would be a great chip to use ($15 in volume, 2 x ARM Cortex A9 at 800 MHz (4,000 MIPS), FPGA fabric and lots more).
LabVIEW Embedded for ARM is very OEM friendly, with development boards that are open source with circuit diagrams available. To make this platform attractive, new, more capable Tier 1 boards will need to be introduced, mainly to counter the large LabVIEW/OS overhead. As before, these target boards would come from microcontroller manufacturers, thereby making them inexpensive and open source. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. What is required now is another Tier 1 board (e.g. DK-LM3S9D96 (ARM Cortex-M3, 80 MHz/96 MIPS)). Further Tier 1 boards should be targeted every two years (e.g. BeagleBoard-xM (ARM Cortex-A8, 1000 MHz/2000 MIPS)) to keep LabVIEW Embedded for ARM relevant.
Attachments:
LabVIEW Embedded - Performance Testing - Different Platforms.pdf 307 KB
I've got to say though, it would really be good if NI could further develop the ARM embedded toolkit.
In the industry I'm in, and probably many others, control algorithm development and testing occurs in LabVIEW. If you have a good LV developer or team, you'll end up with fairly solid, stable, tested code. But what happens now, once the concept is validated, is that all this is thrown away and the C programmers create the embedded code that will go into the real product.
The development cycle starts from scratch.
It would be amazing if you could strip down that code and deploy it onto ARM and expect it not to be too inefficient. Development costs and time to market would go way down. BUT, especially in the industry I presently work in, the final product's COST is extremely important (these being consumer products: cheaper micro, cheaper product).
These concerns weigh HEAVILY. I didn't get a warm fuzzy about the ARM toolkit for my application. I'm sure it has its niches, but just imagine what could happen if some more work went into it to make it truly appealing to a wider market... -
[Ann] FirstACT 2.2 released for SOAP performance testing
Empirix Releases FirstACT 2.2 for Performance Testing of SOAP-based Web Services
FirstACT 2.2 is available for free evaluation immediately at http://www.empirix.com/TryFirstACT
Waltham, MA -- June 5, 2002 -- Empirix Inc., the leading provider of test and monitoring
solutions for Web, voice and network applications, today announced FirstACT™ 2.2,
the fifth release of the industry's first and most comprehensive automated performance
testing tool for Web Services.
As enterprise organizations are beginning to adopt Web Services, the types of Web
Services being developed and their testing needs is in a state of change. Major
software testing solution vendor, Empirix is committed to ensuring that organizations
developing enterprise software using Web Services can continue to verify the performance
of their enterprise as quickly and cost effectively as possible regardless of the
architecture they are built upon.
Working with organizations developing Web Services, we have observed several emerging
trends. First, organizations are tending to develop Web Services that transfer a
sizable amount of data within each transaction by passing in user-defined XML data
types as part of the SOAP request. As a result, they require a solution that automatically
generates SOAP requests using XML data types and allows them to be quickly customized.
Second, organizations require highly scalable test solutions. Many organizations
are using Web Services to exchange information between business partners and have
Service Level Agreements (SLAs) in place specifying guaranteed performance metrics.
Organizations need to performance test to these SLAs to avoid financial and business
penalties. Finally, many organizations just beginning to use automated testing tools
for Web Services have already made significant investments in making SOAP scripts
by hand. They would like to import SOAP requests into an automated testing tool
for regression testing.
Empirix FirstACT 2.2 meets or exceeds the testing needs of these emerging trends
in Web Services testing by offering the following new functionality:
1. Automatic and customizable test script generation for XML data types – FirstACT
2.2 will generate complete test scripts and allow the user to graphically customize
test data without requiring programming. FirstACT now includes a simple-to-use XML
editor for data entry or more advanced SOAP request customization.
2. Scalability Guarantee – FirstACT 2.2 has been designed to be highly scalable to
performance test Web Services. Customers using FirstACT today regularly simulate
between several hundred to several thousand users. Empirix will guarantee to
performance test the numbers of users an organization needs to test to meet its business
needs.
3. Importing Existing Test Scripts – FirstACT 2.2 can now import existing SOAP requests
directly into the tool on a user-by-user basis. As a result, some simulated users can
use imported SOAP requests, while others can be generated automatically by FirstACT.
Web Services facilitates the easy exchange of business-critical data and information
across heterogeneous network systems. Gartner estimates that 75% of all businesses
with more than $100 million in sales will have begun to develop Web Services applications
or will have deployed a production system using Web Services technology by the end
of 2002. As part of this move to Web Services, "vendors are moving forward with
the technology and architecture elements underlying a Web Services application model,"
Gartner reports. While this model holds exciting potential, the added protocol layers
necessary to implement it can have a serious impact on application performance, causing
delays in development and in the retrieval of information for end users.
"Today Web Services play an increasingly prominent but changing role in the success
of enterprise software projects, but they can only deliver on their promise if they
perform reliably," said Steven Kolak, FirstACT product manager at Empirix. "With
its graphical user interface and extensive test-case generation capability, FirstACT
is the first Web Services testing tool that can be used by software developers or
QA test engineers. FirstACT tests the performance and functionality of Web Services
whether they are built upon J2EE, .NET, or other technologies. FirstACT 2.2 provides
the most comprehensive Web Services testing solution that meets or exceeds the changing
demands of organizations testing Web Services for performance, functionality, and
functionality under load.”
Learn more?
Read about Empirix FirstACT at http://www.empirix.com/FirstACT. FirstACT 2.2 is
available for free evaluation immediately at http://www.empirix.com/TryFirstACT.
Pricing starts at $4,995. For additional information, call (781) 993-8500.
Simon,
I will admit, I almost never use SQL Developer. I have been a long time Toad user, but for this tool, I fumbled around a bit and got everything up and running quickly.
That said, I tried the new GeoRaptor tool using this tutorial (which I think is close enough to get the gist): http://sourceforge.net/apps/mediawiki/georaptor/index.php?title=A_Gentle_Introduction:_Create_Table,_Metadata_Registration,_Indexing_and_Mapping
As I stumble around it, I'll try and leave some feedback, and probably ask some rather stupid questions.
Thanks for the effort,
Bryan -
Perform Test in program ZTEST (USEREXIT)
Hi,
I am calling the FORM/ENDFORM Test from program ZTEST in a user exit:
FORM USEREXIT_MOVE_FIELD_TO_VBAK.
Perform Test in program ZTEST.
UPDATE TAVRVC SET NAME = V_MNO.
ENDFORM.
But while testing/creating (VA01) it leads to a short dump/syntax error.
The syntax errors are about variables: I declared V_MNO as a variable of type C in the ZTEST program.
How do I avoid this syntax error, and how should I declare variables in the ZTEST program?
Thanks.
Rao.
It seems the variable V_MNO is declared in the ZTEST program, so its scope is within that program only. If you want the value conveyed back to the user exit, from which you are calling the program, use parameters for the FORM TEST with a CHANGING clause. You would also need to declare V_MNO in the user exit with the same type as it is declared with in ZTEST.
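A hedged ABAP sketch of that suggestion (the names and the assigned value are assumptions, not the original code):

```abap
* In program ZTEST: return the value via a CHANGING parameter
* instead of relying on a global variable.
FORM test CHANGING cv_mno TYPE c.
  cv_mno = 'X'.   " whatever value the form computes
ENDFORM.

* In the user exit: declare a local of the same type and receive the value.
FORM userexit_move_field_to_vbak.
  DATA lv_mno TYPE c.
  PERFORM test IN PROGRAM ztest CHANGING lv_mno.
  " ... use lv_mno here ...
ENDFORM.
```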
I hope this is clear. -
Performance test from unit tests
Hi;
I tried to apply the TDD principle to performance testing of a J2EE application, starting from unit tests.
I thus used the JUnitPerf tool, but regrettably with this tool I must modify my unit test to update the test sequence, and it offers no management of waiting time between calls of a unit test.
Q: Is there a more complete, stable, and maintained tool which allows writing perf tests from unit tests?
Regards;Hi John,
The testing will depend on the scenarios configured in SAP, which are to be tested. As scenarios vary from customer to customer, it is not possible to send a document with screenshots, as these are confidential documents. However, the write-up below should give some clues.
<b>Different Type of Testing are as under</b>:
Unit testing is done in bits and pieces. For example, in the SD standard order cycle we have 1 - create order, then 2 - delivery, then 3 - transfer order, then 4 - PGI, and then 5 - invoice. So we test 1, 2, 3, 4, and 5 separately, one by one, using test cases and test data. We do not check/test any integration between order and delivery, delivery and TO, TO and PGI, or PGI and invoice.
Whereas in system testing you test the full cycle with its integration, using test cases which give a full cyclic test from order to invoice.
In security testing you test different roles and functionalities, then check and sign off.
Performance testing refers to how much time it takes to perform some action, e.g. PGI. If the BPP definition says 5 seconds for PGI, then it should be 5 and not 6 seconds. Usually it is done using software.
Regression testing refers to a test which verifies that some new configuration does not adversely impact existing functionality. This is done in each phase of testing.
User Acceptance Testing: refers to customer testing. The UAT will be performed through the execution of predefined business scenarios, which combine various business processes. The user test model comprises a subset of the system integration test cases.
Regards,
Rajesh Banka
Reward suitable points.
-
Hi, I have a customer who wants to performance test OID.
Their actual installed data will be 600,000 users; however, they want to query using only a sample of 10-20 different usernames. My question is: will caching within the database and/or LDAP make the results erroneous?
Regards
Kevin
Kevin,
what do you mean by '.. make the results erroneous'? If you're talking about a performance test you want to achieve the best possible result, right? So why don't you want to use either the DB cache or the OID server cache to achieve maximum performance?
What is the use case scenario that you only want to have a very small subset of entries to be used?
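One way to avoid an artificially cached working set is to draw a fresh random sample of usernames for every run. A shell sketch, where users.txt stands in for an export of real usernames from the directory (generated here as demo data):

```shell
# Demo data standing in for usernames exported from the directory.
seq -f "user%g" 1 60000 > users.txt

# Draw a new random 20-name sample for each load-test run, so repeated
# runs do not all hit the same tiny, fully-cached set of entries.
shuf users.txt | head -n 20 > sample.txt
wc -l < sample.txt   # → 20
```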
Please take a look at Tuning Considerations for the Directory in http://download-west.oracle.com/docs/cd/B14099_14/idmanage.1012/b14082/tuning.htm#i1004959 for some details.
You might want to take a look at the http://www.mindcraft.com benchmark to get some other infos.
regards,
--Olaf -
I was wondering what program is best for getting precise info on the performance of my system, and also for getting true readings of, say, how fast my RAM is really working or how much cache my HDD really has.
I'm starting to wonder if this kind of general posting is allowed in this forum; I post here because it's the place for the mobo I have.
emilio
Hi,
you've got SiSoft Sandra, but she lies a lot.
Then you've got PerformanceTest, which tests the entire computer:
graphics, hard drive, CD, memory and so on, and the results can be saved to a file.
If you install that game, I can do that too and save the result in a file so you have something to compare with.
I believe this is a question that should be in aus.
bye -
EA6500 Wireless Issues Performance Test and MTU setting
Just changed from DSL to Comcast Internet with a 25 Mbps download service. Purchased the Linksys cable modem to match the EA6500. Last night, while testing with some Netflix HD streaming and downloading a DirecTV HD movie, I noticed it was not streaming HD; in fact it was as bad as the slow DSL.
So I tried the Smart Wi-Fi performance test several times and downloads were terrible (3.0 Mbps); however, when testing with other testers (Speakeasy, Speedtest), performance was above 25 Mbps. I called support and they had no answer.
What is wrong with the Smart Wi-Fi speed test? Why was I not getting HD streaming speeds? I should have plenty of bandwidth.
Also, I can't seem to find the right MTU setting. Every packet size I ping works, so I cannot get a fragmentation error.
I thought this router was advertised as a HD video router.
I have just about everything disabled including Media Priority.
Comments, Ideas, help,
Thank you
Hi!
To get the optimum HD streaming performance, you can try setting the following on the router's page :
- disable or turn off WMM Support under Media Prioritization.
- personalize the wireless settings, set different names on the 2.4 and 5 GHz networks.
Let the streaming devices connect to the 5Ghz network. -
A quick counting loop for a performance test
Hi all. I'm writing this small performance test using the add method of an arraylist. How would I wrap a for (or while) loop around this code so that it runs 1000 times, averages the total times of all the tests, and then prints the average out? Thanks.
startTime = System.nanoTime();
for (int i = 0; i < last.length; i++) {
    arrayListLast.add(last[i]);
}
stopTime = System.nanoTime();
totalTime = stopTime - startTime;
What part are you having trouble with?
Do you not know how to repeat something a certain number of times?
Do you not know how to compute an average?
Do you not know how to print something out? -
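A minimal sketch answering all three questions at once; the array-of-adds workload, names, and run counts below are assumptions, not the original poster's exact code:

```java
import java.util.ArrayList;
import java.util.List;

public class AddBenchmark {

    // Times a single pass of adding n elements to a fresh ArrayList, in ns.
    static long timeAdds(int n) {
        List<Integer> list = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            list.add(i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        final int runs = 1000;       // repeat the test 1000 times
        final int elements = 10_000; // elements added per run
        long total = 0;
        for (int r = 0; r < runs; r++) {
            total += timeAdds(elements);   // accumulate total time
        }
        double averageNs = (double) total / runs;  // average over all runs
        System.out.println("Average time per run: " + averageNs + " ns");
    }
}
```

Note that nano-scale micro-benchmarks like this are distorted by JIT warm-up and GC; discarding the first few hundred runs before averaging gives steadier numbers.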
Hi
When i do the performance test, further diagnostics at http://speedtest.btwholesale.com/ I get the following
Your speed test has completed and the results are shown above, however during the test an error occurred while trying to retrieve additional details regarding your service. As a result we are unable to determine if the speed you received during the test is acceptable for your service. Please re-run the test if you require this additional information.
I'm using a HH5 (which seems to drop connection at times) and Infinity option 1.
Any ideas?
Thanks
I think the extra step of looking up the profile is a lookup to some management database. It does seem to go wrong for people sometimes, usually just for a few hours or so, but sometimes for extended periods.
As far as I know (but ???) it doesn't have anything to do with the equipment in use; and there is not much you can do about it. At least with an HH5 you can look at the sync speed in your stats; the profile should be just a fraction below the sync speed. (96.79%?) -
Looking for a Performance testing tool
We want to stress test our multi-server Adobe Connect deployment before putting it into production. The HP tool was not successful. Any recommendations on tools and methods?
Cameron,
Thanks for your input! Sounds like you are probably using these
products.
I saw the PerformAssure demo. Their GUI is very slick!! PerformAssure
does some cool stuff like comparing EJB metrics with system metrics,
and it was able to show me the EJB method that was causing a problem
with a PetStore application.
I heard about this new tool from a company called Performant
(http://www.performant.com). I am signed up for a demo with these
folks. Will let you all know what I find.
I looked at the Wily site and didn't pursue them, as their tool seems
to be meant for deployment in the production environment. Our application
is not live yet, but we want to make sure it can handle a good load ;)
Thanks!
--N
"Cameron Purdy" <[email protected]> wrote in message news:<[email protected]>...
I haven't had a chance yet to look at Sitraka's software, but Precise
Indepth/J2EE and Wily Introscope are both pretty popular with Weblogic, and
I think they both do a good job of what they do.
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Neil" <[email protected]> wrote in message
news:[email protected]..
Hello All!
I need help! I am trying to find out which of the tools is better
suited for me to be able to do performance testing of an application
under load.
I am looking for a tool that can provide detailed application
performance metrics. It seems like both Precise Insight and
Sitraka's PerformAssure can do this job for me.
If any of you have had experience using performance tools such as
Precise Insight or Sitraka's PerformAssure, I would appreciate any
input on your experience with these tools.
Also if you know of any other tools that might provide similar
information such as the above tools, that would be really helpful.
Thanks a bunch!
Neil. -
Problem download secure files performance test kit
Hi, I'm trying to download the "secure files performance test kit" from here: http://www.oracle.com/technetwork/database/sftestkit-099298.html
It should just be a matter of clicking the "Download" link, but this is not working.
Instead I'm redirected to http://www.oracle.com/technetwork/indexes/products/index.html
where I can't find what I'm looking for.
Please help,
regards,
Harald
SecureFiles Performance Test Kit
The SecureFiles Performance Test Kit compares the performance of SecureFiles to older LOBs or BasicFiles. SecureFiles and LOB performance is measured using two of the popular database driver protocols, namely the JDBC OCI driver (Type II driver) and the JDBC Thin driver (pure Java, Type IV driver). In addition, the kit also compares performance under different caching and logging conditions.
Throughput is the metric used to measure performance for both reads and writes. The kit can be customized to run with different lob sizes, number of iterations, concurrent threads etc. The README file contains more information on setting up the kit and running the tests.
Below this description you can find the download link:
Download the SecureFiles Performance Test Kit.
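For context, the throughput metric the kit reports is just bytes moved per unit time. As a rough illustrative sketch (not code from the kit itself; the class and method names are made up), an MB/s figure could be computed like this:

```java
public class ThroughputCalc {
    // Converts a byte count and elapsed nanoseconds into throughput in MB/s.
    static double throughputMBps(long bytes, long elapsedNanos) {
        double seconds = elapsedNanos / 1_000_000_000.0;
        return (bytes / (1024.0 * 1024.0)) / seconds;
    }

    public static void main(String[] args) {
        // Example: 100 MB written in 2 seconds -> 50.0 MB/s
        System.out.println(throughputMBps(100L * 1024 * 1024, 2_000_000_000L));
    }
}
```

The kit would apply the same arithmetic to each LOB read or write pass under the different caching and logging settings.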