Performance Test - Massive data generation
I would like to generate a massive quantity of data in SAP ERP, to be extracted by SAP BW, in an effort to create a baseline volume of 1-2 TB of data for a performance test activity in BW.
We're investigating tools like Quest's DataFactory, which generates data directly in the Oracle tables.
Does anyone have experience with such activities or scenarios?
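For what it's worth, regardless of the tool chosen, the core job is streaming out large volumes of plausible rows for a bulk loader. A minimal Python sketch of that idea is below; the column names and value ranges are invented for illustration, not real SAP fields:

```python
import csv
import random
import string

def random_row(row_id):
    """Build one synthetic row; the columns are invented stand-ins, not real SAP fields."""
    return [
        row_id,
        random.randint(1, 500),                                 # e.g. an org-unit key
        "".join(random.choices(string.ascii_uppercase, k=10)),  # a short text field
        round(random.uniform(1, 250000), 2),                    # an amount
    ]

def generate_csv(path, n_rows):
    """Stream n_rows random rows to a CSV file that a bulk loader can pick up."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "org_unit", "descr", "amount"])
        for i in range(1, n_rows + 1):
            writer.writerow(random_row(i))

generate_csv("testdata.csv", 1000)  # scale n_rows (and file count) up toward the TB range
```

Generating flat files and pushing them through the system's standard load path also exercises the extraction chain itself, which a tool writing directly into the Oracle tables would bypass.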
Hi,
While searching for another tool I found this one:
http://www.turbodata.ca/help/testdatageneratoroverview.htm
Claudius
Similar Messages
-
Flash SWF Performance Testing and Data
Looking for some recommendations on tools that can be used independently of the source (that is, just on the SWF) to check frames per second, total size, and how often the movie loops.
Needed for Flash banner ad quality assurance.
The need was solved by writing my own AIR app to read the SWF file header:
http://www.lonhosford.com/lonblog/2011/01/17/read-flash-swf-header-in-air-with-parsley-framework/
You can drag a swf file and see the header data. As well you can load a swf from the web.
Enjoy. -
Massive data input for employees Transaction
Hi experts!
I need to perform a massive data input for employees. I've tried with Tx PA71 - Fast Entry of Time Data - but it needs a document or a manual preselection.
It's necessary to select ranges of personnel numbers in order to perform a massive data input for employees, such as a salary benefit or bonus for a group of employees.
Do you know of such a transaction?
Thanks in advance!
Hi,
You could also use T-code SCAT/SCEM for mass upload or download.
You may use below method to create SCAT in ECC (This is not a Direct method)
1) Run transaction SCEM.
2) Enter a CATT name.
3) Click on Change; a prompt appears saying it doesn't exist and asking whether you want to create it. Click Yes.
4) Enter the TCODE to record.
5) Execute the TCODE and save data.
6) Back out. Click the "End and copy" button.
7) Double-click the TCD on the left side of the screen.
8) Press F5 or choose Field Inputs Variants.
9) Use the black down-arrow to step through the screens.
10) Double-click on fields to set variables. (SAP adds a leading & to each variable name.)
11) Save and back out.
12) Go to menu path Environment --> Extended CATT.
13) Click Change and change the type from (M) Manual Test Case to (C) CATT.
14) Assign a component.
15) Click Save. Back out.
16) Execute SCAT, choose Go to --> Variants --> Export Default, and save as a text file.
17) Edit the file with new test data.
18) Execute the CATT.
After step 6 you can also leave SCEM, run SCAT, and perform the parameterization there as usual.
Hope this helps.
Inputs for Neeraj. -
Show-stopper Connecting web performance test to remote DB2 server as data source
Hi All,
We are running TFS2010 on VS2010 Ultimate on 2003 server SP-2 with IE 8 for web applications.
We are working on conducting load tests of a web service. Previously we did that by connecting our web performance tests to a data source in MS SQL. Now we have a new requirement: changing our data source from MS SQL Server to DB2 for good. The DB2 server is in a separate network, and I am struggling with how to connect to it. To begin with, the firewall setup is being taken care of in a separate initiative. So I am a bit puzzled about how to connect to the DB2 server from Visual Studio. Any help would be highly appreciated.
Please let me know how we can accomplish this goal. Thanks a lot.
Thank you,
Riz
Thanks for Javaman's help.
Hi Riz,
Thank you for posting in the MSDN forum.
Actually the real issue in this thread is related to DB2.
Since it is third-party software, I'm sorry, but it is really outside the support range of the VS testing forum.
You could post this issue to the following forum instead, where you should get better support for it:
https://www.ibm.com/developerworks/community/forums/html/category?id=33333333-0000-0000-0000-000000000019
Best Regards,
-
Test data generation tool?
Hi folks:
Can anyone suggest a test data generation tool? We need to load up and stress-test a product we're building. We need a tool that can generate an INSERT script for all tables, with referentially intact data inserted in the right order so as not to violate constraints. I've used Quest's DataFactory before, and it's not very good. Is there a free or inexpensive (or at least reasonably priced) tool that anyone knows of?
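Not a tool recommendation, but the "right order" part of the problem can be handled separately from row generation: topologically sort the tables by their foreign-key dependencies and emit the INSERTs parent-first. A minimal Python sketch, using an invented four-table schema purely for illustration (graphlib is in the standard library from Python 3.9):

```python
from graphlib import TopologicalSorter

# Hypothetical schema: each table maps to the set of tables it references via FKs.
fk_deps = {
    "customer": set(),
    "product": set(),
    "orders": {"customer"},
    "order_line": {"orders", "product"},
}

def insert_order(deps):
    """Return table names ordered so every table comes after the tables it references."""
    return list(TopologicalSorter(deps).static_order())

for table in insert_order(fk_deps):
    print(f"-- emit INSERTs for {table} (its FK targets are already populated)")
```

In a real generator the fk_deps map would be read from the database's catalog views (e.g. ALL_CONSTRAINTS on Oracle) rather than hard-coded.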
Regards,
Dave
Hi,
While searching for another tool I found this one:
http://www.turbodata.ca/help/testdatageneratoroverview.htm
Claudius -
Very slow performance in every area after massive data load
Hi,
I'm new to Siebel. I had a call from a customer saying that virtually every aspect of the application (login, etc.) is slow after they did a massive data load of around 15 GB.
Could you please help point out the best practice for this kind of massive data loading exercise? All the table statistics are up to date.
Has anyone encountered this kind of problem before?
Hello,
Siebel CRM is a highly customizable customer relationship management solution. There are a number of customizations (scripting, workflow, web services, ...) and integrations (custom C++, Java, ERP systems, ...) that can cause Siebel performance issues.
Germain Monitoring v1.8.5 can help you clean up all your Siebel performance issues (starting 5 minutes after installation, which itself can take between 4 hours and 10 days depending on whether it is used against your Siebel dev/QA or production environment) and then monitor your Siebel production system at every layer of your infrastructure, at the level of Siebel user clicks and back-end transactions, and either solve or identify the root cause of Siebel performance issues, 24x7.
Germain Monitoring Software (currently version 1.8.5) helps Siebel customers 1) solve Siebel performance issues introduced by customizations faster, and 2) effectively solve Siebel performance issues before the business is impacted once Siebel is in production.
Customers like NetApp, J.M. Smucker, and Alltel/Verizon have saved hundreds of thousands of dollars using Germain Monitoring software.
Let us know whether you would like to discuss this further. Good luck with these issues,
Regards,
Yannick Germain
GERMAIN SOFTWARE LLC
Siebel Performance Software
21 Columbus Avenue, Suite 221
San Francisco, CA 94111, USA
Cell: +1-415-606-3420
Fax: +1-415-651-9683
[email protected]
http://www.germainsoftware.com -
Hello,
Below I provide complete code to reproduce the behavior I am observing. You can run it in tempdb or any other database; the choice is not important. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (obviously based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
1. Run script from #1 to #7. This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
2. Run test query (at the top of the script). Here are the execution statistics:
Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 5514 ms,
elapsed time = 1389 ms.
3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
4. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 828 ms,
elapsed time = 392 ms.
As you can see the query is clearly faster. Yay for columnstore indexes!.. But let's continue.
5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
6. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 8172 ms,
elapsed time = 3119 ms.
And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
I am not going to paste here execution plans or the detailed properties for each of the operators. They show up as expected -- column store index scan, parallel/partitioned = true, both estimated and actual number of rows is less than during the second
run (when all of the data resided on the same partition).
So the question is: why is it slower?
Thank you for any help!
Here is the code to re-produce this:
--==> Test Query - begin --<===
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT COUNT(1)
FROM Txns AS z WITH(NOLOCK)
LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
WHERE z.RecordStatus = 1
--==> Test Query - end --<===
--===========================================================
--1. Clean-up
IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
--2. Create partition function
CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
--3. Partition scheme
CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
--4. Create Main table
CREATE TABLE dbo.Main(
SetID int NOT NULL,
SubSetID int NOT NULL,
TxnID int NOT NULL,
ColBatchID int NOT NULL,
ColMadeId int NOT NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--5. Create Txns table
CREATE TABLE dbo.Txns(
TxnID int IDENTITY(1,1) NOT NULL,
GroupID int NULL,
SiteID int NULL,
Period datetime NULL,
Amount money NULL,
CreateDate datetime NULL,
Descr varchar(50) NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
-- 40 mln. rows - approx. 4 min
--6.1 Populate Main table
DECLARE @NumberOfRows INT = 40000000
INSERT INTO Main (
SetID,
SubSetID,
TxnID,
ColBatchID,
ColMadeID,
RecordStatus)
SELECT TOP (@NumberOfRows)
SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--6.2 Populate Txns table
-- 10 mln. rows - approx. 1 min
SET @NumberOfRows = 10000000
INSERT INTO Txns (
GroupID,
SiteID,
Period,
Amount,
CreateDate,
Descr,
RecordStatus)
SELECT TOP (@NumberOfRows)
GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--7. Add PK's
-- 1 min
ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Replace regular indexes with clustered columnstore indexes
--===========================================================
--8. Drop existing indexes
ALTER TABLE Txns DROP CONSTRAINT PK_Txns
DROP INDEX Main.CDX_Main
--9. Create clustered columnstore indexes (on partition scheme!)
-- 1 min
CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Move about 80% the data into a different partition
--===========================================================
--10. Update "RecordStatus", so that data is moved to a different partition
-- 14 min (32002557 row(s) affected)
UPDATE Main
SET RecordStatus = 2
WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
-- 4.5 min (7999999 row(s) affected)
UPDATE Txns
SET RecordStatus = 2
WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
--11. Check data distribution
SELECT
OBJECT_NAME(SI.object_id) AS PartitionedTable
, DS.name AS PartitionScheme
, SI.name AS IdxName
, SI.index_id
, SP.partition_number
, SP.rows
FROM sys.indexes AS SI WITH (NOLOCK)
JOIN sys.data_spaces AS DS WITH (NOLOCK)
ON DS.data_space_id = SI.data_space_id
JOIN sys.partitions AS SP WITH (NOLOCK)
ON SP.object_id = SI.object_id
AND SP.index_id = SI.index_id
WHERE DS.type = 'PS'
AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
ORDER BY 1, 2, 3, 4, 5;
PartitionedTable PartitionScheme IdxName index_id partition_number rows
Main PS_Scheme CDX_Main 1 1 7997443
Main PS_Scheme CDX_Main 1 2 32002557
Main PS_Scheme CDX_Main 1 3 0
Main PS_Scheme CDX_Main 1 4 0
Txns PS_Scheme PK_Txns 1 1 2000001
Txns PS_Scheme PK_Txns 1 2 7999999
Txns PS_Scheme PK_Txns 1 3 0
Txns PS_Scheme PK_Txns 1 4 0
--12. Update statistics
EXEC sys.sp_updatestats
--==> Run test Query --<===
Hello Michael,
I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 251 ms, elapsed time = 128 ms.
As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT, you were left with the original row groups of the index having almost all of their rows deleted, plus almost the same amount of data in new row groups (coming from the update). I suppose scanning the deleted bitmap, or something related to that "fragmentation", caused the additional slowness at your end.
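The effect can be illustrated with a toy model (pure Python; the row counts mirror the test, but the cost model is deliberately naive): after the delete-plus-insert, a scan still has to examine every row in the old row groups against the deleted bitmap even though most are filtered out, so rows touched rise sharply while rows returned stay the same.

```python
def scan_cost(row_groups):
    """row_groups: list of (total_rows, deleted_rows) pairs.
    Returns (rows_examined, rows_returned) for a full scan."""
    examined = sum(total for total, _ in row_groups)
    returned = sum(total - deleted for total, deleted in row_groups)
    return examined, returned

# Before the UPDATE: one partition, no deleted rows.
before = [(40_000_000, 0)]
# After the UPDATE: old row groups are ~80% deleted, and the moved rows
# live in freshly inserted row groups in the other partition.
after = [(40_000_000, 32_000_000), (32_000_000, 0)]

e0, r0 = scan_cost(before)
e1, r1 = scan_cost(after)
print(e0, r0)  # 40000000 40000000
print(e1, r1)  # 72000000 40000000 - same rows returned, far more rows examined
```

Rebuilding the index compacts away the deleted rows, which is consistent with the fast post-rebuild timings quoted earlier in this reply.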
Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer -
[Ann] FirstACT 2.2 released for SOAP performance testing
Empirix Releases FirstACT 2.2 for Performance Testing of SOAP-based Web Services
FirstACT 2.2 is available for free evaluation immediately at http://www.empirix.com/TryFirstACT
Waltham, MA -- June 5, 2002 -- Empirix Inc., the leading provider of test and monitoring
solutions for Web, voice and network applications, today announced FirstACT™ 2.2,
the fifth release of the industry's first and most comprehensive automated performance
testing tool for Web Services.
As enterprise organizations begin to adopt Web Services, the types of Web Services being developed, and their testing needs, are in a state of change. As a major software testing solution vendor, Empirix is committed to ensuring that organizations developing enterprise software using Web Services can continue to verify the performance of their enterprise as quickly and cost-effectively as possible, regardless of the architecture it is built upon.
Working with organizations developing Web Services, we have observed several emerging
trends. First, organizations are tending to develop Web Services that transfer a
sizable amount of data within each transaction by passing in user-defined XML data
types as part of the SOAP request. As a result, they require a solution that automatically
generates SOAP requests using XML data types and allows them to be quickly customized.
Second, organizations require highly scalable test solutions. Many organizations
are using Web Services to exchange information between business partners and have
Service Level Agreements (SLAs) in place specifying guaranteed performance metrics.
Organizations need to performance test to these SLAs to avoid financial and business
penalties. Finally, many organizations just beginning to use automated testing tools
for Web Services have already made significant investments in making SOAP scripts
by hand. They would like to import SOAP requests into an automated testing tool
for regression testing.
Empirix FirstACT 2.2 meets or exceeds the testing needs of these emerging trends
in Web Services testing by offering the following new functionality:
1. Automatic and customizable test script generation for XML data types – FirstACT
2.2 will generate complete test scripts and allow the user to graphically customize
test data without requiring programming. FirstACT now includes a simple-to-use XML
editor for data entry or more advanced SOAP request customization.
2. Scalability Guarantee – FirstACT 2.2 has been designed to be highly scalable to
performance test Web Services. Customers using FirstACT today regularly simulate
between several hundred to several thousand users. Empirix will guarantee to
performance test the numbers of users an organization needs to test to meet its business
needs.
3. Importing Existing Test Scripts - FirstACT 2.2 can now import existing SOAP requests directly into the tool on a user-by-user basis. As a result, some simulated users can use imported SOAP requests while others are generated automatically by FirstACT.
Web Services facilitates the easy exchange of business-critical data and information
across heterogeneous network systems. Gartner estimates that 75% of all businesses
with more than $100 million in sales will have begun to develop Web Services applications
or will have deployed a production system using Web Services technology by the end
of 2002. As part of this move to Web Services, "vendors are moving forward with
the technology and architecture elements underlying a Web Services application model,"
Gartner reports. While this model holds exciting potential, the added protocol layers
necessary to implement it can have a serious impact on application performance, causing
delays in development and in the retrieval of information for end users.
"Today Web Services play an increasingly prominent but changing role in the success
of enterprise software projects, but they can only deliver on their promise if they
perform reliably," said Steven Kolak, FirstACT product manager at Empirix. "With
its graphical user interface and extensive test-case generation capability, FirstACT
is the first Web Services testing tool that can be used by software developers or
QA test engineers. FirstACT tests the performance and functionality of Web Services
whether they are built upon J2EE, .NET, or other technologies. FirstACT 2.2 provides
the most comprehensive Web Services testing solution that meets or exceeds the changing
demands of organizations testing Web Services for performance, functionality, and
functionality under load.”
Learn more?
Read about Empirix FirstACT at http://www.empirix.com/FirstACT. FirstACT 2.2 is
available for free evaluation immediately at http://www.empirix.com/TryFirstACT.
Pricing starts at $4,995. For additional information, call (781) 993-8500.
Simon,
I will admit, I almost never use SQL Developer. I have been a long time Toad user, but for this tool, I fumbled around a bit and got everything up and running quickly.
That said, I tried the new GeoRaptor tool using this tutorial (which I think is close enough to get the gist): http://sourceforge.net/apps/mediawiki/georaptor/index.php?title=A_Gentle_Introduction:_Create_Table,_Metadata_Registration,_Indexing_and_Mapping
As I stumble around it, I'll try to leave some feedback, and probably ask some rather stupid questions.
Thanks for the effort,
Bryan -
How simulate massive archivelog generation
Hi,
We're in the process of testing our physical standby database, if the network link would be able to cope with peak load and archivelog generation during this time.
I'm looking for a sample script to run to simulate massive archivelog generation, maybe 1 GB every minute. We're on version 11.2.0.1.
The database is currently empty, no load , no data.
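One common way to drive redo is simply a tight loop of small DML statements with frequent commits. Below is a runnable sketch of that loop pattern, using Python with sqlite3 purely as a stand-in so the shape is concrete; against Oracle you would run the equivalent PL/SQL loop (or use a driver such as cx_Oracle), where each commit forces the log writer to flush redo.

```python
import sqlite3

def generate_dml(conn, iterations):
    """Insert one small row per iteration, committing every time.
    On Oracle, each commit would trigger a redo flush to the online logs."""
    conn.execute("CREATE TABLE IF NOT EXISTS testtable (msg TEXT)")
    for _ in range(iterations):
        conn.execute("INSERT INTO testtable VALUES ('hello world')")
        conn.commit()

conn = sqlite3.connect(":memory:")
generate_dml(conn, 1000)  # scale iterations up (e.g. 10_000_000) for real archivelog volume
```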
Regards,
dulauser13005731 wrote:
Hi,
We're in the process of testing our physical standby database, if the network link would be able to cope with peak load and archivelog generation during this time.
I'm looking for a sample script to run to simulate massive archivelog generation, maybe 1 GB every minute. We're on version 11.2.0.1.
The database is currently empty, no load , no data.
Regards,
dula
Just write a little PL/SQL procedure with a loop that iterates 10 million times, and inside the loop do a little DML - an insert or an update.
Pseudo code:
for i = 1 to 10000000
    insert into testtable ('hello world');
next i
The above is not valid PL/SQL, but it demonstrates the process. I leave it to the student to pull out the PL/SQL manual found at tahiti.oracle.com and work out the exact syntax. If you don't have space to grow testtable by 10,000,000 rows, use an update instead. It will still generate the necessary redo. -
UI performance testing of pivot table
Hi,
I was wondering if anyone could direct me to a tool that I can use to do performance testing on a pivot table. I am populating a pivot table (declaratively) with a data source of over 100,000 cells, and I need to record the browser rendering time of the pivot table using 50 or so parallel threads (requests). I tried running performance tests using JMeter, but that didn't help.
This is what I tried so far with JMeter:
I deployed the application in the integrated WebLogic server, specified the URL to hit in JMeter (http://127.0.0.1:7101/PivotTableSample-ViewController-context-root/faces/Sample), and added a response assertion for the response code 200. Although I am able to hit the URL successfully, the response I get is a JavaScript with a message that says "This is the loopback script to process the url before the real page loads. It introduces a separate round trip". When I checked in Firebug, it looks like a redirect of some sort happens from this JavaScript to another URL (with some randomly generated parameters), which then returns the HTML response of the pivot table. I am unable to hit that URL directly, as I get a message saying "session expired". It looks like a redirect happens from the first request, a session is created for that request, and then the redirect occurs.
I am able to check the browser rendering time of the pivot table in firebug (.net tab), but that is only for a single request. I'd appreciate it if anyone could guide me on this.
Thanks
Naveen
I found the link below, which explains the configuration of JMeter for performance testing of ADF applications (although I couldn't find a solution for figuring out the browser rendering time for parallel threads):
http://one-size-doesnt-fit-all.blogspot.com/2010/04/configuring-apache-jmeter-specifically.html
Edited by: Naveen Ramanathan on Oct 3, 2010 10:24 AM -
Error while testing Generic Data Source extraction
I've created a generic DataSource for texts and attributes in R/3, based on a VIEW (Z table).
I get the error "Error 6 in function module RSS_PROGRAM_GENERATE" while trying to test the DataSources through RSO2.
I guess there can be a lot of reasons... Could anyone who knows them please name a few?
Thanks in advance.
Hi,
Please check OSS note 328948; it provides a solution to the same error you have. Additionally, you can have a look at OSS note 705212 too.
Syntax error in the generated extraction program
Symptom
You notice the error by one of the following symptoms:
The loading of transaction data from an R/3 system terminates with a syntax error in the generated extraction program. The monitor displays the error messages:
R3027 "Error & during the generation of the data transfer program"
RSM340 "Error in the source system"
The extraction within the extractor checker terminates with error message RJ028 "Error 6 in function module RSS_PROGRAM_GENERATE".
The activation of transfer rules ends in BW with error message RG102 "Syntax error in GP$ERR$, row... (-> long text)" from the source system. Usually, the diagnosis in the long text of the error message is: "...could not be interpreted. Possible error causes: Incorrect notation or... "
Other terms
OLTP, extractor, data extraction, DataSource, Service API, SAPI,
R3 027, R3 27, RSM 340, RJ 028, RJ 28
Reason and Prerequisites
The error only occurs in the source system, if this contains Basis Release 3.1I and Service API (SAPI) 3.0C Support Package 6. SAPI 3.0C Support Package 6 is contained, for example, in PI 2003.1 Support Package 7 (see attached composite SAP note 673002).
Solution
To correct the problem you need service API 3.0C Support Package 7 in the affected source system. The attached composite SAP note 704971 explains in which software components service API 3.0C is contained, and what the corresponding Support Packages of these components are.
Alternatively, you can also copy the advance correction from the appendix.
Hope it helps.
Regards -
Log file sync top event during performance test -av 36ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top, with an average wait of 36 ms, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configurat
Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" has such a slow response.
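As a quick sanity check, the headline figures in the report are internally consistent with each other:

```python
# Figures taken from the AWR output quoted in this post.
waits = 208_327          # "log file sync" waits
total_wait_s = 7_406     # total "log file sync" wait time in seconds
db_time_s = 265.09 * 60  # DB Time: 265.09 minutes

avg_wait_ms = total_wait_s / waits * 1000
pct_db_time = total_wait_s / db_time_s * 100

print(round(avg_wait_ms, 1))  # 35.5 - reported as 36 ms
print(round(pct_db_time, 1))  # 46.6 - matches the "% DB time" column
```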
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
^LBackground Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time (ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6 35.6 46.57
CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7 5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4 12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3 N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 184.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7 0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6 0.0 0.41
{code}
To sum it up:
1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer because the host has only 4 CPUs.
{code}
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
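As a rough cross-check (my own arithmetic, not from the report): combining the redo rate in this Load Profile with the 500 MB online log size shown in the v$log listing later in the thread gives the expected log switch interval, which matches the roughly two-to-three switches per hour visible in the v$log sequence numbers. A minimal sketch:

```python
# Rough estimate of the online redo log switch interval, using the
# 524,288,000-byte (500 MB) BYTES value from the v$log listing later
# in the thread and the redo rate from this Load Profile.
redo_bytes_per_sec = 719_553.5      # "Redo size" per second (AWR Load Profile)
log_size_bytes = 524_288_000        # BYTES column in v$log

switch_interval_sec = log_size_bytes / redo_bytes_per_sec
print(f"Expected log switch roughly every {switch_interval_sec / 60:.1f} minutes")
```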
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
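A quick sanity check on these Top 5 numbers (my own arithmetic, not from the report): the log file sync wait count divided by the elapsed time is almost exactly the transaction rate, which suggests one commit per transaction, and the total wait time confirms the ~91 ms average:

```python
# Cross-check the Top 5 figures against the snapshot header and Load Profile.
elapsed_sec = 60.35 * 60            # "Elapsed: 60.35 (mins)"
log_file_sync_waits = 73_761        # Waits column for "log file sync"
log_file_sync_time_sec = 6_740      # Time(s) column for "log file sync"
txn_per_sec = 20.9                  # "Transactions" from the Load Profile

waits_per_sec = log_file_sync_waits / elapsed_sec
avg_wait_ms = log_file_sync_time_sec / log_file_sync_waits * 1000
print(f"{waits_per_sec:.1f} commits/sec vs {txn_per_sec} txn/sec")
print(f"average log file sync wait: {avg_wait_ms:.0f} ms")
```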
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The log file sync is a result of commit --> you are committing too often, maybe even after every individual record.
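To illustrate the commit-frequency point, here is a generic sketch of the batching pattern (using SQLite purely for demonstration; the same idea applies to an Oracle loader): committing once per batch instead of once per row removes most of the synchronous log-flush waits that show up as "log file sync".

```python
import sqlite3

def load_rows(conn, rows, batch_size=1000):
    """Insert rows, committing once per batch instead of once per row.
    Every commit forces a synchronous redo/log flush, so fewer commits
    means fewer 'log file sync'-style waits on the loading session."""
    cur = conn.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO t(val) VALUES (?)", (row,))
        if i % batch_size == 0:
            conn.commit()
    conn.commit()  # flush the final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(val INTEGER)")
load_rows(conn, range(10_000))
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])
```

With `batch_size=1000`, loading 10,000 rows issues 10 commits instead of 10,000.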
Thanks for the explanation. +Actually my question is WHY is it so slow (avg wait of 91 ms)+
3. Your I/O subsystem hosting the online redo log files can be a limiting factor.
We don't know anything about your online redo log configuration
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM
-
Using Test Setting file to run web performance tests in different environments
Hello,
I have a set of web performance tests that I want to be able to run in different environments.
Currently I have a csv file containing the URL of the load balancer for the environment in which I want to run the load test (which contains the web performance tests); to run it in a different environment I just edit this csv file.
Is it possible to use the test settings file to point the web performance tests at a particular environment?
I am using VSTS 2012 Ultimate.
Thanks
Instead of using the testsettings I suggest using the "Parameterize web servers" command (found via the context menu on the web test, or via one of the icons). The left-hand column then suggests context parameter names for the parameterised web server URLs. It should be possible to use data source entries instead. You may need to wrap the data source accesses in doubled curly braces if editing via the "Parameterize web servers" window.
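If the data-source route works, the underlying idea is essentially this (a language-neutral sketch in Python of the pattern, not the actual Visual Studio API; the file name and columns are made up): keep only relative paths in the tests and substitute the server from one external entry per environment.

```python
import csv, io

# Hypothetical environments.csv contents: one row per environment,
# mirroring the original csv-driven approach from the question.
ENV_CSV = (
    "environment,base_url\n"
    "qa,https://qa-lb.example.com\n"
    "prod,https://prod-lb.example.com\n"
)

def resolve(env, path):
    """Build a request URL from the chosen environment's load-balancer entry."""
    servers = {r["environment"]: r["base_url"]
               for r in csv.DictReader(io.StringIO(ENV_CSV))}
    return servers[env] + path

print(resolve("qa", "/login"))
```

Switching environments then means changing one data-source row, not editing every test.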
Regards
Adrian -
Performance test on MVC application
Hi All,
By using the Visual Studio performance tool, how can we test an ASP.NET MVC application?
When we open our MVC application, it basically loads very slowly.
Can we get the below information by using the Visual Studio performance tool?
1. How the controller is getting the input and passing it to the model for retrieving the data.
2. How can we see the performance of the controller passing the input to the service, getting the information, and passing it to the view?
We want to observe where exactly the delay is happening.
We also have many java script functions written in the code.
Please help me in providing the following information.
Thanks
Santosh
Hi Santosh,
Thanks for your post.
Generally, I know that the VS performance tool's load test is used to simulate expected usage of a software program by multiple users who access the program at the same time.
A load test consists of a series of web performance tests which operate under multiple simulated users over a period of time.
For more information:
https://msdn.microsoft.com/en-us/library/vstudio/dd293540(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/dn250793.aspx
Load tests provide named counter sets, which are useful when you analyze performance counter data. The counter sets include ASP.NET, so when you create a load test with the Load Test Wizard, you add an initial set of counters.
https://msdn.microsoft.com/en-us/library/ms404676.aspx?f=255&MSPPError=-2147217396
https://msdn.microsoft.com/en-us/library/ms404704.aspx
After you run this load test, you can use the load test analyzer to view your load test data and analyze your load test to locate bottlenecks, identify errors, and measure improvements in your application.
https://msdn.microsoft.com/en-us/library/ee923686.aspx
Best Regards,
Performance test planning: timing
Hi folks,
I'm wondering if anyone has suggestions or input on the timing of test planning specific to performance testing. In particular, how far in advance is it recommended that performance test planning begin relative to the intended test window(s)?
Hi Jack,
My preference is to start planning for a performance test as soon as possible, even before there is an application to test.
The first stage in planning should be an analysis of how the application will be used. This information can be gathered from a business plan if the application is new, or from site usage metrics from tools like Omniture or Web Trends if the application is presently deployed.
From the business plan or metrics you can begin to work out the key transactions (most heavily used and most business critical), as well as planning how users will execute those transactions (what percentages, what think times, etc.).
Then as the test date gets closer and the application becomes available/stable/etc you can begin to flesh out the details of the plan. But the overall goals, analysis and high level planning can begin very early in the development cycle.
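One concrete piece of that early planning (my own sketch, using Little's Law rather than anything tool-specific): once the usage analysis gives you a target transaction rate, an expected response time, and think times, you can size the virtual-user population before a single script exists.

```python
def required_vusers(txn_per_sec, response_time_sec, think_time_sec):
    """Little's Law: N = X * (R + Z).
    Concurrent users needed to sustain txn_per_sec when each iteration
    takes response_time_sec (R) plus think_time_sec (Z)."""
    return txn_per_sec * (response_time_sec + think_time_sec)

# Example planning numbers (illustrative only): 50 txn/sec target,
# 2 s expected response time, 8 s think time per iteration.
print(required_vusers(50, 2, 8))
```

Re-running the calculation as response-time estimates firm up keeps the license and hardware planning honest.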
CMason
Senior Consultant - eLoadExpert
Empirix