Performance problem in custom transaction
Hi,
We have a custom transaction for checking the roles related to a transaction.
We have the same copy of the program in DEE, QAS and PRD (obviously).
The catch is that in DEE the transaction runs very slowly, whereas in QAS and PRD it runs normally.
In debug mode I found that in DEE the access to table AGR_1016 takes a long time.
I checked the indexes, tables and extents and noticed that in DEE the extents and the physical table size are larger than in PRD, even though PRD has more records than DEE.
Please guide me on which checks I can perform to find out where exactly the problem is.
Regards,
Siddhartha.
Hi,
What database are you running? It sounds like that's where the issue is. If it's a table with a lot of changes, a table reorg might be in order (especially with older databases). You should probably close this message and open another one in the relevant DB forum.
Michael
Similar Messages
-
Best Performance in my custom transaction
Hi,
I want to ask something:
I'm trying to enter some materials via purchase order, and I have simulated an inbound delivery process in a custom transaction.
My operations are:
Creation of a batch
Split the item with the batch
Pack the supply
Create a transfer order (OT) for the supply
Goods receipt (EM) of the supply
The first 3 operations are batch input of VL32N.
Between each operation I have inserted a "WAIT UP TO 4 SECONDS" so as not to raise an exception, but my transaction runs very slowly.
If I reduce the wait time, the process doesn't work.
How can I improve the performance?
Thanks to all.
"WAIT UP TO 4 SECONDS" so as not to raise an exception
Why is that? You shouldn't use WAIT or an ENQUEUE_SLEEP call unconditionally: you don't know whether it's necessary, or even whether the lag time is long enough. Check for the lock release using the enqueue function in a short DO loop. Calling WAIT that many times is not very good either - each time you force a roll-in/roll-out.
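The short DO loop the reply recommends can be sketched generically. The sketch below is in Python purely for illustration; in ABAP you would call the relevant ENQUEUE_* function module inside a DO loop and test for the FOREIGN_LOCK exception instead of the hypothetical is_locked callable used here.

```python
import time

def wait_for_lock_release(is_locked, poll_interval=0.05, max_tries=80):
    """Poll until a lock is free instead of sleeping a fixed 4 seconds.

    is_locked: callable returning True while the object is still locked
    (a stand-in for the ENQUEUE_* check raising FOREIGN_LOCK in ABAP).
    Returns True as soon as the lock is free, False on timeout.
    """
    for _ in range(max_tries):
        if not is_locked():
            return True            # lock is free -- proceed immediately
        time.sleep(poll_interval)  # short nap, then re-check
    return False                   # still locked after max_tries polls

# Usage: a fake lock that frees itself after the third check.
state = {"checks": 0}
def fake_lock():
    state["checks"] += 1
    return state["checks"] <= 3

print(wait_for_lock_release(fake_lock))  # True, after ~0.15 s rather than 4 s
```

The win over an unconditional WAIT is that the common case (lock already free) costs almost nothing, while max_tries still bounds the worst case.
-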
Performance problem for mass transactions after upgrade from 4.7 to ECC6.0
Hi All,
After the upgrade from 4.7 to ECC 6.0 (IS-U), mass transactions such as FPY1, FPVA and FP04M are taking a very long time to complete. For example, before the upgrade the jobs scheduled for transaction FPVA took around 5,000-6,000 seconds, whereas after the upgrade the jobs for FPVA with the same variant take around 9,000-10,000 seconds. I am unable to figure out the cause of this sharp increase in the duration of several mass jobs after the upgrade. Are there any SAP notes, or do we need to make any customizing settings to solve this problem? Has anyone faced this kind of problem?
Thanks in advance
Taj
Hi,
This is normal after an upgrade to 6.0; I have faced the same in all upgrades I've done and in some others I have been involved in. If you did not request a GoingLive upgrade check, I strongly recommend scheduling an EarlyWatch check to minimize the impact. Times won't be the same, but they can be very close if the system is tuned well. We have tuned systems that now run with good performance after these services. -
Performance problem with transaction log
We are having a performance problem in a SAP BW 3.5 system running on MS SQL Server 2000. The database is sized 63,574 MB. The transaction log gets filled up after loading data into a transactional cube or after doing a selective deletion. The size of the transaction log is currently 7,587 MB.
The Basis team believes that when performing either a load or a selective deletion, SQL Server views it as a single transaction and doesn't commit until every record is written; as a result the transaction log fills up, ultimately bringing the system down.
The system log shows a DBIF error during the transaction log fill up as follows:
Database error 9002 at COM
> [9002] the log file for database 'BWP' is full. Back up the
> Transaction log for the database to free up some log space.
Function COMMIT on connection R/3 failed
Perform rollback
Can we make the database commit more frequently? Is there a parameter we could change to reduce the packet size? Is there some setting to be changed in SQL Server?
Any help will be appreciated.
If you have disk space available, you can allocate more space to the transaction log.
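Whether the load itself can be split is application-dependent, but the commit-per-batch pattern the question asks about looks like the sketch below, which uses SQLite purely for illustration (on SQL Server 2000 the analogue would be batched DML, e.g. via SET ROWCOUNT, with a commit per batch and, under the full recovery model, regular log backups so committed log space can be truncated). Table and function names here are made up.

```python
import sqlite3

def delete_in_batches(conn, table, batch_size=1000):
    """Delete all rows of `table` in small committed transactions, so the
    log never has to hold one giant open delete (illustrative sketch)."""
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} LIMIT ?)", (batch_size,))
        conn.commit()                  # commit each batch -> log space reusable
        total += cur.rowcount
        if cur.rowcount < batch_size:  # last (partial) batch -> done
            return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage (v INTEGER)")
conn.executemany("INSERT INTO stage VALUES (?)", [(i,) for i in range(2500)])
conn.commit()
print(delete_in_batches(conn, "stage"))  # 2500, deleted in 3 committed batches
```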
-
Launching Custom transaction thro' ITS : Problem in Saving entries
Hi,
I launched a custom transaction which in turn launches transaction VA01 through an integrated ITS service. I could fill in the entries for creating the sales order but am unable to save it; the reason is that the GUI status is not displayed in the launched VA01 transaction.
How can I achieve this?
Thanks & Regards,
Kanthimathi
The URL is specified below:
http://idphl603.phl.sap.corp:55080/sap/bc/gui/sap/its/zlaunch_va01/?sap-client=800&sap-language=EN
Here zlaunch_va01 is the service which launches the custom transaction, which has a button named 'Create Sales Order'. On selecting this button the VA01 transaction gets launched, but without the GUI status, and because of this I am unable to save the filled entries. -
URGENT------MB5B : PERFORMANCE PROBLEM
Hi,
We are getting a time-out error while running transaction MB5B. We have posted the issue to SAP global support for further analysis, and SAP replied with note 1005901 to review.
The note consists of creating a Z table and some Z programs to execute MB5B without the time-out error, but SAP has not specified what logic has to be written or how this should be implemented.
Could anyone suggest how we can proceed further?
The note has been attached for reference.
Note 1005901 - MB5B: Performance problems
Note Language: English Version: 3 Validity: Valid from 05.12.2006
Summary
Symptom
o The user starts transaction MB5B, or the respective report
RM07MLBD, for a very large number of materials or for all materials
in a plant.
o The transaction terminates with the ABAP runtime error
DBIF_RSQL_INVALID_RSQL.
o The transaction runtime is very long and it terminates with the
ABAP runtime error TIME_OUT.
o During the runtime of transaction MB5B, goods movements are posted
in parallel:
- The results of transaction MB5B are incorrect.
- Each run of transaction MB5B returns different results for the
same combination of "material + plant".
More Terms
MB5B, RM07MLBD, runtime, performance, short dump
Cause and Prerequisites
The DBIF_RSQL_INVALID_RSQL runtime error may occur if you enter too many
individual material numbers in the selection screen for the database
selection.
The runtime is long because of the way report RM07MLBD works. It reads the
stocks and values from the material masters first, then the MM documents
and, in "Valuated Stock" mode, it then reads the respective FI documents.
If there are many MM and FI documents in the system, the runtimes can be
very long.
If goods movements are posted during the runtime of transaction MB5B for
materials that should also be processed by transaction MB5B, transaction
MB5B may return incorrect results.
Example: Transaction MB5B should process 100 materials with 10,000 MM
documents each. The system takes approximately 1 second to read the
material master data and it takes approximately 1 hour to read the MM and
FI documents. A goods movement for a material to be processed is posted
approximately 10 minutes after you start transaction MB5B. The stock for
this material before this posting has already been determined. The new MM
document is also read, however. The stock read before the posting is used
as the basis for calculating the stocks for the start and end date.
If you execute transaction MB5B during a time when no goods movements are
posted, these incorrect results do not occur.
Solution
The SAP standard release does not include a solution that allows you to
process mass data using transaction MB5B. The requirements for transaction
MB5B are very customer-specific. To allow for these customer-specific
requirements, we provide the following proposed implementation:
Implementation proposal:
o You should call transaction MB5B for only one "material + plant"
combination at a time.
o The list outputs for each of these runs are collected and at the
end of the processing they are prepared for a large list output.
You need three reports and one database table for this function. You can
store the lists in the INDX cluster table.
o Define work database table ZZ_MB5B with the following fields:
- Material number
- Plant
- Valuation area
- Key field for INDX cluster table
o The size category of the table should be based on the number of
entries in material valuation table MBEW.
Report ZZ_MB5B_PREPARE
In the first step, this report deletes all existing entries from the
ZZ_MB5B work table and the INDX cluster table from the last mass data
processing run of transaction MB5B.
o The ZZ_MB5B work table is filled in accordance with the selected
mode of transaction MB5B:
- Stock type mode = Valuated stock
- Include one entry in work table ZZ_MB5B for every "material +
valuation area" combination from table MBEW.
o Other modes:
- Include one entry in work table ZZ_MB5B for every "material +
plant" combination from table MARC
Furthermore, the new entries in work table ZZ_MB5B are assigned a unique
22-character string that later serves as a key term for cluster table INDX.
Report ZZ_MB5B_MONITOR
This report reads the entries sequentially in work table ZZ_MB5B. Depending
on the mode of transaction MB5B, a lock is executed as follows:
o Stock type mode = Valuated stock
For every "material + valuation area" combination, the system
determines all "material + plant" combinations. All determined
"material + plant" combinations are locked.
o Other modes:
- Every "material + plant" combination is locked.
- The entries from the ZZ_MB5B work table can be processed as
follows only if they have been locked successfully.
- Start report RM07MLBD for the current "Material + plant"
combination, or "material + valuation area" combination,
depending on the required mode.
- The list created is stored with the generated key term in the
INDX cluster table.
- The current entry is deleted from the ZZ_MB5B work table.
- Database updates are executed with COMMIT WORK AND WAIT.
- The lock is released.
- The system reads the next entry in the ZZ_MB5B work table.
Application
- The lock ensures that no goods movements can be posted during
the runtime of the RM07MLBD report for the "material + Plant"
combination to be processed.
- You can start several instances of this report at the same
time. This method ensures that all "material + plant"
combinations can be processed at the same time.
- The system takes just a few seconds to process a "material +
Plant" combination so there is just minimum disruption to
production operation.
- This report is started until there are no more entries in the
ZZ_MB5B work table.
- If the report terminates or is interrupted, it can be started
again at any time.
Report ZZ_MB5B_PRINT
You can use this report when all combinations of "material + plant", or
"material + valuation area" from the ZZ_MB5B work table have been
processed. The report reads the saved lists from the INDX cluster table and
adds these individual lists to a complete list output.
Estimated implementation effort
An experienced ABAP programmer requires an estimated three to five days to
create the ZZ_MB5B work table and these three reports. You can find a
similar program as an example in Note 32236: MBMSSQUA.
If you need support during the implementation, contact your SAP consultant.
Header Data
Release Status: Released for Customer
Released on: 05.12.2006 16:14:11
Priority: Recommendations/additional info
Category: Consulting
Main Component MM-IM-GF-REP IM Reporting (no LIS)
The note is not release-dependent.
Thanks in advance.
Edited by: Neliea on Jan 9, 2008 10:38 AM
Before you try any of this, try working with database hints as described in notes 921165, 902157 and 918992.
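If the hints don't help, the note's design is worth sketching to see why it is restartable: a work table drives per-combination processing, each result is stored under a generated key, and the finished work row is deleted in the same committed transaction. Below is a minimal sketch in Python/SQLite purely for illustration; the table, column and function names are made up, and the per-combination enqueue lock from the note is omitted for brevity.

```python
import sqlite3

def run_monitor(conn, process):
    """Work through the queue one row at a time: each unit is processed,
    its result stored under its key, and its work row deleted, all inside
    one committed transaction -- so an interrupted run can just be rerun."""
    done = 0
    while True:
        row = conn.execute(
            "SELECT matnr, werks, indx_key FROM zz_work LIMIT 1").fetchone()
        if row is None:
            return done                    # queue empty -> finished
        matnr, werks, key = row
        result = process(matnr, werks)     # stands in for running RM07MLBD
        conn.execute("INSERT INTO results VALUES (?, ?)", (key, result))
        conn.execute("DELETE FROM zz_work WHERE indx_key = ?", (key,))
        conn.commit()                      # the COMMIT WORK AND WAIT analogue
        done += 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE zz_work (matnr TEXT, werks TEXT, indx_key TEXT)")
conn.execute("CREATE TABLE results (indx_key TEXT, payload TEXT)")
conn.executemany("INSERT INTO zz_work VALUES (?, ?, ?)",
                 [("MAT%d" % i, "1000", "KEY%04d" % i) for i in range(5)])
conn.commit()
print(run_monitor(conn, lambda m, w: "list for " + m))  # 5
```

A final print step would then read all stored results back by key and merge them into one list, mirroring ZZ_MB5B_PRINT.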
-
Hi, I'm having a performance problem I cannot solve, so I'm hoping anyone here has some advice.
Our system consists of a central database server, and many applications (on various servers) that connect to this database.
Most of these applications are older, legacy applications.
They use a special concurrency mechanism that was also designed a long time ago.
This mechanism requires the applications to lock a specific table with a TABLOCKX before making any changes to the database (in any table).
This is of course quite nasty as it essentially turns the database into single user mode, but we are planning to phase this mechanism out.
However, there is too much legacy code to simply migrate everything right away, so for the time being we're stuck with it.
Secondly, this central database may not become too big because the legacy applications cannot cope with large amounts of data.
So, a 'cleaning' mechanism was implemented to move older data from the central database to a 'history' database.
This cleaning mechanism was created in newer technology; C# .NET.
It is actually a CLR stored procedure that is called from a SQL job that runs every night on the history server.
But, even though it is new technology, the CLR proc *must* use the same lock as the legacy applications because they all depend on the correct use of this lock.
The cleaning functionality is not a primary process, so it does not have high priority.
This means that any other application should be able to interrupt the cleaning and get handled by SQL first.
Therefore, we designed the cleaning so that it cleans as little data as possible per transaction, so that any other process can get the lock right after each cleaning transaction commits.
On the other hand, this means that we do have a LOT of small transactions, and because each transaction is distributed, they are expensive and so it takes a long time for the cleaning to complete.
Now, the problem: when we start the cleaning process, we notice that the CPU shoots up to 100%, and other applications are slowed down excessively.
It even caused one of our customer's factories to shut down!
So, I'm wondering why the legacy applications cannot interrupt the cleaning.
Every cleaning transaction takes about 0.6 seconds, which may seem long for a single transaction, but it also means any other application should be able to interrupt within 0.6 seconds.
This is obviously not the case; it seems the cleaning session 'steals' all of the CPU, or for some reason other SQL sessions are not handled anymore, or at least with serious delays.
I was thinking about one of the following approaches to solve this problem:
1) Add waits within the CLR procedure, after each transaction. This should lower the CPU, but I have no idea whether this will allow the other applications to interrupt sooner. (And I really don't want the factory to shut down twice!)
2) Try to look within SQL Server (using DMV's or something) and see whether I can determine why the cleaning session gets more CPU. But I don't know whether this information is available, and I have no experience with DMV's.
So, to round up:
- The total cleaning process may take a long time, as long as it can be interrupted ASAP.
- The cleaning statements themselves are already as simple as possible so I don't think more query tuning is possible.
- I don't think it's possible to tell SQL Server that specific SQL statements have low priority.
Can anyone offer some advice on how to handle this?
Regards,
Stefan
Hi Stefan,
Have you solved this issue after following up on Bob's suggestion? I'm writing to follow up with you on this post; please let us know how things go.
Elvis Long
TechNet Community Support -
Performance problem - XML and Hashtable
Hello!
I have a strange (or not strange, we don't know) performance problem. The situation is simple: our application runs on Oracle IAS, and gets XML messages from an MQ, and we have to process them.
Sample message:
<RequestDeal RequestId="RequestDeal-20090209200010-998493885" CurrentFragment="0" EndFragment="51" Expires="2009-03-7T08:00:10">
<Deal>
<Id>385011</Id>
<ModifyDate>2009-02-09</ModifyDate>
<PastDuesOnly>false</PastDuesOnly>
</Deal>
<Deal>
<Id>385015</Id>
<ModifyDate>2009-02-09</ModifyDate>
<PastDuesOnly>false</PastDuesOnly>
</Deal>
</RequestDeal>
(there's an average of 50,000-80,000 deals in the whole message)
After the application receives the whole message, we'd like to parse it, and put the values into a hashtable for further processing:
Hashtable ht = new Hashtable();
Node eDeals = getElementNode(docReq, "/RequestDeal");
Node eDeal = eDeals.getFirstChild();
long start = System.currentTimeMillis();
while (eDeal != null) {
    String id = getElementText((Element) eDeal, "Id");
    Date modifyDate = getElementDateValue((Element) eDeal, "ModifyDate");
    boolean szukitett = getElementBool((Element) eDeal, "PastDuesOnly", false);
    ht.put(id, new UgyletStatusz(id, modifyDate, szukitett));
    eDeal = eDeal.getNextSibling();
}
logger.info("Request adatok betöltve."); // "Request data loaded."
long end = System.currentTimeMillis();
logger.info("Hosszu muvelet ideje: " + (end - start) + "ms"); // "Duration of the long operation: ...ms"
The problem is the while statement. On my PC it runs for 15-25 seconds, depending on the number of deals; but on our customer's server it runs for 2-5 hours with very high CPU load. The application uses the Oracle XML parser.
On my PC there are Websphere MQ 5.3 and Oracle 10ias with 1.5 JVM.
On our customers's server there are Websphere MQ 5.3 and Oracle 9ias with 1.4 HotSpot JVM.
Do you have any idea what could be the cause of the problem?
gyulaur wrote:
Hello!
The problem is the while statement. On my PC it runs for 15-25 seconds, depends on the number of the deals; but on our customer's server, it runs for 2-5 hours with very high CPU load. The application uses oracle xml parser.
On my PC there are Websphere MQ 5.3 and Oracle 10ias with 1.5 JVM.
On our customers's server there are Websphere MQ 5.3 and Oracle 9ias with 1.4 HotSpot JVM.
Do you have any idea, what can be the cause of the problem?
All sorts of things are possible.
- MQ is configured differently. For instance transactional versus non-transactional.
- The customer uses a network (multiple computers) and you use a single box, and something is using a lot of bandwidth on the network. It could be your process, one of the dependent processes, or something else that has nothing to do with your system.
- The processing computer is loaded with something else, or something is artificially limiting the processing time of your application (there is an app that allows one to limit another app to one CPU).
- At least one version of 1.4 had an XML bug that consumed massive amounts of memory when processing large XML. Sound familiar? If the physical memory is not up to it, then it will be thrashing the hard drive as it swaps virtual memory in and out.
- Of course virtual memory causing swaps would impact it regardless of the cause.
- The database is loaded.
You can of course at least get the same version of Java that they are using. Actually, that would seem like a rather good idea to me regardless. -
MDM ABAP API Performance Problems
Hi all,
we developed a custom transaction in our ECC 6.0 system to remotely retrieve data coming from MDM (7.1 SP05) and show it to the user. The two systems physically reside in different datacenters, so they run in the same WAN but in two different AD domains and two different IP networks.
In short the transaction is working using the standard MDM ABAP API functionalities and getting the MDM data using the following methods:
CALL METHOD lr_api->mo_accessor->connect
CALL METHOD lr_api->mo_core_service->query
CALL METHOD lr_api->mo_core_service->retrieve
CALL METHOD lr_api->mo_accessor->disconnect.
This works, but with awful performance: for example, to get a subset of materials (around 500 codes) it takes more than 1 minute, and the quantity of data transferred from MDM to ECC (around 30 KB) does not justify this time.
Please be so kind to suggest any kind of activity that I can perform to improve the situation.
Thanks in advance.
I am trying to retrieve data from MDM to ECC using the ABAP API. I am getting the below dump.
Short text
An exception occurred that was not caught.
What happened?
The exception 'CX_MDM_PROVIDER' was raised, but it was not caught anywhere along the call hierarchy.
Since exceptions represent error situations, and since this error was not adequately responded to, the running ABAP program 'CL_MDM_PROVIDER_71_SP00_PL00==CP' has to be terminated.
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact your SAP system administrator.
Using transaction ST22 for ABAP dump analysis, you can look at and manage termination messages, and you can also keep them for a long time.
Error analysis
An exception occurred that is explained in detail below.
The exception, which is assigned to class 'CX_MDM_PROVIDER', was not caught in
procedure "GET_LOC_SUPP" "(METHOD)", nor was it propagated by a RAISING clause.
Since the caller of the procedure could not have anticipated that the
exception would occur, the current program is terminated.
The reason for the exception is:
Internal error: field 'SEARCH_GROUPS' not found; contact your system
administrator
Please check: at this point I am getting the dump.
LT_SEARCH_TUPLE_PATH
Table IT_1806[0x12]
\CLASS=CL_MDM_PROVIDER_71_SP00_PL00\METHOD=IF_MDM_CORE_SERVICES~QUERY\DATA=LT_SEARCH_TUPLE_PAT
Table reference: 1612
TABH+ 0(20) = 000000000000000007000000579D710000000000
TABH+ 20(20) = 0000064C0000070E000000000000000CFFFFFFFF
TABH+ 40(16) = 04000138000F44A0000424E403000000
store = 0x0000000000000000
ext1 = 0x07000000579D7100
shmId = 0 (0x00000000)
id = 1612 (0x0000064C)
label = 1806 (0x0000070E)
fill = 0 (0x00000000)
leng = 12 (0x0000000C)
loop = -1 (0xFFFFFFFF)
xtyp = TYPE#000300
occu = 4 (0x00000004)
accKind = 1 (ItAccessStandard)
idxKind = 0 (ItIndexNone)
uniKind = 2 (ItUniNo)
keyKind = 1 (default)
cmpMode = 12 (ILLEGAL)
occu0 = 1
stMode = 0
groupCntl = 0
rfc = 0
unShareable = 0
mightBeShared = 0
sharedWithShmTab = 0
isShmLockId = 0
isUsed = 1
isCtfyAble = 1
hasScndKeys = 0
hasRowId = 0
scndKeysOutdated = 0
scndUniKeysOutdated = 0
LV_MESS
field 'SEARCH_GROUPS' not found
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
6666622544544545455522667266766222222222222222222222222222222222222222222222222222222222222222
695C407351238F72F50370EF406F5E4000000000000000000000000000000000000000000000000000000000000000
SY-REPID
I am picking the data from the main table MDM_SUPPLIER_MAP; below is the code.
CALL METHOD lv_mdm_api->mo_core_service->query
EXPORTING
*Main Table
iv_object_type_code = 'MDM_SUPPLIER_MAP' "#EC NOTEXT
* it_query = lt_query
IMPORTING
et_result_set = lt_result_set.
Please suggest solution to get the records from MDM to ECC.
When I try to replace the above with the below code, it works. But in that case I need to retrieve all the data instead of using DO and ENDDO. Could anyone please suggest a solution?
DO 10 TIMES.
APPEND sy-index TO keys.
ENDDO.
CALL METHOD lv_mdm_api->mo_core_service->retrieve_simple
EXPORTING
iv_object_type_code = 'MDM_SUPPLIER_MAP'
it_keys = keys
IMPORTING
et_ddic_structure = result_ddic -
Custom Transaction running slow on R/3 4.7 ,Oracle 10.2.0.4
Hi,
We have a custom transaction for checking the roles related to a transaction.
We have the same copy of the program in DEE, QAS and PRD (obviously).
The catch is that in DEE the transaction runs very slowly, whereas in QAS and PRD it runs normally.
In debug mode I found that in DEE the access to table AGR_1016 takes a long time.
I checked the indexes, tables and extents and noticed that in DEE the extents and the physical table size are larger than in PRD, even though PRD has more records than DEE.
Please guide me on which checks I can perform to find out where exactly the problem is.
Regards,
Siddhartha.
Hi,
Thanks for the reply.
The details are given below.
Please let me know if you need any more information.
/* THE ACCESS PATH */
SELECT STATEMENT ( Estimated Costs = 2 , Estimated #Rows = 1 )
5 7 FILTER
Filter Predicates
5 6 NESTED LOOPS
( Estim. Costs = 1 , Estim. #Rows=1 )
Estim. CPU-Costs = 14,882 Estim. IO-Costs = 1
5 4 NESTED LOOPS
( Estim. Costs = 1 , Estim. #Rows = 1 )
Estim. CPU-Costs = 14,660 Estim. IO-Costs = 1
1 INDEX RANGE SCAN UST12~0
( Estim. Costs = 1 , Estim. #Rows = 1 )
Search Columns: 4
Estim. CPU-Costs = 9,728 Estim. IO-Costs = 1
Access Predicates Filter Predicates
5 3 TABLE ACCESS BY INDEX ROWID AGR_1016
( Estim. Costs = 1 , Estim. #Rows = 4 )
Estim. CPU-Costs = 4,932 Estim. IO-Costs = 1
2 INDEX RANGE SCAN AGR_1016^0
Search Columns: 2
Estim. CPU-Costs = 3,466 Estim. IO-Costs = 0
Access Predicates Filter Predicates
5 INDEX UNIQUE SCAN UST10S~0
Search Columns: 5
Estim. CPU-Costs = 221 Estim. IO-Costs = 0
Access Predicates Filter Predicates
The details of the index UST12~0
UNIQUE Index UST12~0
Column Name #Distinct
MANDT 7
OBJCT 1,339
AUTH 6,274
AKTPS 2
FIELD 762
VON 6,112
BIS 246
Last statistics date 02.08.2008
Analyze Method Sample 80,739 Rows
Levels of B-Tree 2
Number of leaf blocks 9,588
Number of distinct keys 722,803
Average leaf blocks per key 1
Average data blocks per key 1
Clustering factor 42,765
Table AGR_1016
Last statistics date 24.09.2008
Analyze Method Sample 5,102 Rows
Number of rows 5,102
Number of blocks allocated 51
Number of empty blocks 0
Average space 2,612
Chain count 0
Average row length 48
Partitioned NO
NONUNIQUE Index AGR_1016~001
Column Name #Distinct
MANDT 4
PROFILE 4,848
COUNTER 10
Last statistics date 24.09.2008
Analyze Method Sample 5,102 Rows
Levels of B-Tree 1
Number of leaf blocks 24
Number of distinct keys 5,101
Average leaf blocks per key 1
Average data blocks per key 1
Clustering factor 1,449
UNIQUE Index AGR_1016^0
Column Name #Distinct
MANDT 4
AGR_NAME 4,725
COUNTER 10
Last statistics date 24.09.2008
Analyze Method Sample 5,102 Rows
Levels of B-Tree 1
Number of leaf blocks 30
Number of distinct keys 5,102
Average leaf blocks per key 1
Average data blocks per key 1
Clustering factor 1,410
If any more info is required, please tell me.
Siddhartha -
Performance problem in ABAP code
Hi guys,
I created a report (a tax report) using tables like BSIS, T001, etc.
I have a performance problem in this report.
Could you please tell me how to analyse the report and find out where the process is taking more memory, etc.?
I did an ABAP trace and a runtime analysis, but could not find the exact point.
How can I do this?
I want to analyse each subroutine, internal table and query process.
Could you please give me some ideas?
ambichan
There is an excellent tool available in SAP - <b>Code Inspector.</b>
Transaction is SCII
Try the following link and I am sure you will find a bunch of useful documents.
<a href="http://www.google.co.in/search?hl=en&safe=off&q=site%3Asdn.sap.comfiletype%3ApdfCode+Inspector&btnG=Search&meta=">ABAP Performance</a>
I use the Code Inspector to search for
a) All the select statements which are present within the loop
b) Nested Loops
c) Select query without providing criteria for primary keys, depending upon situation
d) Can the search be narrowed with extra conditions
e) Using READ .. BINARY SEARCH if internal table has lots of records.
The list is actually endless, but this is something to start with.
You can actually have a checklist and go through your code against it. The more you adhere to the checklist, the more you will find that performance improves dramatically.
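Point (a) - SELECT statements inside a loop - is usually the biggest win on that list. The anti-pattern and its set-based fix can be sketched generically (Python/SQLite purely for illustration, with made-up table and field names; in ABAP the fix would be one SELECT ... FOR ALL ENTRIES into an internal table, then lookups via READ TABLE ... BINARY SEARCH on a sorted table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE price (matnr TEXT PRIMARY KEY, netpr REAL)")
conn.executemany("INSERT INTO price VALUES (?, ?)",
                 [("M1", 10.0), ("M2", 20.0), ("M3", 30.0)])

materials = ["M1", "M2", "M3"]

# Anti-pattern (a): one SELECT per loop pass -- N database round trips.
slow = [conn.execute("SELECT netpr FROM price WHERE matnr = ?",
                     (m,)).fetchone()[0] for m in materials]

# Set-based fix: one SELECT for the whole list, then in-memory lookups
# (the dictionary plays the role of a sorted internal table).
placeholders = ",".join("?" * len(materials))
rows = conn.execute(
    f"SELECT matnr, netpr FROM price WHERE matnr IN ({placeholders})",
    materials).fetchall()
fast = dict(rows)
print(slow, fast["M2"])  # [10.0, 20.0, 30.0] 20.0
```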
Also use transaction <b>ST05</b> for an SQL trace, to find out which select query is taking the longest to respond.
Regards,
Subramanian V. -
Report program Performance problem
Hi All,
One program is taking 30 hours to execute. It was developed back in 1998, but now it is a big performance problem. Please can someone help with what to do; I am giving the code below.
*--DOCUMENTATION--
Programe written by : 31.03.1998 .
Purpose : this programe updates the car status into the table zsdtab1
This programe is to be schedule in the backgroud periodically .
Querries can be fired on the table zsdtab1 to get the details of the
Car .
This programe looks at the changes made in the material master from
last updated date and the new entries in material master and updates
the tables zsdtab1 .
Changes in the Sales Order are not taken into account .
To get a fresh data set the value of zupddate in table ZSTATUS as
01.01.1998 . All the data will be refreshed from that date .
Program Changed on 23/7/2001 after version upgrade 46b by jyoti
Addition of New tables for Ibase
tables used -
tables : mara , " Material master
ausp , " Characteristics table .
zstatus , " Last updated status table .
zsdtab1 , " Central database table to be maintained .
vbap , " Sales order item table .
vbak , " Sales order header table .
kna1 , " Customer master .
vbrk ,
vbrp ,
bkpf ,
bseg ,
mseg ,
mkpf ,
vbpa ,
vbfa ,
t005t . " Country details tabe .
*--NEW TABLES ADDED FOR VERSION 4.6B--
tables : ibsymbol ,ibin , ibinvalues .
data : vatinn like ibsymbol-atinn , vatwrt like ibsymbol-atwrt ,
vatflv like ibsymbol-atflv .
*--types definition--
types : begin of mara_itab_type ,
matnr like mara-matnr ,
cuobf like mara-cuobf ,
end of mara_itab_type ,
begin of ausp_itab_type ,
atinn like ausp-atinn ,
atwrt like ausp-atwrt ,
atflv like ausp-atflv ,
end of ausp_itab_type .
data : mara_itab type mara_itab_type occurs 500 with header line ,
zsdtab1_itab like zsdtab1 occurs 500 with header line ,
ausp_itab type ausp_itab_type occurs 500 with header line ,
last_date type d ,
date type d .
data: length type i.
clear mara_itab . refresh mara_itab .
clear zsdtab1_itab . refresh zsdtab1_itab .
select single zupddate into last_date from zstatus
where programm = 'ZSDDET01' .
select matnr cuobf into (mara_itab-matnr , mara_itab-cuobf) from mara
where mtart eq 'FERT' or mtart = 'ZCBU'.
* where MATNR IN MATERIA
* and ERSDA IN C_Date
* and MTART in M_TYP.
append mara_itab .
endselect .
loop at mara_itab.
clear zsdtab1_itab .
zsdtab1_itab-commno = mara_itab-matnr .
* Get the detailed data into internal table ausp_itab ----------->>>
clear ausp_itab . refresh ausp_itab .
*--change starts--
select atinn atwrt atflv into (ausp_itab-atinn , ausp_itab-atwrt ,
ausp_itab-atflv) from ausp
where objek = mara_itab-matnr .
append ausp_itab .
endselect .
clear ausp_itab .
select atinn atwrt atflv into (ausp_itab-atinn , ausp_itab-atwrt ,
ausp_itab-atflv) from ibin as a inner join ibinvalues as b
on a~in_recno = b~in_recno
inner join ibsymbol as c
on b~symbol_id = c~symbol_id
where a~instance = mara_itab-cuobf .
append ausp_itab .
endselect .
*----CHANGE ENDS HERE----
sort ausp_itab by atwrt.
loop at ausp_itab .
clear date .
case ausp_itab-atinn .
when '0000000094' .
zsdtab1_itab-model = ausp_itab-atwrt . " model .
when '0000000101' .
zsdtab1_itab-drive = ausp_itab-atwrt . " drive
when '0000000095' .
zsdtab1_itab-converter = ausp_itab-atwrt . "converter
when '0000000096' .
zsdtab1_itab-transmssn = ausp_itab-atwrt . "transmission
when '0000000097' .
zsdtab1_itab-colour = ausp_itab-atwrt . "colour
when '0000000098' .
zsdtab1_itab-ztrim = ausp_itab-atwrt . "trim
when '0000000103' .
*=========Sujit 14-Mar-2006
IF AUSP_ITAB-ATWRT(3) EQ 'WDB' OR AUSP_ITAB-ATWRT(3) EQ 'WDD'
OR AUSP_ITAB-ATWRT(3) EQ 'WDC' OR AUSP_ITAB-ATWRT(3) EQ 'KPD'.
ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT+3(14).
ELSE.
ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT . "chassis no
ENDIF.
* zsdtab1_itab-chassis_no = ausp_itab-atwrt . "chassis no (superseded by the IF/ELSE above)
*=========14-Mar-2006
when '0000000166' .
*----25.05.04
length = strlen( ausp_itab-atwrt ).
if length < 15. "***aded by patil
zsdtab1_itab-engine_no = ausp_itab-atwrt . "ENGINE NO
else.
zsdtab1_itab-engine_no = ausp_itab-atwrt+13(14)."Aded on 21.05.04 patil
endif.
*----25.05.04
when '0000000104' .
zsdtab1_itab-body_no = ausp_itab-atwrt . "BODY NO
when '0000000173' . "21.06.98
zsdtab1_itab-cockpit = ausp_itab-atwrt . "COCKPIT NO . "21.06.98
when '0000000102' .
zsdtab1_itab-dest = ausp_itab-atwrt . "destination
when '0000000105' .
zsdtab1_itab-airbag = ausp_itab-atwrt . "AIRBAG
when '0000000110' .
zsdtab1_itab-trailer_no = ausp_itab-atwrt . "TRAILER_NO
when '0000000109' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-fininspdat = date . "FIN INSP DATE
when '0000000108' .
zsdtab1_itab-entrydate = ausp_itab-atwrt . "ENTRY DATE
when '0000000163' .
zsdtab1_itab-regist_no = ausp_itab-atwrt . "REGIST_NO
when '0000000164' .
zsdtab1_itab-mech_key = ausp_itab-atwrt . "MECH_KEY
when '0000000165' .
zsdtab1_itab-side_ab_rt = ausp_itab-atwrt . "SIDE_AB_RT
when '0000000171' .
zsdtab1_itab-side_ab_lt = ausp_itab-atwrt . "SIDE_AB_LT
when '0000000167' .
zsdtab1_itab-elect_key = ausp_itab-atwrt . "ELECT_KEY
when '0000000168' .
zsdtab1_itab-head_lamp = ausp_itab-atwrt . "HEAD_LAMP
when '0000000169' .
zsdtab1_itab-tail_lamp = ausp_itab-atwrt . "TAIL_LAMP
when '0000000170' .
zsdtab1_itab-vac_pump = ausp_itab-atwrt . "VAC_PUMP
when '0000000172' .
zsdtab1_itab-sd_ab_sn_l = ausp_itab-atwrt . "SD_AB_SN_L
when '0000000174' .
zsdtab1_itab-sd_ab_sn_r = ausp_itab-atwrt . "SD_AB_SN_R
when '0000000175' .
zsdtab1_itab-asrhydunit = ausp_itab-atwrt . "ASRHYDUNIT
when '0000000176' .
zsdtab1_itab-gearboxno = ausp_itab-atwrt . "GEARBOXNO
when '0000000177' .
zsdtab1_itab-battery = ausp_itab-atwrt . "BATTERY
when '0000000178' .
zsdtab1_itab-tyretype = ausp_itab-atwrt . "TYRETYPE
when '0000000179' .
zsdtab1_itab-tyremake = ausp_itab-atwrt . "TYREMAKE
when '0000000180' .
zsdtab1_itab-tyresize = ausp_itab-atwrt . "TYRESIZE
when '0000000181' .
zsdtab1_itab-rr_axle_no = ausp_itab-atwrt . "RR_AXLE_NO
when '0000000183' .
zsdtab1_itab-ff_axl_nor = ausp_itab-atwrt . "FF_AXLE_NO_rt
when '0000000182' .
zsdtab1_itab-ff_axl_nol = ausp_itab-atwrt . "FF_AXLE_NO_lt
when '0000000184' .
zsdtab1_itab-drivairbag = ausp_itab-atwrt . "DRIVAIRBAG
when '0000000185' .
zsdtab1_itab-st_box_no = ausp_itab-atwrt . "ST_BOX_NO
when '0000000186' .
zsdtab1_itab-transport = ausp_itab-atwrt . "TRANSPORT
when '0000000106' .
zsdtab1_itab-trackstage = ausp_itab-atwrt . " tracking stage
when '0000000111' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_1 = date . " tracking date for 1.
when '0000000112' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_5 = date . " tracking date for 5.
when '0000000113' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_10 = date . "tracking date for 10
when '0000000114' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_15 = date . "tracking date for 15
when '0000000115' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_20 = date . " tracking date for 20
when '0000000116' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_25 = date . " tracking date for 25
when '0000000117' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_30 = date . "tracking date for 30
when '0000000118' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_35 = date . "tracking date for 35
when '0000000119' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_40 = date . " tracking date for 40
when '0000000120' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_45 = date . " tracking date for 45
when '0000000121' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_50 = date . "tracking date for 50
when '0000000122' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_55 = date . "tracking date for 55
when '0000000123' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_60 = date . " tracking date for 60
when '0000000124' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_65 = date . " tracking date for 65
when '0000000125' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_70 = date . "tracking date for 70
when '0000000126' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_75 = date . "tracking date for 75
when '0000000127' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_78 = date . " tracking date for 78
when '0000000203' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_79 = date . " tracking date for 79
when '0000000128' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_80 = date . " tracking date for 80
when '0000000129' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_85 = date . "tracking date for 85
when '0000000130' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_90 = date . "tracking date for 90
when '0000000131' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_95 = date . "tracking date for 95
when '0000000132' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_100 = date . " tracking date for100
when '0000000133' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_110 = date . " tracking date for110
when '0000000134' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_115 = date . "tracking date for 115
when '0000000135' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_120 = date . "tracking date for 120
when '0000000136' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_105 = date . "tracking date for 105
when '0000000137' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_1 = date . "plan trk date for 1
when '0000000138' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_5 = date . "plan trk date for 5
when '0000000139' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_10 = date . "plan trk date for 10
when '0000000140' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_15 = date . "plan trk date for 15
when '0000000141' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_20 = date . "plan trk date for 20
when '0000000142' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_25 = date . "plan trk date for 25
when '0000000143' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_30 = date . "plan trk date for 30
when '0000000144' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_35 = date . "plan trk date for 35
when '0000000145' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_40 = date . "plan trk date for 40
when '0000000146' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_45 = date . "plan trk date for 45
when '0000000147' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_50 = date . "plan trk date for 50
when '0000000148' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_55 = date . "plan trk date for 55
when '0000000149' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_60 = date . "plan trk date for 60
when '0000000150' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_65 = date . "plan trk date for 65
when '0000000151' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_70 = date . "plan trk date for 70
when '0000000152' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_75 = date . "plan trk date for 75
when '0000000153' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_78 = date . "plan trk date for 78
when '0000000202' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_79 = date . "plan trk date for 79
when '0000000154' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_80 = date . "plan trk date for 80
when '0000000155' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_85 = date . "plan trk date for 85
when '0000000156' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_90 = date . "plan trk date for 90
when '0000000157' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_95 = date . "plan trk date for 95
when '0000000158' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_100 = date . "plan trk date for 100
when '0000000159' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_105 = date . "plan trk date for 105
when '0000000160' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_110 = date . "plan trk date for 110
when '0000000161' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_115 = date . "plan trk date for 115
when '0000000162' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_120 = date . "plan trk date for 120
********Additional fields / 24.05.98**********************************
when '0000000099' .
case ausp_itab-atwrt .
when '540' .
zsdtab1_itab-roll_blind = 'X' .
when '482' .
zsdtab1_itab-ground_clr = 'X' .
when '551' .
zsdtab1_itab-anti_theft = 'X' .
when '882' .
zsdtab1_itab-anti_tow = 'X' .
when '656' .
zsdtab1_itab-alloy_whel = 'X' .
when '265' .
zsdtab1_itab-del_class = 'X' .
when '280' .
zsdtab1_itab-str_wheel = 'X' .
when 'CDC' .
zsdtab1_itab-cd_changer = 'X' .
when '205' .
zsdtab1_itab-manual_eng = 'X' .
when '273' .
zsdtab1_itab-conn_handy = 'X' .
when '343' .
zsdtab1_itab-aircleaner = 'X' .
when '481' .
zsdtab1_itab-metal_sump = 'X' .
when '533' .
zsdtab1_itab-speaker = 'X' .
when '570' .
zsdtab1_itab-arm_rest = 'X' .
when '580' .
zsdtab1_itab-aircond = 'X' .
when '611' .
zsdtab1_itab-exit_light = 'X' .
when '613' .
zsdtab1_itab-headlamp = 'X' .
when '877' .
zsdtab1_itab-readlamp = 'X' .
when '808' .
zsdtab1_itab-code_ckd = 'X' .
when '708' .
zsdtab1_itab-del_prt_lc = 'X' .
when '593' .
zsdtab1_itab-ins_glass = 'X' .
when '955' .
zsdtab1_itab-zelcl = 'Elegance' .
when '593' . "NOTE: duplicate of WHEN '593' above (ins_glass); this branch never executes
zsdtab1_itab-zelcl = 'Classic' .
endcase .
endcase .
endloop .
*--Update the sales data .--
perform get_sales_order using mara_itab-matnr .
perform get_cartype using mara_itab-matnr .
append zsdtab1_itab .
endloop.
loop at zsdtab1_itab .
if zsdtab1_itab-cartype <> 'W-203'
and zsdtab1_itab-cartype <> 'W-210'
and zsdtab1_itab-cartype <> 'W-211'. "AND, not OR: the original OR condition was always true
clear zsdtab1_itab-zelcl.
endif.
* SELECT SINGLE * FROM ZSDTAB1 WHERE COMMNO = MARA_ITAB-MATNR . "superseded by the line below
select single * from zsdtab1 where commno = zsdtab1_itab-commno.
if sy-subrc <> 0 .
insert into zsdtab1 values zsdtab1_itab .
else .
update zsdtab1 set vbeln = zsdtab1_itab-vbeln
bill_doc = zsdtab1_itab-bill_doc
dest = zsdtab1_itab-dest
lgort = zsdtab1_itab-lgort
ship_tp = zsdtab1_itab-ship_tp
country = zsdtab1_itab-country
kunnr = zsdtab1_itab-kunnr
vkbur = zsdtab1_itab-vkbur
customer = zsdtab1_itab-customer
city = zsdtab1_itab-city
region = zsdtab1_itab-region
model = zsdtab1_itab-model
drive = zsdtab1_itab-drive
converter = zsdtab1_itab-converter
transmssn = zsdtab1_itab-transmssn
colour = zsdtab1_itab-colour
ztrim = zsdtab1_itab-ztrim
commno = zsdtab1_itab-commno
trackstage = zsdtab1_itab-trackstage
chassis_no = zsdtab1_itab-chassis_no
engine_no = zsdtab1_itab-engine_no
body_no = zsdtab1_itab-body_no
cockpit = zsdtab1_itab-cockpit
airbag = zsdtab1_itab-airbag
trailer_no = zsdtab1_itab-trailer_no
fininspdat = zsdtab1_itab-fininspdat
entrydate = zsdtab1_itab-entrydate
regist_no = zsdtab1_itab-regist_no
mech_key = zsdtab1_itab-mech_key
side_ab_rt = zsdtab1_itab-side_ab_rt
side_ab_lt = zsdtab1_itab-side_ab_lt
elect_key = zsdtab1_itab-elect_key
head_lamp = zsdtab1_itab-head_lamp
tail_lamp = zsdtab1_itab-tail_lamp
vac_pump = zsdtab1_itab-vac_pump
sd_ab_sn_l = zsdtab1_itab-sd_ab_sn_l
sd_ab_sn_r = zsdtab1_itab-sd_ab_sn_r
asrhydunit = zsdtab1_itab-asrhydunit
gearboxno = zsdtab1_itab-gearboxno
battery = zsdtab1_itab-battery
tyretype = zsdtab1_itab-tyretype
tyremake = zsdtab1_itab-tyremake
tyresize = zsdtab1_itab-tyresize
rr_axle_no = zsdtab1_itab-rr_axle_no
ff_axl_nor = zsdtab1_itab-ff_axl_nor
ff_axl_nol = zsdtab1_itab-ff_axl_nol
drivairbag = zsdtab1_itab-drivairbag
st_box_no = zsdtab1_itab-st_box_no
transport = zsdtab1_itab-transport
* ------- options -------
roll_blind = zsdtab1_itab-roll_blind
ground_clr = zsdtab1_itab-ground_clr
anti_theft = zsdtab1_itab-anti_theft
anti_tow = zsdtab1_itab-anti_tow
alloy_whel = zsdtab1_itab-alloy_whel
del_class = zsdtab1_itab-del_class
str_wheel = zsdtab1_itab-str_wheel
cd_changer = zsdtab1_itab-cd_changer
manual_eng = zsdtab1_itab-manual_eng
conn_handy = zsdtab1_itab-conn_handy
aircleaner = zsdtab1_itab-aircleaner
metal_sump = zsdtab1_itab-metal_sump
speaker = zsdtab1_itab-speaker
arm_rest = zsdtab1_itab-arm_rest
aircond = zsdtab1_itab-aircond
exit_light = zsdtab1_itab-exit_light
headlamp = zsdtab1_itab-headlamp
readlamp = zsdtab1_itab-readlamp
code_ckd = zsdtab1_itab-code_ckd
del_prt_lc = zsdtab1_itab-del_prt_lc
ins_glass = zsdtab1_itab-ins_glass
dat_trk_1 = zsdtab1_itab-dat_trk_1
dat_trk_5 = zsdtab1_itab-dat_trk_5
dat_trk_10 = zsdtab1_itab-dat_trk_10
dat_trk_15 = zsdtab1_itab-dat_trk_15
dat_trk_20 = zsdtab1_itab-dat_trk_20
dat_trk_25 = zsdtab1_itab-dat_trk_25
dat_trk_30 = zsdtab1_itab-dat_trk_30
dat_trk_35 = zsdtab1_itab-dat_trk_35
dat_trk_40 = zsdtab1_itab-dat_trk_40
dat_trk_45 = zsdtab1_itab-dat_trk_45
dat_trk_50 = zsdtab1_itab-dat_trk_50
dat_trk_55 = zsdtab1_itab-dat_trk_55
dat_trk_60 = zsdtab1_itab-dat_trk_60
dat_trk_65 = zsdtab1_itab-dat_trk_65
dat_trk_70 = zsdtab1_itab-dat_trk_70
dat_trk_75 = zsdtab1_itab-dat_trk_75
dat_trk_78 = zsdtab1_itab-dat_trk_78
dat_trk_79 = zsdtab1_itab-dat_trk_79
dat_trk_80 = zsdtab1_itab-dat_trk_80
dat_trk_85 = zsdtab1_itab-dat_trk_85
dat_trk_90 = zsdtab1_itab-dat_trk_90
dat_trk_95 = zsdtab1_itab-dat_trk_95
dattrk_100 = zsdtab1_itab-dattrk_100
dattrk_105 = zsdtab1_itab-dattrk_105
dattrk_110 = zsdtab1_itab-dattrk_110
dattrk_115 = zsdtab1_itab-dattrk_115
dattrk_120 = zsdtab1_itab-dattrk_120
pdt_tk_1 = zsdtab1_itab-pdt_tk_1
pdt_tk_5 = zsdtab1_itab-pdt_tk_5
pdt_tk_10 = zsdtab1_itab-pdt_tk_10
pdt_tk_15 = zsdtab1_itab-pdt_tk_15
pdt_tk_20 = zsdtab1_itab-pdt_tk_20
pdt_tk_25 = zsdtab1_itab-pdt_tk_25
pdt_tk_30 = zsdtab1_itab-pdt_tk_30
pdt_tk_35 = zsdtab1_itab-pdt_tk_35
pdt_tk_40 = zsdtab1_itab-pdt_tk_40
pdt_tk_45 = zsdtab1_itab-pdt_tk_45
pdt_tk_50 = zsdtab1_itab-pdt_tk_50
pdt_tk_55 = zsdtab1_itab-pdt_tk_55
pdt_tk_60 = zsdtab1_itab-pdt_tk_60
pdt_tk_65 = zsdtab1_itab-pdt_tk_65
pdt_tk_70 = zsdtab1_itab-pdt_tk_70
pdt_tk_75 = zsdtab1_itab-pdt_tk_75
pdt_tk_78 = zsdtab1_itab-pdt_tk_78
pdt_tk_79 = zsdtab1_itab-pdt_tk_79
pdt_tk_80 = zsdtab1_itab-pdt_tk_80
pdt_tk_85 = zsdtab1_itab-pdt_tk_85
pdt_tk_90 = zsdtab1_itab-pdt_tk_90
pdt_tk_95 = zsdtab1_itab-pdt_tk_95
pdt_tk_100 = zsdtab1_itab-pdt_tk_100
pdt_tk_105 = zsdtab1_itab-pdt_tk_105
pdt_tk_110 = zsdtab1_itab-pdt_tk_110
pdt_tk_115 = zsdtab1_itab-pdt_tk_115
pdt_tk_120 = zsdtab1_itab-pdt_tk_120
cartype = zsdtab1_itab-cartype
zelcl = zsdtab1_itab-zelcl
excise_no = zsdtab1_itab-excise_no
where commno = zsdtab1_itab-commno .
* Update table .
endif .
endloop .
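A possible simplification of the loop above (a sketch, not a drop-in replacement): the SELECT SINGLE / INSERT / UPDATE pattern can often be collapsed into MODIFY, which inserts a new row or updates the existing one by primary key. Caution: unlike the long UPDATE ... SET list, MODIFY overwrites all columns, including POSTDATE and EXCISE_DAT, which this program fills elsewhere, so those fields would have to be read into the internal table first.

```abap
* MODIFY inserts or updates by primary key (assuming COMMNO is the key
* of ZSDTAB1); one array operation replaces the whole per-row loop.
modify zsdtab1 from table zsdtab1_itab.
```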
perform update_excise_date .
perform update_post_goods_issue_date .
perform update_time.
*///////////////////// end of program /////////////////////////////////
* --- Get sales data ---
form get_sales_order using matnr .
data : corr_vbeln like vbrk-vbeln .
* ADDED BY ADITYA / 22.06.98 **************************************
perform get_order using matnr .
select single vbeln lgort into (zsdtab1_itab-vbeln , zsdtab1_itab-lgort)
* from vbap where matnr = matnr . " C-22.06.98
from vbap where vbeln = zsdtab1_itab-vbeln .
if sy-subrc = 0 .
************Get the Excise No from Allocation Field*******************
select single * from zsdtab1 where commno = matnr .
if zsdtab1-excise_no = '' .
select * from vbrp where matnr = matnr .
select single vbeln into corr_vbeln from vbrk where
vbeln = vbrp-vbeln and vbtyp = 'M'.
if sy-subrc eq 0.
select single * from vbrk where vbtyp = 'N'
and sfakn = corr_vbeln. "cancelled doc.
if sy-subrc ne 0.
select single * from vbrk where vbeln = corr_vbeln.
if sy-subrc eq 0.
data : year(4) .
move sy-datum+0(4) to year .
select single * from bkpf where awtyp = 'VBRK' and awkey = vbrk-vbeln
and bukrs = 'MBIL' and gjahr = year .
if sy-subrc = 0 .
select single * from bseg where bukrs = 'MBIL' and belnr = bkpf-belnr
and gjahr = year and koart = 'D' and
shkzg = 'S' .
zsdtab1_itab-excise_no = bseg-zuonr .
endif .
endif.
endif.
endif.
endselect.
endif .
select single kunnr vkbur into (zsdtab1_itab-kunnr ,
zsdtab1_itab-vkbur) from vbak
where vbeln = zsdtab1_itab-vbeln .
if sy-subrc = 0 .
select single name1 ort01 regio into (zsdtab1_itab-customer ,
zsdtab1_itab-city , zsdtab1_itab-region) from kna1
where kunnr = zsdtab1_itab-kunnr .
endif.
* Get Ship to Party **************************************************
select single * from vbpa where vbeln = zsdtab1_itab-vbeln and
parvw = 'WE' .
if sy-subrc = 0 .
zsdtab1_itab-ship_tp = vbpa-kunnr .
* Get Destination Country of Ship to Party ************
select single * from kna1 where kunnr = vbpa-kunnr .
if sy-subrc = 0 .
select single * from t005t where land1 = kna1-land1
and spras = 'E' .
if sy-subrc = 0 .
zsdtab1_itab-country = t005t-landx .
endif .
endif .
endif .
endif .
endform. " GET_SALES
form update_time.
update zstatus set zupddate = sy-datum
uzeit = sy-uzeit
where programm = 'ZSDDET01' .
endform. " UPDATE_TIME
*& Form DATE_CONVERT
form date_convert using atflv changing date .
data : dt(8) , dat type i .
dat = atflv .
dt = dat .
date = dt .
endform. " DATE_CONVERT
*& Form UPDATE_POST_GOODS_ISSUE_DATE
form update_post_goods_issue_date .
types : begin of itab1_type ,
mblnr like mseg-mblnr ,
budat like mkpf-budat ,
end of itab1_type .
data : itab1 type itab1_type occurs 10 with header line .
loop at mara_itab .
select single * from zsdtab1 where commno = mara_itab-matnr .
if sy-subrc = 0 and zsdtab1-postdate = '00000000' .
refresh itab1 . clear itab1 .
select * from mseg where matnr = mara_itab-matnr and bwart = '601' .
itab1-mblnr = mseg-mblnr .
append itab1 .
endselect .
loop at itab1 .
select single * from mkpf where mblnr = itab1-mblnr .
if sy-subrc = 0 .
itab1-budat = mkpf-budat .
modify itab1 .
endif .
endloop .
sort itab1 by budat .
read table itab1 index 1 .
if sy-subrc = 0 .
update zsdtab1 set postdate = itab1-budat
where commno = mara_itab-matnr .
endif .
endif .
endloop .
endform. " UPDATE_POST_GOODS_ISSUE_DATE
*& Form UPDATE_EXCISE_DATE
form update_excise_date.
types : begin of itab2_type ,
mblnr like mseg-mblnr ,
budat like mkpf-budat ,
end of itab2_type .
data : itab2 type itab2_type occurs 10 with header line .
loop at mara_itab .
select single * from zsdtab1 where commno = mara_itab-matnr .
if sy-subrc = 0 and zsdtab1-excise_dat = '00000000' .
refresh itab2 . clear itab2 .
select * from mseg where matnr = mara_itab-matnr and
( bwart = '601' or bwart = '311' ) .
itab2-mblnr = mseg-mblnr .
append itab2 .
endselect .
loop at itab2 .
select single * from mkpf where mblnr = itab2-mblnr .
if sy-subrc = 0 .
itab2-budat = mkpf-budat .
modify itab2 .
endif .
endloop .
sort itab2 by budat .
read table itab2 index 1 .
if sy-subrc = 0 .
update zsdtab1 set excise_dat = itab2-budat
where commno = mara_itab-matnr .
endif .
endif .
endloop .
endform. " UPDATE_EXCISE_DATE
form get_order using matnr .
types : begin of itab_type ,
vbeln like vbap-vbeln ,
posnr like vbap-posnr ,
end of itab_type .
data : itab type itab_type occurs 10 with header line .
refresh itab . clear itab .
select * from vbap where matnr = mara_itab-matnr .
itab-vbeln = vbap-vbeln .
itab-posnr = vbap-posnr .
append itab .
endselect .
loop at itab .
select single * from vbak where vbeln = itab-vbeln .
if vbak-vbtyp <> 'C' .
delete itab .
endif .
endloop .
loop at itab .
select single * from vbfa where vbelv = itab-vbeln and
posnv = itab-posnr and vbtyp_n = 'H' .
if sy-subrc = 0 .
delete itab .
endif .
endloop .
clear : zsdtab1_itab-vbeln , zsdtab1_itab-bill_doc .
loop at itab .
zsdtab1_itab-vbeln = itab-vbeln .
select single * from vbfa where vbelv = itab-vbeln and
posnv = itab-posnr and vbtyp_n = 'M' .
if sy-subrc = 0 .
zsdtab1_itab-bill_doc = vbfa-vbeln .
endif .
endloop .
endform .
*& Form GET_CARTYPE
form get_cartype using matnr .
select single * from mara where matnr = matnr .
zsdtab1_itab-cartype = mara-satnr .
endform. " GET_CARTYPEHi,
I have analysed your program and I would like to share the following points for better performance of this report:
(a) Select only the fields you need instead of using SELECT * or SELECT SINGLE *. A field list consumes far fewer resources inside a loop, and you have a lot of SELECT SINGLE * statements against very large tables such as VBAP.
(b) Trace with ST05 to find which queries are affecting your system most, or use ST12 in current mode with a small input set (one that runs the report in 20-30 minutes), so that you can see which statements take the most time.
(c) Sort your internal tables properly and use BINARY SEARCH when reading them.
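Point (c) can be sketched as follows (a minimal example against the ausp_itab table from the program above; the literal characteristic number is just an illustration, and note the program currently sorts by ATWRT, so this applies where single keyed reads are needed):

```abap
* Sort once, then every READ TABLE uses a fast binary search
* instead of a linear scan.
sort ausp_itab by atinn.
read table ausp_itab with key atinn = '0000000094' binary search.
if sy-subrc = 0.
  " row found; sy-tabix holds its index
endif.
```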
I think this will help.
Thanks and Regards,
Harsh -
Report painter performance problem...
I have a client which runs a report group consisting of 14 reports... When we run this program, it takes about 20 minutes to get results... I was assigned to optimize this report...
This is what I've done so far
(this is a SAP generated program)...
1. I've checked the tables that the program is using... (a customized table with more than 20,000 entries, and many others)
2. I've created secondary indexes on the main customized table (the one with 20,000 entries) - it improves the performance a bit (results in about 18 minutes)...
3. I divided the report group into 4... 3 reports in each report group... It greatly improves the performance... (but this is not what the client wants)...
4. I've read an article about report group performance saying that it is a bug
(SAP support recognized the fact that we are dealing with a bug in the SAP standard functionality):
http://it.toolbox.com/blogs/sap-on-db2/sap-report-painter-performance-problem-26000
Anyone have the same problem as mine?
Edited by: christopher mancuyas on Sep 8, 2008 9:32 AM
Edited by: christopher mancuyas on Sep 9, 2008 5:39 AM

Report Painter/Writer always creates a performance issue. I never preferred them, since I have the option of a Z report.
Now you can do only one thing: put more checks on the selection screen to filter the data. I think that's the only way.
Amit. -
Performance problems with DFSN, ABE and SMB
Hello,
We have identified a problem with DFS-Namespace (DFSN), Access Based Enumeration (ABE) and SMB File Service.
Currently we have two Windows Server 2008 R2 servers providing the domain-based DFSN in functional level Windows Server 2008 R2 with activated ABE.
The DFSN servers have the most current hotfixes for DFSN and SMB installed, according to http://support.microsoft.com/kb/968429/en-us and http://support.microsoft.com/kb/2473205/en-us
We have only one AD-site and don't use DFS-Replication.
Servers have 2 Intel X5550 4 Core CPUs and 32 GB Ram.
Network is a LAN.
Our DFSN looks like this:
\\contoso.com\home
Contains 10.000 Links
Drive mapping on clients to subfolder \\contoso.com\home\username
\\contoso.com\group
Contains 2500 Links
Drive mapping on clients directly to \\contoso.com\group
On \\contoso.com\group we serve different folders for teams, projects and other groups with different access permissions based on AD groups.
We have to use ABE, so that users see only accessible Links (folders)
We sometimes encounter, multiple times a day, enterprise-wide performance problems lasting about 30 seconds when accessing our namespaces.
After six weeks of researching and analyzing we were able to identify the exact problem.
Administrators create a new DFS-Link in our Namespace \\contoso.com\group with correct permissions using the following command line:
dfsutil.exe link \\contoso.com\group\project123 \\fileserver1\share\project123
dfsutil.exe property sd grant \\contoso.com\group\project123 CONTOSO\group-project123:RX protect replace
This is done a few times a day.
There is no possibility to create the folder and set the permissions in one step.
The DFSN process on our DFSN servers creates the new link and the corresponding folder in C:\DFSRoots.
At this time, we have for example 2000+ clients having an active session to the root of the namespace \\contoso.com\group.
Active session means a Windows Explorer opened to the mapped drive or to any subfolder.
The file server process (Lanmanserver) sends a change notification (SMB-Protocol) to each client with an active session \\contoso.com\group.
All the clients which were getting the notification now start to refresh the folder listing of \\contoso.com\group
This was identified by an network trace on our DFSN-servers and different clients.
Due to ABE, the servers have to compute the folder listing for each request.
The DFS service on the servers doesn't respond to any additional requests for probably 30 seconds. CPU usage increases significantly over this period and goes back to normal afterwards, on our hardware from about 5% to 50%.
Users can't access any DFS namespace during this time, and applications using data from a DFS namespace stop responding.
Side effect: Windows reports on clients a slow-link detection for \\contoso.com\home, which can be offline available for users (described here for WAN-connections: http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx)
The problem doesn't occur when creating a link in \\contoso.com\home, because users have only a mapping to subfolders.
Currently, the problem also doesn't occur for \\contoso.com\app, because users usually don't use Windows Explorer to access this mapping.
Disabling ABE reduces the DFSN freeze time, but doesn't solve the problem.
Problem also occurs with Windows Server 2012 R2 as DFSN-server.
There is a registry key available for clients to avoid the reponse to the change notification (NoRemoteChangeNotify, see http://support.microsoft.com/kb/812669/en-us)
This might fix the problem with DFSN, but it results in other problems for the users. For example, they have to press F5 to refresh every remote directory after a change.
Is there a possibility to disable the SMB change notification on server side ?
TIA and regards,
Ralf Gaudes

Hi,
Thanks for posting in Microsoft Technet Forums.
I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
Thank you for your understanding and support.
Regards.
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place.
How to run a custom transaction in the background after giving the input?
Hi,
I have a problem: how do I execute a custom transaction in the background after the end user saves a variant for the input? In my transaction there is no menu option to save a variant or to execute in the background.
Please suggest how to do this in my custom transaction.

Hi Ramana,
what kind of report do you want to execute? Is it an executable report or a module pool program?
If it is an executable program, then when you execute it (F8) from SE38 and reach the selection screen, the menu bar offers Program -> Execute in Background (F9), where you can schedule your report in the background; or else use tcode SM36 (Define Background Job).
Why do you want to run it in the background via a transaction code if you have these options?
Regards,
Sunil kairam.