AFRU table data - taking long time to access
Hi,
In my report I am accessing the AFRU table, but it takes a lot of time. What should I do? And how can I archive the data from this table?
Regards,
Hi Khushi,
Your query is simple but quite bad in terms of performance.
Change it to something like:
SELECT [...]
FROM afko
INNER JOIN afvc ON afko~aufpl = afvc~aufpl
INNER JOIN afru ON afvc~rueck = afru~rueck
WHERE afko~aufnr = [...]
For more information you can check OSS Note 187906, "Performance: Customer developments in PP and PM".
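Filled in, that skeleton might look like the sketch below. The field list, the target table LT_CONF, and the selection range S_AUFNR are illustrative placeholders only, not taken from the original report.

```abap
* Sketch only: field names and S_AUFNR are illustrative placeholders.
* Driving the access from AFKO by order number lets the database reach
* AFRU via AFVC instead of scanning the whole confirmation table.
TYPES: BEGIN OF ty_conf,
         aufnr TYPE afko-aufnr,
         rueck TYPE afru-rueck,
         rmzhl TYPE afru-rmzhl,
       END OF ty_conf.
DATA lt_conf TYPE STANDARD TABLE OF ty_conf.

SELECT afko~aufnr afru~rueck afru~rmzhl
  FROM afko
  INNER JOIN afvc ON afko~aufpl = afvc~aufpl
  INNER JOIN afru ON afvc~rueck = afru~rueck
  INTO TABLE lt_conf
  WHERE afko~aufnr IN s_aufnr.
```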
Similar Messages
-
CDHDR table query taking long time
Hi all,
A SELECT from the CDHDR table is taking a long time. In the WHERE condition I am giving OBJECTCLAS = 'MAT_FULL', UDATE = SY-DATUM and LANGU = 'EN'.
Any suggestions to improve the performance? I want to select all the articles which were changed on the current date.
regards
shibu
This will always be slow for large data volumes, since CDHDR is designed for quick access by object ID (in this case the material number), not by date.
I'm afraid you would need to introduce a secondary index on OBJECTCLAS and UDATE, if that query is crucial enough to warrant the additional disk space and processing time taken by the new index.
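Assuming such a secondary index is created in SE11 over OBJECTCLAS and UDATE, a date-driven read like the sketch below could then use it (the target table name is a placeholder):

```abap
* Sketch: date-driven read of change headers; efficient only once a
* secondary index on OBJECTCLAS + UDATE exists (created via SE11).
DATA lt_cdhdr TYPE STANDARD TABLE OF cdhdr.

SELECT * FROM cdhdr
  INTO TABLE lt_cdhdr
  WHERE objectclas = 'MAT_FULL'
    AND udate      = sy-datum.
```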
Greetings
Thomas -
Table valueset taking long time to open the LOV
Hi,
We added a table value set to a concurrent program. The table value set shows the transaction number from the RA_INTERFACE_LINES_ALL table. It has a long list, so we enabled the partial-string prompt before opening the long list, but it is still taking a long time.
Please any help on this highly appreciated.
Thanks,
Samba
Hi,
Try modifying the query, or create an index; that will speed up the process.
Thanks & regards
Rajan -
Master data taking Long time in Query Execution
hello Experts
I have an issue while executing a query.
The input parameter for the query is the 0CUSTOMER variable. When I try to select a value via "Select From List" and choose the single-value option (or any other option), loading the values takes a long time and then ends in a short dump.
I want to know why this is happening.
Please help me out with this.
Thanks in advance
Neha
Thanks to all,
I have checked the InfoObject 0CUSTOMER.
The following settings are there on the BEx Explorer tab:
Query Def. Filter Value Selection - Values in Master Data Table
Query Execution Filter Val. Selectn - Only Posted Values for Navigation
Also, the number of values for display in the InfoObject is only 1,000. This takes time only in the Analyzer, not in the Designer.
What do I have to do? Please suggest.
Thanks
Neha -
EWA service data taking long time
Hi,
In our landscape, for one system, the service data collection while generating the EWA is taking more than 800 minutes. How can we reduce this?
Hello,
I understand, but I am sure the data collection is done on the satellite system with the EWA SDCCN task and is populated
via the SM_<SID>CLNT<nnn>_BACK RFC. So have you checked this task log for any errors or timeouts? Every EWA states how long the data collection task took, but you have to look at the data collection task to get more information.
Regards,
Paul -
COEP table query taking longer time
Hi ABAP guru's,
I have a problem with the performance of a SELECT query on table COEP (91 million records in QA);
I want to tune the program for better performance.
Presently it takes nearly 6 hours to execute the program in the background; in the foreground it
gives a dump with a message that the maximum time was exceeded.
The code in the program is:
SELECT WOGBTR OBJNR KSTAR OWAER PERIO FROM COEP
INTO TABLE T_COEP
WHERE KOKRS = P_KOKRS AND
OBJNR IN R_COSTCENTER AND
KSTAR IN R_COSTELEMENT AND
PERIO LE P_PERIO AND
GJAHR EQ G_YEAR.
I am in a support project and I need to fix this issue ASAP; please help me out in tuning the program.
I have seen some other posts in the forum about similar issues; the outcome of those is to use LEDNR = '00'.
I am not sure whether this will work for me.
I can't take the chance of trial and error, as the change has to move to QA before I know its status, and that takes a minimum of a week.
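For reference, if the LEDNR suggestion from those posts applies, the change is a single extra equality in the WHERE clause. A sketch of the selection above with it added (whether an index beginning with LEDNR exists must be verified in the system):

```abap
* Sketch: the original SELECT with LEDNR = '00' added, as suggested
* in the forum posts, so an index beginning with LEDNR can be used.
SELECT wogbtr objnr kstar owaer perio FROM coep
  INTO TABLE t_coep
  WHERE lednr = '00'
    AND kokrs = p_kokrs
    AND objnr IN r_costcenter
    AND kstar IN r_costelement
    AND perio LE p_perio
    AND gjahr EQ g_year.
```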
Regards
Sunil Kumar
Hi Experts,
I have a similar issue; even though I followed the code specified above, the performance is still slow. Please suggest what needs to be done.
select distinct objnr
uspob
INTO TABLE lt_srcc
FROM coep
WHERE kokrs EQ 'ESAM'
AND perio EQ p_period
AND lednr = gc_00
* AND objnr LIKE 'KSESAM%'
AND gjahr EQ p_fisyr
AND kstar LIKE '0000901%'
AND vrgng EQ 'RKIU'
AND uspob LIKE 'KSESAM%'
%_HINTS ORACLE 'INDEX("COEP" "COEP~1")'.
Thanks
Chandramouli -
Updating the table is taking a long time
I have one table, SOLUTION.
In one Java application, P:
1) I insert a new record by passing values to an Oracle procedure A.
2) In the same application, I update the same record by calling an Oracle function B.
Now in another Java application, Q:
3) I try to update the same record using a Hibernate update, but I cannot find the record that I updated initially.
I execute P first and then execute Q.
The insert and the first update happen (I am able to see the record with the first update after some time), but the second update does not happen.
If I wait for some time in application Q, I am able to update the same record.
Can anybody please help here? Why is this happening?
Unless and until application P issues a COMMIT, application Q will NOT see the DML changes made by P.
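A minimal illustration of that visibility rule; the table and column names here are hypothetical, not taken from the applications:

```sql
-- Session P (procedure A inserts, function B updates):
INSERT INTO solution (id, status) VALUES (1, 'NEW');
UPDATE solution SET status = 'FIRST_UPDATE' WHERE id = 1;
-- Until P commits, session Q cannot see row 1 at all, so Q's
-- Hibernate update matches zero rows.
COMMIT;

-- Session Q, after P's commit, now finds and can update the row:
UPDATE solution SET status = 'SECOND_UPDATE' WHERE id = 1;
COMMIT;
```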
-
Query taking a long time (more than 24 hours) to extract the data
Hi ,
The query is taking a long time, more than 24 hours, to extract the data. Please find the query and explain-plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please advise.
SQL> explain plan for
select a.account_id, round(a.account_balance, 2) account_balance,
       nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
       to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
       to_char(nvl(i.payment_due_date,
               to_date('30-12-9999', 'dd-mm-yyyy')), 'DD-MON-YYYY') due_date,
       ah.current_balance - ah.previous_balance amount,
       decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
  and a.account_type_id = 1000002
  and round(a.account_balance, 2) > 0
  and (ah.invoice_id is not null or ah.adjustment_id is not null)
  and ah.current_balance > ah.previous_balance
  and ah.invoice_id = i.invoice_id(+)
  and a.account_balance > 0
order by a.account_id, ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA
I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and likewise account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
So the formatted query is:
select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;
You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:
create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);
All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus, each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647; -
Process Chain taking long time in loading data in infocube
Dear Expert,
We are loading data through a process chain into the AR cube. It takes data from
PSA -> DSO -> Activation -> Index Deletion -> DTP (load InfoCube) -> Index Creation -> Create Aggregates.
The index creation step is taking a long time every day, around 9 to 10 hours.
When we go into RSRV and repair the InfoCube, the data load happens fast, so we are doing this (RSRV) every day. In DB02 we have seen that 96% of the tablespace is used.
Please suggest a permanent solution.
Please also suggest whether this is a BI issue or a Basis issue.
Regards,
Ankit
Hi,
We are loading data through a process chain into the AR cube. It takes data from
PSA -> DSO -> Activation -> Index Deletion -> DTP (load InfoCube) -> Index Creation -> Create Aggregates.
In the above steps, instead of "Create Aggregates" it should be the roll-up process for aggregates.
You can ask the Basis team to check the tablespace in transaction DB02OLD/DB02.
Check if there is a long-running job in SM66/SM50 and kill that job.
Also check that there are enough batch processes available to perform the steps.
Hope this helps.
"Assigning points is the ways to say thanks on SDN".
Br
Alok -
My select query (2M records) returns within a second, but creating a table (NOLOGGING) based on the same select clause takes a long time.
Can anybody give me a suggestion as to which part I should look at to improve the performance?
Plan
SELECT STATEMENT ALL_ROWS Cost: 11 Bytes: 655 Cardinality: 1
19 FILTER
18 NESTED LOOPS Cost: 11 Bytes: 655 Cardinality: 1
15 NESTED LOOPS Cost: 9 Bytes: 617 Cardinality: 1
12 NESTED LOOPS Cost: 8 Bytes: 481 Cardinality: 1
9 NESTED LOOPS Cost: 6 Bytes: 435 Cardinality: 1
6 NESTED LOOPS Cost: 4 Bytes: 209 Cardinality: 1
3 TABLE ACCESS BY INDEX ROWID TABLE OYSTER_WEB3.TRANSACTION Cost: 2 Bytes: 155 Cardinality: 1
2 BITMAP CONVERSION TO ROWIDS
1 BITMAP INDEX SINGLE VALUE INDEX (BITMAP) OYSTER_WEB3.IX_LINE_COMMODITY_ID
5 TABLE ACCESS BY INDEX ROWID TABLE OYSTERPLUS_DATA.BRIO_SUPPLIERS Cost: 2 Bytes: 54 Cardinality: 1
4 INDEX UNIQUE SCAN INDEX (UNIQUE) OYSTERPLUS_DATA.PK_BRIO_SUPPLIERS Cost: 1 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID TABLE OYSTER3.FLAT_SITE_MV Cost: 2 Bytes: 226 Cardinality: 1
7 INDEX UNIQUE SCAN INDEX (UNIQUE) OYSTER3.PK_FLAT_SITE_MV Cost: 1 Cardinality: 1
11 TABLE ACCESS BY INDEX ROWID TABLE OYSTER3.SITE_COMMODITY_CODING Cost: 2 Bytes: 46 Cardinality: 1
10 INDEX UNIQUE SCAN INDEX (UNIQUE) OYSTER3.PK_SITE_COMMODITY_CODING Cost: 1 Cardinality: 1
14 TABLE ACCESS BY INDEX ROWID TABLE OYSTERPLUS_DATA.BRIO_COMMODITIES Cost: 1 Bytes: 136 Cardinality: 1
13 INDEX UNIQUE SCAN INDEX (UNIQUE) OYSTERPLUS_DATA.PK_BRIO_COMMODITIES Cost: 0 Cardinality: 1
17 TABLE ACCESS BY INDEX ROWID TABLE OYSTER3.SUPPLIER_ALIAS Cost: 2 Bytes: 38 Cardinality: 1
16 INDEX UNIQUE SCAN INDEX (UNIQUE) OYSTER3.PK_SUPPLIER_ALIAS Cost: 1 Cardinality: 1 -
Program SAPLSBAL_DB taking long time for BALHDR table entries
Hi Guys,
I am running a Z program in both the Quality and Production systems, which uploads data from the desktop.
In the Quality system the Z program uploads the data successfully, but in the Production system it takes a very long time and sometimes even times out.
As per the trace analysis, program SAPLSBAL_DB is taking a long time for BALHDR table entries.
Can anybody provide me any suggestion.
Regards,
Shyamal
These are QA screenshots, where there is no issue; but we are seeing very long times in CRP.
Regards,
Shyamal -
Impdp taking long time for only few MBs data...
Hi All,
I have one query related to impdp. I have an expdp dump file of size 47 MB. When I restore this dump using impdp it takes a long time. The table data initially loads very fast, but then the ALTER FUNCTION/PROCEDURE/VIEW steps take almost 4 to 5 hours.
I have no idea why it is taking so long. Earlier I saw that one DB link had failed with the error "TNS name could not be resolved", so I created the DB link before running impdp, but got the same result. Can anyone suggest what could cause a 47 MB dump to take this long?
Note: both the expdp and impdp database versions are 11.2.0.3.0. If I import the same dump file into 11.2.0.1.0, it finishes in a few minutes.
Thanks...
Also read:
Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
DataPump Import (IMPDP) is Very Slow at Object/System/Role Grants, Default Roles [ID 1267951.1] -
Extracting data from ECC to BW, but data loading is taking a long time
Hi All,
I am extracting the data from ECC to the BI system, but the data load is taking a long time. The InfoPackage has been running for the last 6 hours and is still showing yellow. I manually set it to red, deleted the request, and applied a repeat of the last delta, but the same problem occurs. The status shows that the background job has not finished in the source system. We asked Basis and they killed that job; we scheduled the chain again, and again the same problem occurs. How can I solve this issue?
Thanks ,
Chandu
Hi,
There are different places to track your job. Once your job is triggered in BW, you can track your load job where exactly it is taking more time and why. Follow below steps:
1) After InfoPackage is triggered, then take the request number and go to source system to check your extraction job status.
You can get the job status by taking the request number from BW and going to transaction SM37 in ECC. Give the request number surrounded by '*' wildcards, and give '*' as the user name.
Job name: REQ_XXXXXX
User Name: *
Check the job status: whether the job is completed, cancelled, or ended in a short dump. If the job is still running, check in SM66 whether you can see any process; if not, check ST22 or SM21 in ECC accordingly. If the job is complete, then check the same on the BW side.
2) Check whether the data has arrived in PSA; if not, check whether the transfer routines or start routines contain bad SQL or code, and similarly check the update rules.
3) Once it is through the source system (ECC), transfer rules, and update rules, the next task, updating the data, might sometimes take more time, depending on parameters such as the number of parallel processes used to update the database. Check whether updating the database is taking more time; you may also need to check with the DBA.
At all times you should see at least one process running in SM66 until your job completes; if not, you will see a log in ST22.
Let me know if you still have questions.
Assigning points is the only way of saying thanks in SDN.
Thanks,
Kumar. -
SSRS Reports taking long time to load
Hello,
Problem : SSRS Reports taking long time to load
My System environment : Visual Studio 2008 SP1 and SQL Server 2008 R2
Production Environment : Visual Studio 2008 SP1 and SQL Server 2008 R2
I have created a parameterized report (6 parameters); it fetches data from one table. The table has 1 year and 6 months of data, and I am selecting parameters for only 1 month (about 2,500 records). It takes almost 2 minutes and 30 seconds
to load the report.
This report runs efficiently on my system (the report loads in only 5 to 6 seconds), but in
production it takes 2 minutes 30 seconds.
I checked the execution log from production and found the following timings:
Data retrieval (approx.): 10 sec | Processing (approx.): 15 sec | Rendering (approx.): 2 min 5 sec
But the confusing point is that if I run the same report at a different time, the overall output time is the same (approx. 2 min 30 sec), but:
Data retrieval (approx.): more than 1 min | Processing (approx.): 15 sec | Rendering (approx.): more than 1 min
So one question: why are the timings different?
My doubts are:
1) If the query (the procedure that retrieves the data) were the problem, then it should always take more time.
2) If the report structure were the problem, then rendering would also always take a long time.
For this (the 2nd point) I read on a blog that rendering depends on the environment, e.g. network bandwidth, RAM, CPU usage, and the number of users accessing the same report at a time.
So I tested the report when no other user was working on any report, but this failed too (same result: output in 2 min 30 sec).
From the network team I got the result that there is no issue or overload in CPU usage or RAM, and no issue with network bandwidth.
The production database server and report server are different machines (but on the same network).
I checked that on the database server SQL Server is using almost all the RAM (23 GB out of 24 GB).
I tried to reduce the allocated memory down to 2 GB (a trial solution I got from blogs), but this also failed.
One hint I got from a colleague was to change the allocated memory setting for SQL Server from static to dynamic
(I guess the above point is the same), but I could not find a static/dynamic memory setting option.
I did the below steps:
Connected to the SQL Server instance.
Right-clicked on the instance, went to Properties, then to the Memory tab.
I found three sections: 1) Server memory 2) Other memory 3) a section for "Configured values and Running values".
Then I tried to reduce the maximum server memory to 2 GB (as mentioned above).
All trials failed, and I could not find the root cause of this issue.
Can anyone please help? (It's a bit urgent.)
Hi UdayKGR,
According to your description, your report takes too long to load in your production environment. Right?
In this scenario, since the report runs quickly in the development environment, we initially suspected an issue with data retrieval. However, based on the information in the execution log, the longest time is spent on the rendering part. So we suggest you optimize
the report itself to reduce the rendering time. Please refer to the link below:
My report takes too long to render
Here is another article about overall performance optimization for Reporting Services:
Reporting Services Performance and Optimization
If you have any question, please feel free to ask.
Best Regards,
Simon Hou -
Update ztable is taking long time
Hi All,
I have run 5 jobs with the same program at the same time, but when we check the DB trace,
ZS01 is taking a long time, as shown below, even though ZS01 holds only a small amount of data.
In the DB trace below, the fetch on ZS01 takes 2,315,485 microseconds (about 2.3 seconds). How can this be reduced?
HH:MM:SS.MS Duration Program ObjectName Op. Curs Array Rec RC Conn
2:36:15 AM 2,315,485 SAPLZS01 ZS01 FETCH 294 1 1 0 R/3
The code is shown below;
you can check it in program SAPLZS01, include LZS01F01.
FORM UPDATE_ZS01.
IF ZS02-STATUS = '3'.
IF Z_ZS02_STATUS = '3'. "previous status is ERROR
EXIT.
ELSE.
SELECT SINGLE FOR UPDATE * FROM ZS01
WHERE PROC_NUM = ZS02-PROC_NUM.
CHECK SY-SUBRC = 0.
ADD ZS02-MF_AMT TO ZS01-ERR_AMT.
ADD 1 TO ZS01-ERR_INVOI.
UPDATE ZS01.
ENDIF.
ENDIF.
My question is: why does updating the Z table take such a long time,
and how can I reduce the time, i.e. make the update of the Z table faster?
Thanks in advance,
regards
Suni
Try the code like this:
DATA: wa_zs01 TYPE zs01.
FORM update_zs01.
  IF zs02-status = '3'.
    IF z_zs02_status = '3'. "previous status is ERROR
      EXIT.
    ELSE.
      " Change: select into an explicit work area instead of the table header line
      SELECT SINGLE FOR UPDATE * FROM zs01
        INTO wa_zs01
        WHERE proc_num = zs02-proc_num.
      CHECK sy-subrc = 0.
      ADD zs02-mf_amt TO wa_zs01-err_amt.
      ADD 1 TO wa_zs01-err_invoi.
      UPDATE zs01 FROM wa_zs01.
    ENDIF.
  ENDIF.
ENDFORM.
Also, I think this SELECT on ZS01 sits inside a SELECT loop over ZS02,
which might also slow the process.
When you access the database, always fetch the data into a work area or internal table
and work with that.
Accessing the database like this, or with SELECT ... ENDSELECT, is inefficient programming.
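If the ZS01 read really does sit inside a loop over ZS02, one common alternative (a sketch using the names from this thread) is to read all the needed ZS01 rows in a single access. Note that, unlike SELECT SINGLE FOR UPDATE, this does not lock the rows, so any required locking would have to be handled separately (e.g. via an enqueue):

```abap
* Sketch: one database access for all needed ZS01 rows instead of one
* SELECT SINGLE per ZS02 record. lt_zs02 is assumed to be filled earlier.
DATA: lt_zs02 TYPE STANDARD TABLE OF zs02,
      lt_zs01 TYPE STANDARD TABLE OF zs01.

IF lt_zs02 IS NOT INITIAL. " FOR ALL ENTRIES must never run on an empty table
  SELECT * FROM zs01
    INTO TABLE lt_zs01
    FOR ALL ENTRIES IN lt_zs02
    WHERE proc_num = lt_zs02-proc_num.
ENDIF.
```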