Why does garbage collection take so much time on JRockit?
My company uses <br>
<b>BEA WebLogic 8.1.2<br>
JRockit version 1.4.2<br>
Windows 2003 32-bit<br>
4 GB RAM<br>
<br>
-Xms = 1300<br>
-Xmx = 1300<br></b>
and we are running an EJB application.<br>
My problem is that JRockit takes a long time in garbage collection. How can I solve this problem? My application keeps going down because of it.
<br>
This is my information from JRockit:
<br>
Gc Algorithm: JRockit Garbage Collection System currently running strategy: Single generational, parallel mark, parallel sweep.
<br>
Total Garbage Collection Count: 10340
<br>
Last GC End: Wed May 10 13:55:37 ICT 2006
<br>
Last GC Start: Wed May 10 13:55:35 ICT 2006
<br>
<b>Total Garbage Collection Time: 2:53:13.1</b>
<br>
GC Handles Compaction: true
<br>
Concurrent: false
<br>
Generational: false
<br>
Incremental: false
<br>
Parallel: true
<br>
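Given that the strategy above is single-generational and not concurrent, every collection stops the application and works over the full 1.3 GB heap, which matches the roughly two-second pause visible between "Last GC Start" and "Last GC End". A hedged sketch of alternative start-up options (flag names are from the JRockit 1.4.2 documentation and should be verified against your exact build; the nursery size is only an illustration):

```shell
# Sketch: switch JRockit to its generational concurrent collector so that
# most pauses touch only the small nursery instead of the whole heap.
# -Xgc:gencon      : generational concurrent collector (assumed available here)
# -Xns128m         : nursery (young generation) size, illustrative value
# -Xverbose:memory : log GC activity so pause times can be compared
JAVA_OPTIONS="-Xms1300m -Xmx1300m -Xgc:gencon -Xns128m -Xverbose:memory"
echo "$JAVA_OPTIONS"
```

Comparing the -Xverbose:memory output before and after such a change should show whether the long pauses shrink; if not, the heap may simply be full of live data and the application itself needs profiling.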
Hi,
I suggest you check a few places where you can see the status:
1) SM37 job log (in the source system if the load is from R/3, or in BW if it is a datamart load): give the request name and it should show you the details of the request. If it is active, make sure the job log is being updated at frequent intervals.
Also check whether there is a 'sysfail' for any data packet in SM37.
2) SM66: get the job details (server name, PID, etc. from SM37) and check in SM66 whether the job is running (in the source system if the load is from R/3, or in BW if it is a datamart load). See whether it is accessing/updating tables or not doing anything at all.
3) RSMO: see what is available in the Details tab. It may be stuck in the update rules.
4) ST22: check whether any short dump has occurred (in the source system if the load is from R/3, or in BW if it is a datamart load).
5) SM58 and BD87: check for pending tRFCs and IDocs.
Once you identify the problem you can rectify the error.
If all the records are in the PSA you can pull them from the PSA to the target. Otherwise you may have to pull them again from the source InfoProvider.
If the load is running and you can see it active in SM66, you can wait for some time to let it finish. You can also use SM50 / SM51 to see what is happening at the system level, such as reading/inserting into tables.
If you believe it is active and running, you can verify this by checking whether the number of records in the data tables has increased.
SM21 (the system log) can also be helpful.
RSA7 will show LUWs, which can mean more than one record.
Thanks,
JituK
Similar Messages
-
Garbage Collection takes a long time.....
*** NOTES 19-JUN-2001 10:27:33 [20-JUN-2001 00:27:33 ASST3] websupport
*** Logged by contact: Weber Wang, 2-87705935 ext. 63
Product Name and Version, including Service Pack (if any):
=============================================
weblogic6.0sp2
Platform (OS Version)
=================
solaris2.7
JDK Version (if applicable)
=====================
jdk130
If your problem relates to a 3rd party product (JDBC driver, database, or web/proxy
server), please provide vendor name, product name, version number
==============================================================================================
Detailed Problem Description
======================
We have set the Java parameters below to start the WebLogic server, but garbage collection
takes too long and causes the browser to halt. When does garbage collection
do a full GC, and will it degrade WebLogic's performance?
java -server -verbose:class -verbose:gc -verbose:jni -XX:newSize=512m -XX:MaxNewSize=512m
-Xms768m -Xmx768m -XX:SurvivorRatio=2
Error Message/Stack Trace
=====================
[Full GC 65065K->27729K(65216K), 1.2891364 secs]
[Full GC 49079K->28008K(65216K), 1.0507784 secs]
[Full GC 65111K->29714K(65216K), 1.1813595 secs]
[Full GC 51063K->29988K(65216K), 1.0506374 secs]
[Full GC 64951K->27867K(65216K), 1.1312621 secs]
[Full GC 49216K->28146K(65216K), 1.0534394 secs]
[Full GC 65095K->29993K(65216K), 1.6070138 secs]
[Full GC 51343K->30182K(65216K), 1.3617954 secs]
I think your young generation size is waaaaay too high - its performance impact becomes negative at or around 1/2 of the total heap size.
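Following that reasoning: with NewSize at 512m of a 768m heap, the old generation is only 256m, so tenured data overflows it and nearly every collection becomes a full GC, which is exactly what the log shows. A hedged sketch of revised options (the values below are illustrative, not tested against this workload):

```shell
# Illustrative only: young generation reduced to ~1/4 of the 768m heap so the
# old generation has room and minor collections replace most full GCs.
JAVA_OPTIONS="-server -Xms768m -Xmx768m -XX:NewSize=192m -XX:MaxNewSize=192m -XX:SurvivorRatio=2"
echo "$JAVA_OPTIONS"
```

Re-running with -verbose:gc after such a change should show mostly minor collections instead of the back-to-back full GCs above.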
weber <[email protected]> wrote:
--
Dimitri -
Why does an HTML report take more time than a PDF one?
Hi,
I have created a report in Reports 6i. When I run the report on the web with FORMAT = PDF it runs very fast and shows all the pages in 2 minutes. But when I run with
FORMAT = HTML it shows the first page in 2 minutes; after that it takes a lot of time to show the remaining pages. If there are more than 40 pages, the browser just freezes.
Can somebody give me the reason?
Is there any way to rectify this?
Thanks a lot.
Ram.
Hi Senthil,
I am running with the parameters below.
Format : HTML
Destination : Screen.
My default browser is IE. When I try to run using Netscape it showed only 1 page out of 34 pages.
When I run with format PDF it is faster, but the font size is small when it opens up. Of course the user can zoom in.
If I increase the report width from 11 to 14, the font size becomes very small when it opens in the browser.
Is there any way to set the zoom level when I run as PDF?
Thanks for your help.
Ram. -
Why does import of a change request into production take more time than into quality?
Hello All,
Why does import of a change request into production take more time than import into quality?
Hi Jahangeer,
I believe it takes the same time to import a request into both quality and production, as they will be in sync.
Even then, if it takes more time in production, that may depend on the change request.
Thanks
Pavan -
Why does a view have no stored data? And why does a view take more time
Why does a view have no stored data? And why does querying a view take more time?
What would happen if a view had stored data?
user12941450 wrote:
I want to know the reason why querying a view is slower than querying a normal table?..
Untrue.
For example, take a table with 2 lakh records and a view on that table.
If I make a query like (select name, address from table) it works faster than (select name, address from view)..
You are interpreting the results incorrectly.
A view is a SQL statement. The only difference is that the SQL statement is stored in the database's dictionary. Let's consider the following view:
create or replace view foo_view as select * from emp
When you use the view as follows:
select * from foo_view
Oracle sees it as follows:
select * from (select * from emp)
This is no slower, and no faster, than providing the following SQL to Oracle:
select * from emp
So if you observe a difference in performance between using plain SQL versus using that same SQL via a view, there are other reasons for that difference in performance. The reason is NOT that views are slower. -
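The claim above can be checked directly. A minimal sketch (assuming the standard emp demo table and the DBMS_XPLAN package, available from Oracle 9i onwards):

```sql
-- Both statements should produce identical execution plans, since Oracle
-- merges the view text into the query before optimisation.
create or replace view foo_view as select * from emp;

explain plan for select * from emp;
select * from table(dbms_xplan.display);

explain plan for select * from foo_view;
select * from table(dbms_xplan.display);
```

If the two plans match, any timing difference you measure comes from elsewhere (caching effects, different predicates, fetch sizes), not from the view itself.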
Oracle Coherence: first read/write operation takes more time
I'm currently testing the Oracle Coherence Java and C++ versions, and in both, the first read/write operation against any local, distributed, or near cache takes more time than subsequent read/write operations. Is this because of setup work happening inside the actual HashMap, serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques available for boosting Coherence cache performance.
In which case, why bother using Coherence? You're not really gaining anything, are you?
What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirements when fully configured in a cluster of "1 microsecond for 100000 data collection" on a continuous basis.
Just my two cents.
Cheers,
Steve
NB. I don't work for Oracle, so maybe they have a different opinion. :) -
Bind variable code takes more time to complete?
Hello, my database is Oracle 11g.
I have the same PL/SQL code in two versions: the first without a bind variable and the second with one. Usually, a bind variable should take less time. But here
the bind-variable version takes more time than the regular code... Can anyone please explain why?
SQL> alter system flush shared_pool;
System altered.
SQL> declare
2 cursor c1 is select * from emp where rownum < 50;
3 l_start NUMBER DEFAULT DBMS_UTILITY.GET_TIME;
4 v_cnt number;
5 begin
6 for i in c1 loop
7 SELECT count(*) into v_cnt
8 FROM rate
9 WHERE rate_id IN (SELECT rate_id
10 FROM ratedetail
11 WHERE benefit_id = i.benefit_id)
12 AND effective_date =
13 TO_DATE ('2011-01-23 00:00:00', 'yyyy-MM-dd HH24:MI:SS')
14 AND rate_type_id = 1;
15 end loop;
16 DBMS_OUTPUT.PUT_LINE('total minutes....'||ROUND(ROUND((DBMS_UTILITY.GET_TIME - l_start)/100, 2)
/60,3));
17 end;
18 /
total minutes.....06
PL/SQL procedure successfully completed.
SQL> alter system flush shared_pool;
System altered.
SQL>
SQL> declare
2 cursor c1 is select benefit_id from emp where rownum < 50;
3 l_start NUMBER DEFAULT DBMS_UTILITY.GET_TIME;
4 v_cnt number;
5 begin
6 for i in c1 loop
7 execute immediate 'SELECT count(*)
8 FROM rate
9 WHERE rate_id IN (SELECT rate_id
10 FROM ratedetail
11 WHERE benefit_id = :x)
12 AND effective_date = trunc(sysdate)-202
13 AND rate_type_id = 1'
14 into v_cnt using i.benefit_id;
15 end loop;
16 DBMS_OUTPUT.PUT_LINE('total minutes....'||ROUND(ROUND((DBMS_UTILITY.GET_TIME - l_start)/100, 2)
/60,3));
17 end;
18 /
total minutes.....061
PL/SQL procedure successfully completed.
SQL>
Shrinika wrote:
Thanks for the clarification. Now I understand.
One final question on this thread before I close it.
My database is set to CURSOR_SHARING=FORCE for some reason. It seems somebody applied a "quick and dirty fix" to a "database is slow" problem.
BAD PRACTICE
My question is: when we use a bind variable, does it parse the SQL code every time, or does it reuse the execution plan?
In my database, it reuses the execution plan... Just checking... When we set CURSOR_SHARING=FORCE, it should generate the execution plan
for every unique SQL statement... Is that correct? Am I confused?
If by "parse" you mean a "hard parse" (which generates the execution plan), then the answer is NO. As you observed, it reuses the execution plan.
For example, with the CURSOR_SHARING=FORCE setting, the following SQLs
select employee_no, first_name, last_name from employees where dept_no = 10 ;
and
select employee_no, first_name, last_name from employees where dept_no = 20 ;
would tend to reuse the same execution plan, since both of these will be rewritten by Oracle (before execution) as
select employee_no, first_name, last_name from employees where dept_no = :SYS01 ;
Hope this helps.
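Note also that the slower block in this thread did not need EXECUTE IMMEDIATE at all: static SQL inside PL/SQL is bound automatically by the PL/SQL engine, so the dynamic version only adds a per-iteration soft parse. A hedged sketch (reusing the emp/rate/ratedetail tables from the example above):

```sql
-- Static SQL: i.benefit_id is turned into a bind variable by PL/SQL itself,
-- so there is no per-iteration parse of a dynamic statement.
declare
  v_cnt number;
begin
  for i in (select benefit_id from emp where rownum < 50) loop
    select count(*)
      into v_cnt
      from rate
     where rate_id in (select rate_id
                         from ratedetail
                        where benefit_id = i.benefit_id)
       and effective_date = trunc(sysdate) - 202
       and rate_type_id = 1;
  end loop;
end;
/
```

This keeps the bind-variable benefit while avoiding the EXECUTE IMMEDIATE overhead entirely.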
Edited by: user503699 on Aug 14, 2010 3:55 AM -
Import of SCA files in the Development tab of the Transport Studio takes more time
Hi,
After checking the files in to the Transport Studio, the import of the SCA files starts in the Development tab of the Transport Studio.
The import takes a long time. Why does this happen?
Am I missing any configuration? Please specify in detail.
Thanks in Advance,
Sathya
SC: sap.com_SAP-JEE:
SDM-deploy
Returncode : Not executed.
How do I check the username, password and URL for SDM?
Log file of Repository-import:
Info:Starting Step Repository-import at 2009-10-13 22:15:49.0484 +5:00
Info:Component:sap.com/SAP_JTECHS
Info:Version :SAP AG.20060119105400
Info:3. PR is of type TCSSoftwareComponent
Info:Component:sap.com/SAP_BUILDT
Info:Version :SAP AG.20060411165600
Info:2. PR is of type TCSSoftwareComponent
Info:Component:sap.com/SAP-JEE
Info:Version :SAP AG.20060119105300
Info:1. PR is of type TCSSoftwareComponent
Info:Step Repository-import ended with result 'not needed' at 2009-10-13 22:15:49.0500 +5:00
Log File of CBS-make :
The import failed.
Info:build process already running: waiting for another period of 30000 ms
Info:no changes on the CBS request queue (DM0_DEMObp1_D) after a waiting time of 14430000 ms
Fatal:The request queue is not processed by the CBS during the given time intervall => TCS cannot import the request because queue is not empty
Fatal:There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.
Fatal Exception:com.sap.cms.tcs.interfaces.exceptions.TCSCommunicationException: communication error: The request queue is not processed during the given time intervall. There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.:communication error: The request queue is not processed during the given time intervall. There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.
com.sap.cms.tcs.interfaces.exceptions.TCSCommunicationException: communication error: The request queue is not processed during the given time intervall. There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.
at com.sap.cms.tcs.client.CBSCommunicator.importRequest(CBSCommunicator.java:369)
at com.sap.cms.tcs.core.CbsMakeTask.processMake(CbsMakeTask.java:120)
at com.sap.cms.tcs.core.CbsMakeTask.process(CbsMakeTask.java:347)
at com.sap.cms.tcs.process.ProcessStep.processStep(ProcessStep.java:77)
at com.sap.cms.tcs.process.ProcessStarter.process(ProcessStarter.java:179)
at com.sap.cms.tcs.core.TCSManager.importPropagationRequests(TCSManager.java:376)
at com.sap.cms.pcs.transport.importazione.ImportManager.importazione(ImportManager.java:216)
at com.sap.cms.pcs.transport.importazione.ImportQueueHandler.execImport(ImportQueueHandler.java:585)
at com.sap.cms.pcs.transport.importazione.ImportQueueHandler.startImport(ImportQueueHandler.java:101)
at com.sap.cms.pcs.transport.proxy.CmsTransportProxyBean.startImport(CmsTransportProxyBean.java:583)
at com.sap.cms.pcs.transport.proxy.CmsTransportProxyBean.startImport(CmsTransportProxyBean.java:559)
at com.sap.cms.pcs.transport.proxy.LocalCmsTransportProxyLocalObjectImpl0.startImport(LocalCmsTransportProxyLocalObjectImpl0.java:1736)
at com.sap.cms.ui.wl.Custom1.importQueue(Custom1.java:1169)
at com.sap.cms.ui.wl.wdp.InternalCustom1.importQueue(InternalCustom1.java:2162)
at com.sap.cms.ui.wl.Worklist.onActionImportQueue(Worklist.java:880)
at com.sap.cms.ui.wl.wdp.InternalWorklist.wdInvokeEventHandler(InternalWorklist.java:2338)
at com.sap.tc.webdynpro.progmodel.generation.DelegatingView.invokeEventHandler(DelegatingView.java:87)
at com.sap.tc.webdynpro.progmodel.controller.Action.fire(Action.java:67)
at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.doHandleActionEvent(WindowPhaseModel.java:422)
at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.processRequest(WindowPhaseModel.java:133)
at com.sap.tc.webdynpro.clientserver.window.WebDynproWindow.processRequest(WebDynproWindow.java:344)
at com.sap.tc.webdynpro.clientserver.cal.AbstractClient.executeTasks(AbstractClient.java:143)
at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.doProcessing(ApplicationSession.java:298)
at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessingStandalone(ClientSession.java:705)
at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessing(ClientSession.java:659)
at com.sap.tc.webdynpro.clientserver.session.ClientSession.doProcessing(ClientSession.java:227)
at com.sap.tc.webdynpro.clientserver.session.RequestManager.doProcessing(RequestManager.java:150)
at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:56)
at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doPost(DispatcherServlet.java:47)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:390)
at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:264)
at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:347)
at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:325)
at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:887)
at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:241)
at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:148)
at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
at java.security.AccessController.doPrivileged(Native Method)
at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
Info:Step CBS-make ended with result 'fatal error' ,stopping execution at 2009-10-14 02:16:28.0296 +5:00 -
Hi all
I want to fetch just twenty thousand records from a table. My query takes too long to fetch them. I have posted my working query; could you correct the query for me? Thanks in advance.
Query
select
b.Concatenated_account Account,
b.Account_description description,
SUM(case when(Bl.ACTUAL_FLAG='B') then
((NVL(Bl.PERIOD_NET_DR, 0)- NVL(Bl.PERIOD_NET_CR, 0)) + (NVL(Bl.PROJECT_TO_DATE_DR, 0)- NVL(Bl.PROJECT_TO_DATE_CR, 0)))end) "Budget_2011"
from
gl_balances Bl,
gl_code_combinations GCC,
psb_ws_line_balances_i b ,
gl_budget_versions bv,
gl_budgets_v gv
where
b.CODE_COMBINATION_ID=gcc.CODE_COMBINATION_ID and bl.CODE_COMBINATION_ID=gcc.CODE_COMBINATION_ID and
bl.budget_version_id =bv.BUDGET_VERSION_ID and gv.budget_version_id= bv.budget_version_id
and gv.latest_opened_year in (select latest_opened_year-3 from gl_budgets_v where latest_opened_year=:BUDGET_YEAR )
group by b.Concatenated_account, b.Account_description
Hi,
If this question is related to SQL then please post it in the SQL forum.
Otherwise, provide more information on how this SQL is being used, and whether you want to tune the SQL itself or the way it fetches the information from the DB and displays it in OAF.
Regards,
Sandeep M. -
Takes more time to start & shutdown the database
Hi All,
I have created a database in Oracle9i by following the manual steps. Everything was created successfully and I am able to start and shut down the database.
The problem is that the STARTUP command takes a long time to start the database, and the same happens during shutdown. Can anyone help me?
the follwing are the pfile specifications:
db_name=practice
instance_name=practice
control_files= 'E:\practice\control\control1.ctl',
'D:\practice\control\control2.ctl'
db_block_size=2048
db_cache_size=20m
shared_pool_size=20m
background_dump_dest='E:\practice\bdump'
user_dump_dest='E:\practice\udump'
Thanks in advance.
> Everything was created successfully and am able to start the database and shutdown also.
Please restate the above.
> problem is while giving the startup command it takes more time to start the database and the same during the shutdown
How have you compared? Could it be O/S resources or the installation of additional software? You have not mentioned the O/S and the complete version of your database.
You can review the following, although I am a bit unclear:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/instreco.htm#440322
Adith -
PDF Form Takes More Time To Open when using designer 7.1.3129.1.296948
Hi All,
Adobe Reader Version : 8 and above.
designer : 7.1.3129.1.296948
When I develop the Adobe interactive form using Designer 7.1.3129.1.296948 and open it in Adobe Reader 8.1.2 or 9.1, it takes a long time to open (nearly 20 minutes).
When I open the same form in Adobe Reader 7.1, it opens fine.
How can I resolve this problem in Adobe Reader 8.1.2, 9.0 and 9.1?
Regards,
Boopathi M
Hi,
I have seen this exact same problem when I created/developed an Adobe form on a PC which had Adobe LiveCycle Designer 7.1 but only Adobe Reader 7.
Once the form is created on a machine which has Reader 7, it does not matter whether you try to open that PDF in Reader 8 or 9; it will take 20-30 minutes to open, freeze your PC, etc.
Please ensure that the machine where the form was first created has Adobe Reader 8 or higher installed on it.
I hope this helps,
Regards,
Hanoz -
Hi All,
When I develop the Adobe interactive form using Designer 7.1.3129.1.296948, I then convert it to PDF.
When I open the PDF form it takes a long time to open (using Reader version 8.1.2).
How can I resolve this problem?
Regards,
Boopathi M
Hi,
I have seen this exact same problem when I created/developed an Adobe form on a PC which had Adobe LiveCycle Designer 7.1 but only Adobe Reader 7.
Once the form is created on a machine which has Reader 7, it does not matter whether you try to open that PDF in Reader 8 or 9; it will take 20-30 minutes to open, freeze your PC, etc.
Please ensure that the machine where the form was first created has Adobe Reader 8 or higher installed on it.
I hope this helps,
Regards,
Hanoz -
'BAPI_GOODSMVT_CREATE' takes more time for creating material document
Hi Experts,
I am using 'BAPI_GOODSMVT_CREATE' in my custom report, and it takes a long time to create material documents.
Please let me know if there is any option to overcome this issue.
Thanks in advance
Regards,
Leo
Hi,
please check whether some of the following OSS notes apply to your problem:
[Note 838036 - AFS: Performance issues during GR with ref. to PO|https://service.sap.com/sap/support/notes/838036]
[Note 391142 - Performance: Goods receipt for inbound delivery|https://service.sap.com/sap/support/notes/391142]
[Note 1414418 - Goods receipt for customer returns: Various corrections|https://service.sap.com/sap/support/notes/1414418]
Another idea is not to commit after each call, but to commit in packages, e.g. after every 1000 BAPI calls.
Otherwise, I am afraid you cannot do much about the performance of a standard BAPI. Maybe there is a customer enhancement taking too long inside the BAPI, but that has to be analysed by you. To analyse performance, execute your program via transaction SE30.
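The commit-in-packages idea can be sketched as follows (hypothetical ABAP: the gt_movements table, its components, and the package size of 1000 are illustrations, not from the original report):

```abap
" Sketch: commit once per package of BAPI calls instead of after every call.
FIELD-SYMBOLS: <ls_mvt> LIKE LINE OF gt_movements.
DATA: lv_calls TYPE i VALUE 0.

LOOP AT gt_movements ASSIGNING <ls_mvt>.
  CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
    EXPORTING
      goodsmvt_header = <ls_mvt>-header
      goodsmvt_code   = <ls_mvt>-code
    TABLES
      goodsmvt_item   = <ls_mvt>-items
      return          = <ls_mvt>-return.

  lv_calls = lv_calls + 1.
  IF lv_calls MOD 1000 = 0.            " illustrative package size
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.
  ENDIF.
ENDLOOP.

" Commit the remainder of the last package.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.
```

Whether batching is safe depends on whether a failed call in the middle of a package can be rolled back and retried in your process, so test the error handling before adopting this.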
Regards
Adrian -
'BAPI_GOODSMVT_CREATE' takes more time for creating material document for the 1st time
Hi Experts,
I am doing goods movement using BAPI_GOODSMVT_CREATE in my custom code.
Then, through some functional configuration, material documents and TRs and TOs get created.
Now I need to get the TO and TR numbers from the LTAK table, passing the material document number and year that I got from the BAPI above.
The problem I am facing is very strange.
Only on the first run do I not find TR and TO values in the LTAK table. On subsequent runs I do get entries in LTAK, provided there is a wait time of 5 seconds after the BAPI call.
I found the thread "'BAPI_GOODSMVT_CREATE' takes more time for creating material document" describing a similar issue, but with no solution or explanation.
Note 838036 says something similar, but it seems obsolete.
Kindly share your expertise and opinions.
Thanks,
Anil
Hi,
please check whether some of the following OSS notes apply to your problem:
[Note 838036 - AFS: Performance issues during GR with ref. to PO|https://service.sap.com/sap/support/notes/838036]
[Note 391142 - Performance: Goods receipt for inbound delivery|https://service.sap.com/sap/support/notes/391142]
[Note 1414418 - Goods receipt for customer returns: Various corrections|https://service.sap.com/sap/support/notes/1414418]
Another idea is not to commit after each call, but to commit in packages, e.g. after every 1000 BAPI calls.
Otherwise, I am afraid you cannot do much about the performance of a standard BAPI. Maybe there is a customer enhancement taking too long inside the BAPI, but that has to be analysed by you. To analyse performance, execute your program via transaction SE30.
Regards
Adrian -
Hi All,
I have cloned KSB1 tcode to custom one as required by business.
The query below takes more time than expected.
Here V_DB_TABLE = COVP.
Values in Where clause are as follows
OBJNR in ( KSBB010000001224 BT KSBB012157221571)
GJAHR in blank
VERSN in '000'
WRTTP in '04' and '11'
all others are blank
VT_VAR_COND = ( CPUDT BETWEEN '20091201' and '20091208' )
SELECT (VT_FIELDS) INTO CORRESPONDING FIELDS OF GS_COVP_EXT
FROM (V_DB_TABLE)
WHERE LEDNR = '00'
AND OBJNR IN LR_OBJNR
AND GJAHR IN GR_GJAHR
AND VERSN IN GR_VERSN
AND WRTTP IN GR_WRTTP
AND KSTAR IN LR_KSTAR
AND PERIO IN GR_PERIO
AND BUDAT IN GR_BUDAT
AND PAROB IN GR_PAROB
AND (VT_VAR_COND).
When I check the table for this condition it has only 92 entries.
But when I execute the program it takes as long as 3 hours.
Could anyone help me on this?
> 1. Don't use SELECT/ENDSELECT; instead use the INTO TABLE addition.
> 2. Avoid using the CORRESPONDING addition; create a type and reference it.
> If the select is going for a dump because of storage limitations, then use cursors.
You got three large NOs... all three recommendations are wrong!
The SE16 test is going in the right direction... but what was filled? Nobody knows!
Select-options:
Did you ever try to trace the SE16? The generic statement has an IN condition for every field!
Without the information about what was actually filled, nobody can say anything; there
are at least 2**n combinations possible!
Use ST05 for SE16 and check the actual statement plus the explain plan!