Performance issue - urgent, please
We did an upgrade from 9.2.0.8 to 10.2.0.4 on AIX 5L (64-bit).
After that we experienced a slowdown in performance. The frontend application is PeopleSoft.
Kindly help me get this resolved.
Thanks in advance
- What environment are we talking about (dev, test, production, etc.)? If it is not the development environment, did you experience the same performance problems when you upgraded the lower environments?
- You are using a packaged application (Peoplesoft). Have you checked the Peoplesoft documentation for whatever Peoplesoft application you are using to see what parameters are suggested for a 10.2.0.4 database? That's probably different than the recommendations for a 9.2.0.8 database.
- Are the statistics up to date? Did the method of gathering statistics change (10.2 will have, by default, a background job that gathers statistics overnight)?
- Do you have an example of a query that is slow in 10.2 that is fast in 9.2 (for some definition of slow and fast)? Can you post both query plans?
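To capture a plan for a problem statement, something like this works on both versions (the SELECT below is just a placeholder for your slow query):

```sql
-- Sketch: capture and display the execution plan for a slow statement.
-- DBMS_XPLAN.DISPLAY is available in both 9.2 and 10.2.
EXPLAIN PLAN FOR
  SELECT * FROM dual;  -- replace with the slow PeopleSoft query

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```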
Justin
Similar Messages
-
Oracle Forms6i Query Performance issue - Urgent
Hi All,
I'm using Oracle Forms 6i and Oracle DB 9i.
I'm facing a performance issue in query forms.
The detail block of the form takes a long time to load the data.
The form contains 2 non-database blocks:
1.HDR - 3 input parameters
2.DETAILS - Grid - Details
HDR input fields
1.Company Code
2.Company Account No
3.Customer Name
The DETAILS grid displays the details.
Here there are 2 tables involved
1.Table1 - 1 crore (10 million) records
2.Table2 - 4 crore (40 million) records
In a form procedure, one cursor is built and fetched directly, and the values are assigned to the form block fields.
I've pasted the query below:
SELECT
t1.entry_dt,
t2.authoriser_code,
t1.company_code,
t1.company_ac_no,
initcap(t1.customer_name) cust_name,
t2.agreement_no,
t1.customer_id
FROM
table1 t1,
table2 t2
WHERE
(t2.trans_no = t1.trans_no or t2.temp_trans_no = t1.trans_no)
AND t1.company_code = nvl(:hdr.l_company_code,t1.company_code)
AND t1.company_ac_no = nvl(:hdr.l_company_ac_no,t1.company_ac_no)
AND lower(t1.customer_name) LIKE lower(nvl('%'||:hdr.l_customer_name||'%' ,t1.customer_name))
GROUP BY
t1.entry_dt,
t2.authoriser_code,
t1.company_code,
t1.company_ac_no,
t1.customer_name,
t2.agreement_no,
t1.customer_id;
Where Clause Analysis
1.Condition 1 uses an OR operator (two different columns of table2 are compared with one column of table1)
2.LIKE operator
3.All the columns have indexes, but they are not used properly; there is always a full table scan
4.NVL check
5.If I run the query directly in the database it is a little faster; from the frontend it is very slow
Input parameter - query retrieval data - limits:
Only company code: record count will be 50 - 500 records
Only company code and company account number: record count will be 1 - 5 records
Company code, company account number, and customer name: record count will be 1 - 5 records
I have tried the following:
1.Split the query using UNION (OR clause separated) - nested loops COST 850, nested loops COST 750 - index by rowid cost is 160, index by rowid cost is 152, full table access...
2.Dynamic SQL build - DBMS_SQL.DEFINE_COLUMN ...
3.Given only one input parameter - nested loops COST 780, nested loops COST 780 - index by rowid cost is 148, index by rowid cost is 152, full table access...
Still I'm facing the same issue.
Please help me out on this.
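For reference, the UNION split I tried (attempt 1 above) looked roughly like this; the column list is abbreviated, and the guard predicate in the second branch is an assumption to avoid duplicate rows when both join conditions match:

```sql
-- Sketch: split the OR-join into two UNION ALL branches so each can use its own index.
-- Predicates are from the posted query; column list abbreviated for clarity.
SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no
FROM   table1 t1, table2 t2
WHERE  t2.trans_no = t1.trans_no
AND    t1.company_code = :hdr.l_company_code
UNION ALL
SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no
FROM   table1 t1, table2 t2
WHERE  t2.temp_trans_no = t1.trans_no
AND    t2.trans_no <> t1.trans_no          -- guard against rows matched by both branches
AND    t1.company_code = :hdr.l_company_code;
```

The NVL(:bind, column) pattern is also worth removing by building only the predicates the user actually filled in (dynamic SQL), since comparing a column to NVL of itself tends to defeat index use.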
Thanks and Regards,
Oracle1001
Sudhakar P wrote:
the below query takes more than one minute while updating the records through Pro*C.
Execute 562238 161.03 174.15 7 3932677 2274833 562238
Hi Sudhakar,
If the database is capable of executing 562,238 update statements in one minute, then that's pretty good, don't you think?
Your real problem is in the application code which probably looks something like this in pseudocode:
for i in (some set containing 562,238 rows)
loop
<your update statement with all the bind variables>
end loop;
If you transform your code to do a single update statement, you'll gain a lot of seconds.
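For illustration only (the table and column names below are made up, since the original statement wasn't posted), the row-by-row loop collapses into one statement like:

```sql
-- Sketch: one set-based UPDATE instead of 562,238 single-row updates.
-- target_table / staging_table and their columns are hypothetical names.
UPDATE target_table t
SET    t.status = 'PROCESSED'
WHERE  t.id IN (SELECT s.id
                FROM   staging_table s
                WHERE  s.ready = 'Y');
```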
Regards,
Rob. -
*** performance issue, advice please...
Hi Experts,
I have a z-table1 with some fields. I wrote a select query to fetch the data and got the data. Now, from this internal table, I have to pass a field called SERIAL_NUM to another z-table2 to fetch the CO documents/items.
But in z-table2 the SERIAL_NUM field is neither a key field nor indexed, so it takes minutes to fetch the CO document numbers/items even for 10 serial numbers.
z-table2 has about 42 lakh (4.2 million) records. I guess that even to search the document numbers for these 10 serial numbers it scans all 42 lakh records each time, and as a result it takes minutes.
Sample code as follows-
LOOP AT it_zeps06 INTO wa_zeps06.
SELECT SINGLE
belnr
buzei
billcode
vbeln
INTO (ws_belnr, ws_buzei, ws_billcode, ws_vbeln)
FROM zeps03
WHERE serialnum = wa_zeps06-serial_no.
Endloop.
Please advise me how to overcome this performance issue. Will it help if I create an index on the serial_num field in z-table2? Or is there any other way?
Appreciate your response.
Regards.
DC
Hi,
The first issue is that you are looping over a SELECT statement. What you could do is create another internal table with the same fields as the select statement above, collect the serial numbers, and then use the table it_zeps06 with the FOR ALL ENTRIES option.
This should take much less time. Since you are looping over a SELECT statement, each round trip from the application server to the database server is expensive.
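A sketch of that rewrite against the posted tables (the serialnum column is included in the result so rows can be matched back to it_zeps06; the exact structure is an assumption):

```abap
* Sketch: one array fetch with FOR ALL ENTRIES instead of SELECT SINGLE in a loop.
* zeps03 field names follow the posted code; it_zeps03's structure is assumed.
DATA: BEGIN OF it_zeps03 OCCURS 0,
        serialnum TYPE zeps03-serialnum,
        belnr     TYPE zeps03-belnr,
        buzei     TYPE zeps03-buzei,
        billcode  TYPE zeps03-billcode,
        vbeln     TYPE zeps03-vbeln,
      END OF it_zeps03.

IF it_zeps06[] IS NOT INITIAL.
  SELECT serialnum belnr buzei billcode vbeln
    INTO CORRESPONDING FIELDS OF TABLE it_zeps03
    FROM zeps03
    FOR ALL ENTRIES IN it_zeps06
    WHERE serialnum = it_zeps06-serial_no.
  SORT it_zeps03 BY serialnum.
ENDIF.

LOOP AT it_zeps06 INTO wa_zeps06.
  READ TABLE it_zeps03 WITH KEY serialnum = wa_zeps06-serial_no
                       BINARY SEARCH.
  " ... use it_zeps03-belnr, it_zeps03-vbeln, etc. here ...
ENDLOOP.
```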
Thanks...
Preethanm S -
*Get MSEG data
IF i_mara[] IS NOT INITIAL.
SELECT mblnr
mjahr
zeile
bwart
matnr
werks
lgort
sobkz
bwtar
menge
wempf
bukrs
grund
INTO TABLE i_temp_mseg
FROM mseg
FOR ALL ENTRIES IN i_mara
WHERE bwart IN s_bwart AND
matnr EQ i_mara-matnr AND
werks EQ p_werks AND
lgort IN s_lgort .
IF sy-subrc EQ 0.
DELETE ADJACENT DUPLICATES FROM i_temp_mseg.
ENDIF.
ENDIF.
*Get MKPF data
IF i_temp_mseg[] IS NOT INITIAL.
SELECT mblnr
mjahr
budat
usnam
INTO TABLE i_temp_mkpf
FROM mkpf
FOR ALL ENTRIES IN i_temp_mseg
WHERE mblnr = i_temp_mseg-mblnr
AND mjahr = i_temp_mseg-mjahr
AND budat IN s_budat.
IF sy-subrc EQ 0.
DELETE ADJACENT DUPLICATES FROM i_temp_mkpf.
ENDIF.
ENDIF.
These tables fetch a large amount of data, so it is taking a lot of time. Can anyone help me with how to increase the performance of these two select statements?
Hi,
first select from the secondary material index table BSIM, which has all the necessary fields as primary keys:
MATNR Material Number
BWKEY Valuation area
BWTAR Valuation Type
BELNR Accounting Document Number
GJAHR Fiscal Year
BUZEI Number of Line Item Within Accounting Document
With the result fetch the data from MSEG and MKPF using FOR ALL ENTRIES.
That's the best way to get good performance.
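A sketch of that access path, with caveats: the key mapping BSIM (BELNR, GJAHR, BUZEI) to MSEG (MBLNR, MJAHR, ZEILE) and the use of the plant as the valuation area (BWKEY) are assumptions to verify against your configuration:

```abap
* Sketch: read the secondary index table BSIM first (keyed by MATNR),
* then fetch MSEG via FOR ALL ENTRIES on the document keys.
DATA: BEGIN OF i_bsim OCCURS 0,
        belnr TYPE bsim-belnr,
        gjahr TYPE bsim-gjahr,
        buzei TYPE bsim-buzei,
      END OF i_bsim.

IF i_mara[] IS NOT INITIAL.
  SELECT belnr gjahr buzei
    INTO TABLE i_bsim
    FROM bsim
    FOR ALL ENTRIES IN i_mara
    WHERE matnr = i_mara-matnr
      AND bwkey = p_werks.          " assumes valuation area = plant
  IF i_bsim[] IS NOT INITIAL.
    SELECT mblnr mjahr zeile bwart matnr werks lgort
      INTO CORRESPONDING FIELDS OF TABLE i_temp_mseg
      FROM mseg
      FOR ALL ENTRIES IN i_bsim
      WHERE mblnr = i_bsim-belnr   " assumed BSIM -> MSEG key mapping
        AND mjahr = i_bsim-gjahr
        AND zeile = i_bsim-buzei
        AND bwart IN s_bwart
        AND lgort IN s_lgort.
  ENDIF.
ENDIF.
```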
Regards,
Clemens -
iRecruitment external candidate issue - urgent, please
Hi All,
We have iRecruitment enabled for external site visitors on the website. This was working fine with IE 7. Now users are using IE 8 and they are facing some problems, like not being able to upload a resume.
So my question is: are there any setups we need to do for IE 8 for this iRecruitment version?
Thanks and Regards,
joshna.
From this post I came to know that a forms update is required if we update the forms versions. We may face different issues which can't be anticipated, so I want your suggestions on how to go forward.
This is a must to meet the minimum requirements and to ensure that your client browser and OS are certified with Oracle EBS.
What I suggested to the client is to enable a tip message on the external site visitor page saying that this application is not compatible with IE 8 for the time being, but I need to fix it ASAP.
I believe there is no other fix (or workaround). If you do not want to apply the required patches to meet the minimum requirements, then you may log an SR and see if Oracle Support would help (I doubt it, since they will ask you to apply those patches first).
Thanks,
Hussein -
Hi All,
I have a requirement where I have to create 2 restricted key figures, for month and YTD. This has to be displayed as below:
Under the Amount heading we have to get two columns, i.e., Month and YTD. How is this possible?
Amount
** Month YTD
Regards
KK
Hi Karanam,
Thanks for your quick response.
First I dragged and dropped the Amount key figure, and under that I created the structure. But when I try to drag and drop the Month and YTD RKFs, they come under the key figure only, not under the structure. If I am missing some steps, please let me know.
Regards -
Hi,
I have extended an AM and deployed it through substitution, and I bounced Apache too.
But I am facing an error, i.e.:
oracle.apps.fnd.framework.OAException: The Application Module found by the usage name HzPuiAddressAM has a definition name &AM_DEFINITION_NAME different than &DEFINITION_NAME specified for the region.
at oracle.apps.fnd.framework.webui.OAPageErrorHandler.prepareException(OAPageErrorHandler.java:1223)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:1960)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:497)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:418)
at oa_html._OA._jspService(_OA.java:88)
at oracle.jsp.runtime.HttpJsp.service(HttpJsp.java:119)
at oracle.jsp.app.JspApplication.dispatchRequest(JspApplication.java:417)
at oracle.jsp.JspServlet.doDispatch(JspServlet.java:267)
at oracle.jsp.JspServlet.internalService(JspServlet.java:186)
at oracle.jsp.JspServlet.service(JspServlet.java:156)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:588)
at oracle.jsp.provider.Jsp20RequestDispatcher.forward(Jsp20RequestDispatcher.java:162)
at oracle.jsp.runtime.OraclePageContext.forward(OraclePageContext.java:187)
at oa_html._OA._jspService(_OA.java:98)
at oracle.jsp.runtime.HttpJsp.service(HttpJsp.java:119)
at oracle.jsp.app.JspApplication.dispatchRequest(JspApplication.java:417)
at oracle.jsp.JspServlet.doDispatch(JspServlet.java:267)
at oracle.jsp.JspServlet.internalService(JspServlet.java:186)
at oracle.jsp.JspServlet.service(JspServlet.java:156)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:588)
at org.apache.jserv.JServConnection.processRequest(JServConnection.java:456)
at org.apache.jserv.JServConnection.run(JServConnection.java:294)
at java.lang.Thread.run(Thread.java:570)
## Detail 0 ##
oracle.apps.fnd.framework.OAException: The Application Module found by the usage name HzPuiAddressAM has a definition name &AM_DEFINITION_NAME different than &DEFINITION_NAME specified for the region.
at oracle.apps.fnd.framework.webui.OAWebBeanContainerHelper.createApplicationModule(OAWebBeanContainerHelper.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.getApplicationModule(OAWebBeanHelper.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.getDataObjectName(OAWebBeanHelper.java(Compiled Code))
at oracle.apps.fnd.framework.webui.beans.form.OAFormValueBean.getDataObjectName(OAFormValueBean.java:370)
at oracle.apps.fnd.framework.webui.OAPageBean.createAndAddHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.addHiddenPKFields(OAPageBean.java(Compiled Code))
at oracle.apps.fnd.framework.webui.OAPageBean.processRequest(OAPageBean.java:2377)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:1711)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:497)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:418)
at oa_html._OA._jspService(_OA.java:88)
at oracle.jsp.runtime.HttpJsp.service(HttpJsp.java:119)
at oracle.jsp.app.JspApplication.dispatchRequest(JspApplication.java:417)
at oracle.jsp.JspServlet.doDispatch(JspServlet.java:267)
at oracle.jsp.JspServlet.internalService(JspServlet.java:186)
at oracle.jsp.JspServlet.service(JspServlet.java:156)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:588)
at oracle.jsp.provider.Jsp20RequestDispatcher.forward(Jsp20RequestDispatcher.java:162)
at oracle.jsp.runtime.OraclePageContext.forward(OraclePageContext.java:187)
at oa_html._OA._jspService(_OA.java:98)
at oracle.jsp.runtime.HttpJsp.service(HttpJsp.java:119)
at oracle.jsp.app.JspApplication.dispatchRequest(JspApplication.java:417)
at oracle.jsp.JspServlet.doDispatch(JspServlet.java:267)
at oracle.jsp.JspServlet.internalService(JspServlet.java:186)
at oracle.jsp.JspServlet.service(JspServlet.java:156)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:588)
at org.apache.jserv.JServConnection.processRequest(JServConnection.java:456)
at org.apache.jserv.JServConnection.run(JServConnection.java:294)
at java.lang.Thread.run(Thread.java:570)
What might be the issue? Are there any steps I missed here?
rbojja,
Please go through the Dev guide; there is a bug in OAF in that a root AM cannot be substituted! Check if your AM is a root AM. If yes, then you would either have to extend your controller or add a nested AM.
--Mukul -
I have a problem with my WinXP logon: it turned into the Windows 2000 classic type instead of the WinXP style. Can anyone tell me how to revert it to the WinXP style, please?
Control Panel -> User Accounts -> Pick a Task.... -> Change the way users log on or off -> check the Use Welcome Screen and apply
-
Performance Issues - Urgent Help Required.
Hi All,
I have some performance problems on the application that is running on WLS 6.1
SP2.
1. I have a UI (jsp), in which the user can select either single or multiple product
IDs. If the user just selects one product ID and then submit the request to the
server the response is faster. However when the user submits multiple product
IDs, all these are submitted to the server and it takes a longer response time.
I was just thinking of generating multiple requests in batches (say 5) to optimise the performance. If I submit multiple requests, how will I be able to manage the responses and display them in the presentation layer? Any comments / suggestions are welcome.
2. In my application, I have data access objects which fetch the data from the database. After the rows are fetched, this data is processed by a business process which converts it to Java objects and returns them to clients. The business process of converting the data fetched from the database to Java objects is slow. We need to improve the response time. What do we need to do?
3. Is there any performance benchmarking info / comparison available between WebLogic Server 6.1 and WebLogic Server 7 or any higher versions?
Will appreciate any help to address the above problem.
Regards,
Renga
To really figure out what's going on, you need to get some performance numbers, more specifically where things are slowing down. It could be anything from a SQL query doing a full table scan instead of using an index, to some method somewhere slowing things down, or even something slow on the network. You might want to try something like JProbe or OptimizeIt, which can profile your application and show you the slow methods/modules of your application.
Haider
-
Module pool issue (urgent, please help)
Hi All,
I have a requirement where, when I click on an image in the module pool, I have to navigate to another screen, but:
1. How can I know that the user has clicked the image?
2. What is the use of PF_STATUS?
Please help me out. I have already loaded the image on my module pool screen but don't know how to go further; please help me as soon as possible.
Thanks in advance
Message was edited by:
Rachit Khanna
Dialog Status for Lists
To allow the user to communicate with the system when a list is displayed, the lists must be able to direct user actions to the ABAP program. As described in User Actions on Screens, function codes are used to do this. Function codes are maintained in the GUI status of the list screen. You define a GUI status using the Menu Painter tool in the ABAP Workbench. The system assigns function codes to list-specific user actions.
The most important of these functions is for selecting list lines by double-clicking. As described in Using a GUI Status, the double-click function is always linked to the F2 key. If a function code is assigned to the F2 key in the GUI status, it will be triggered when you double-click.
The Standard List Status
As with normal screens, you can define your own GUI status for lists and attach it to a list level using the SET PF-STATUS statement. If you do not set a particular GUI status, the system sets a default list status for the list screen in an executable program. In other programs, for example, when you call a list from screen processing, you must set this status explicitly using the statement
SET PF-STATUS space.
This default interface always contains at least the functions described in the Standard List section.
Unlike normal dialog statuses, the default list status is affected by the ABAP program.
If you define event blocks in your program using the event keywords AT LINE-SELECTION or AT PFnn, the system automatically activates the corresponding functions in the default list status.
The list title is set with the SET TITLEBAR statement. To display an ampersand character (&), repeat it in the title (&&).
Examples
Example for dialog status in a list.
REPORT demo_list_menu_painter.
START-OF-SELECTION.
SET PF-STATUS 'TEST'.
WRITE: 'Basic list, SY-LSIND =', sy-lsind.
AT LINE-SELECTION.
WRITE: 'LINE-SELECTION, SY-LSIND =', sy-lsind.
AT USER-COMMAND.
CASE sy-ucomm.
WHEN 'TEST'.
WRITE: 'TEST, SY-LSIND =', sy-lsind.
ENDCASE.
This program uses a status TEST, defined in the Menu Painter.
Function key F5 has the function code TEST and the text Test for demo.
Function code TEST is entered in the List menu.
The function codes PICK and TEST are assigned to pushbuttons.
The user can trigger the AT USER-COMMAND event either by pressing F5, or by choosing List -> Test for demo, or by choosing the pushbutton Test for demo. The user can trigger the AT LINE-SELECTION event by selecting a line.
Example of setting a dialog status for the current list
REPORT demo_list_set_pf_status_1.
DATA: fcode TYPE TABLE OF sy-ucomm,
wa_fcode TYPE sy-ucomm.
START-OF-SELECTION.
wa_fcode = 'FC1 '. APPEND wa_fcode TO fcode.
wa_fcode = 'FC2 '. APPEND wa_fcode TO fcode.
wa_fcode = 'FC3 '. APPEND wa_fcode TO fcode.
wa_fcode = 'FC4 '. APPEND wa_fcode TO fcode.
wa_fcode = 'FC5 '. APPEND wa_fcode TO fcode.
wa_fcode = 'PICK'. APPEND wa_fcode TO fcode.
SET PF-STATUS 'TEST'.
WRITE: 'PF-Status:', sy-pfkey.
AT LINE-SELECTION.
IF sy-lsind = 20.
SET PF-STATUS 'TEST' EXCLUDING fcode.
ENDIF.
WRITE: 'Line-Selection, SY-LSIND:', sy-lsind,
/ ' SY-PFKEY:', sy-pfkey.
AT USER-COMMAND.
IF sy-lsind = 20.
SET PF-STATUS 'TEST' EXCLUDING fcode.
ENDIF.
WRITE: 'User-Command, SY-LSIND:', sy-lsind,
/ ' SY-UCOMM:', sy-ucomm,
/ ' SY-PFKEY:', sy-pfkey.
Suppose that the function codes FC1 to FC5 are defined in the status TEST and assigned to pushbuttons: The function code PICK is assigned to function key F2 .
When the program starts, the user can create detail lists by selecting a line or choosing one of the function codes FC1 to FC5. For all secondary lists up to level 20, the user interface TEST is the same as for the basic list:
On list level 20, SET PF-STATUS 'TEST' EXCLUDING fcode deactivates all function codes that create detail lists. This prevents the user from causing a program termination by trying to create detail list number 21.
Example of setting a dialog status for the current list
REPORT demo_list_set_pf_status_2.
START-OF-SELECTION.
WRITE: 'SY-LSIND:', sy-lsind.
AT LINE-SELECTION.
SET PF-STATUS 'TEST' IMMEDIATELY.
After executing the program, the output screen shows the basic list and the user interface predefined for line selection (with function code PICK).
When you choose Choose, the user interface changes. However, since the AT LINE-SELECTION processing block does not contain an output statement, the system does not create a detail list: The status TEST is defined as in the previous example.
Example: Titles of detail lists.
REPORT demo_list_title .
START-OF-SELECTION.
WRITE 'Click me!' HOTSPOT COLOR 5 INVERSE ON.
AT LINE-SELECTION.
SET TITLEBAR 'TIT' WITH sy-lsind.
WRITE 'Click again!' HOTSPOT COLOR 5 INVERSE ON.
In this program, a new title is set for each detail list. The title is defined as follows: "Title for Detail List &1". -
E71x startup issue - urgent, please!
Hello,
I installed some apps through the Ovi Store (the last one was Nokia Messaging). Once installed, you are asked to restart the device. I did, but nothing loads and the device is blocked! I can just enter the PIN code; after that nothing loads, and some seconds later it is blocked again.
Please help me.
-
Urgent: general ABAP performance issue
Hi folks,
I did some development in a new Smart Form and it is working fine, but the database share of the runtime is 76%. I use code similar to the one below, with various conditions, in 12 different places. Is it possible to reduce the cost of this type of code? Please check it and let me know how I can do it, as soon as possible. Also, what percentage is considered good for this type of performance measurement?
DATA : BEGIN OF ITVBRPC OCCURS 0,
LV_POSNR LIKE VBRP-POSNR,
END OF ITVBRPC.
DATA : BEGIN OF ITKONVC OCCURS 0,
LV_KNUMH LIKE KONV-KNUMH,
LV_KSCHL LIKE KONV-KSCHL,
END OF ITKONVC.
DATA: BEGIN OF ITKONHC OCCURS 0,
LV_KNUMH LIKE KONH-KNUMH,
LV_KSCHL LIKE KONH-KSCHL,
LV_KZUST LIKE KONH-KZUST,
END OF ITKONHC.
DATA: BEGIN OF ITKONVC1 OCCURS 0,
LV_KWERT LIKE KONV-KWERT,
END OF ITKONVC1.
DATA : BEGIN OF ITCALCC OCCURS 0,
LV_KWERT LIKE KONV-KWERT,
END OF ITCALCC.
DATA: COUNTC(3) TYPE n,
TOTALC LIKE KONV-KWERT.
SELECT POSNR FROM VBRP INTO ITVBRPC
WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
APPEND ITVBRPC.
ENDSELECT.
LOOP AT ITVBRPC.
SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
APPEND ITKONVC.
ENDSELECT.
ENDLOOP.
SORT ITKONVC BY LV_KNUMH.
DELETE ADJACENT DUPLICATES FROM ITKONVC.
LOOP AT ITKONVC.
SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
APPEND ITKONHC.
ENDSELECT.
ENDLOOP.
LOOP AT ITKONHC.
SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH AND
KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
APPEND ITCALCC.
ENDSELECT.
endloop.
LOOP AT ITCALCC.
COUNTC = COUNTC + 1.
TOTALC = TOTALC + ITCALCC-LV_KWERT.
ENDLOOP.
MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
MOVE TOTALC TO LV_CKWERT.
It's urgent.
Thanks.
Bye,
suresh
You need to use FOR ALL ENTRIES instead of a SELECT inside the loop.
Try this:
DATA : BEGIN OF ITVBRPC OCCURS 0,
VBELN LIKE VBRP-VBELN,
LV_POSNR LIKE VBRP-POSNR,
END OF ITVBRPC.
DATA: IT_VBRPC_TMP like ITVBRPC occurs 0 with header line.
DATA : BEGIN OF ITKONVC OCCURS 0,
LV_KNUMH LIKE KONV-KNUMH,
LV_KSCHL LIKE KONV-KSCHL,
END OF ITKONVC.
DATA: BEGIN OF ITKONHC OCCURS 0,
LV_KNUMH LIKE KONH-KNUMH,
LV_KSCHL LIKE KONH-KSCHL,
LV_KZUST LIKE KONH-KZUST,
END OF ITKONHC.
DATA: BEGIN OF ITKONVC1 OCCURS 0,
KNUMH LIKE KONV-KNUMH,
KSCHL LIKE KONV-KSCHL,
LV_KWERT LIKE KONV-KWERT,
END OF ITKONVC1.
DATA : BEGIN OF ITCALCC OCCURS 0,
LV_KWERT LIKE KONV-KWERT,
END OF ITCALCC.
DATA: COUNTC(3) TYPE n,
TOTALC LIKE KONV-KWERT.
*SELECT POSNR FROM VBRP INTO ITVBRPC
*WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
*APPEND ITVBRPC.
*ENDSELECT.
SELECT VBELN POSNR FROM VBRP INTO TABLE ITVBRPC
WHERE VBELN = INV_HEADER-VBELN AND
ARKTX = WA_INVDATA-ARKTX .
If sy-subrc eq 0.
IT_VBRPC_TMP[] = ITVBRPC[].
Sort IT_VBRPC_TMP by vbeln posnr.
Delete adjacent duplicates from IT_VBRPC_TMP comparing vbeln posnr.
SELECT KNUMH KSCHL FROM KONV
INTO TABLE ITKONVC
FOR ALL ENTRIES IN IT_VBRPC_TMP
WHERE KNUMV = LV_VBRK-KNUMV AND
KPOSN = IT_VBRPC_TMP-LV_POSNR AND
KSCHL = 'ZLAC'.
If sy-subrc eq 0.
SORT ITKONVC BY LV_KNUMH.
DELETE ADJACENT DUPLICATES FROM ITKONVC COMPARING LV_KNUMH.
SELECT KNUMH KSCHL KZUST FROM KONH
INTO TABLE ITKONHC
FOR ALL ENTRIES IN ITKONVC
WHERE KNUMH = ITKONVC-LV_KNUMH AND
KSCHL = 'ZLAC' AND
KZUST = 'Z02'.
If sy-subrc eq 0.
SELECT KNUMH KSCHL KWERT FROM KONV
INTO TABLE ITKONVC1
FOR ALL ENTRIES IN ITKONHC
WHERE KNUMH = ITKONHC-LV_KNUMH AND
KSCHL = ITKONHC-LV_KSCHL AND
KNUMV = LV_VBRK-KNUMV.
Endif.
Endif.
Endif.
*LOOP AT ITVBRPC.
*SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
*LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
*APPEND ITKONVC.
*ENDSELECT.
*ENDLOOP.
*SORT ITKONVC BY LV_KNUMH.
*DELETE ADJACENT DUPLICATES FROM ITKONVC.
*LOOP AT ITKONVC.
*SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
*APPEND ITKONHC.
*ENDSELECT.
*ENDLOOP.
*LOOP AT ITKONHC.
*SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH *AND
*KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
*MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
*APPEND ITCALCC.
*ENDSELECT.
*endloop.
LOOP AT ITKONVC1.
COUNTC = COUNTC + 1.
TOTALC = TOTALC + ITKONVC1-LV_KWERT.
ENDLOOP.
MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
MOVE TOTALC TO LV_CKWERT. -
Hi,
Can someone please tell me and explain: if I have a performance issue on the portal side, what cache settings do I have to choose in RSRT and then at the reporting agent level?
I would appreciate it if you could also explain it to me in detail.
I will appreciate and award points.
Thanks
Sakshi
Hi Sakshi,
For your question pls read the following document carefully, so you will know the correct information,
regarding your question.
<b>Read Mode</b>
The read mode determines how the OLAP processor gets data during navigation. Three alternatives are supported:
1. Read when navigating/expanding the hierarchy
In this method, the system transports the smallest amount of data from the database to the OLAP processor but the number of read processes is the largest.
In the "Read when navigating" mode below, data is requested in a hierarchy drilldown for the fully expanded hierarchy. In the "Read when navigating/expanding the hierarchy" mode, data in the hierarchy is aggregated by the database and transferred to the OLAP processor from the lowest hierarchy level displayed in the start list. When expanding a hierarchy node, the system intentionally reads this node's children.
You can improve the performance of queries with large presentation hierarchies by creating aggregates in a middle hierarchy level that is greater than or equal to the start level.
2. Read when navigating
The OLAP processor only requests the data required for each query navigation status in the Business Explorer. The data required is read for each navigation step.
In contrast to the "Read when navigating/expanding the hierarchy" mode, the system always fully reads presentation hierarchies at tree level.
When expanding nodes, the OLAP processor can read the data from the main memory.
When accessing the database, the system uses the most suitable aggregate table and, if possible, aggregates in the database itself.
3. Read everything at once
There is only one read process in this mode. When executing the query in the Business Explorer, the data required for all possible navigation steps for this query is read to the OLAP processor's main memory area. When navigating, all new navigation statuses are aggregated and calculated from the main memory data.
The "Read when navigating/expanding the hierarchy" mode has a markedly better performance in almost all cases than the other two modes. This is because the system only requests the data that the user wants to see in this mode.
The "Read when navigating" setting, in contrast to "Read when navigating/expanding the hierarchy", only has a better performance for queries with presentation hierarchies.
In contrast to the two previous modes, the "Read everything at once" setting also has a better performance with queries with free characteristics. The idea behind aggregates, that is working with pre-aggregated data, is least supported in the "Read everything at once" mode. This is because the OLAP processor carries out aggregation in each query view.
We recommend you choose the "Read when navigating/expanding the hierarchy" mode.
Only use a different mode in exceptional circumstances.
The "Read everything at once" mode can be useful in the following cases:
The InfoProvider does not support selection, meaning the OLAP processor reads significantly more data than the query needs anyway.
A user exit is active in the query that prevents the system from having already aggregated in the database.
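The trade-off among the three modes can be caricatured in a few lines. The sketch below is a hypothetical Python model (not SAP code; the hierarchy, row counts, and function names are all invented for illustration): mode 1 makes the most round trips but moves the least data, mode 2 reads the whole hierarchy once in aggregated form, and mode 3 makes one round trip but moves raw data that the OLAP processor must then aggregate itself.

```python
# Hypothetical model of the three read modes (illustration only, not
# SAP code). A "read" is one database round trip; "rows" is the data
# volume moved into the OLAP processor's memory.

# Toy presentation hierarchy: node -> children. Each leaf has 10 fact rows.
HIERARCHY = {
    "root": ["A", "B"],
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2"],
    "A1": [], "A2": [], "A3": [], "B1": [], "B2": [],
}
FACT_ROWS_PER_LEAF = 10

def count_nodes(node):
    """Total nodes in the subtree rooted at `node` (including itself)."""
    return 1 + sum(count_nodes(c) for c in HIERARCHY[node])

def count_leaves(node):
    """Leaves in the subtree rooted at `node`."""
    children = HIERARCHY[node]
    return 1 if not children else sum(count_leaves(c) for c in children)

def read_when_expanding(expanded_nodes):
    """Mode 1: one small, database-aggregated read per expanded node."""
    reads = len(expanded_nodes)
    rows = sum(len(HIERARCHY[n]) for n in expanded_nodes)
    return reads, rows

def read_when_navigating(expanded_nodes):
    """Mode 2: the hierarchy is read fully (aggregated) on first access;
    later expansions are served from main memory."""
    return 1, count_nodes("root")

def read_everything_at_once(expanded_nodes):
    """Mode 3: one read of the raw data for every possible navigation;
    the OLAP processor aggregates in memory instead of the database."""
    return 1, count_leaves("root") * FACT_ROWS_PER_LEAF

steps = ["root", "A"]  # the user opens the root, then expands node A
print(read_when_expanding(steps))      # (2, 5)  - most reads, least data
print(read_when_navigating(steps))     # (1, 8)
print(read_everything_at_once(steps))  # (1, 50) - one read, most data
```

The numbers are made up, but the shape of the trade-off matches the text above: the more aggregation you push down to the database, the less data the OLAP processor has to hold, at the cost of more read processes.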
If this information is useful to you, please provide points.
Have a nice day.
by
ANR -
Query performance and data loading performance issues
What query performance issues do we need to take care of? Please explain and let me know the transaction codes. It is urgent.
What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. It is urgent.
Will reward full points.
Regards,
Guru
BW back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using the PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions of records, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables; for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
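Tip 11 above is the familiar N+1-read problem. Here is a hypothetical Python sketch of the difference (the in-memory "table" and keys are invented; in ABAP this corresponds to a SELECT inside a LOOP versus one buffered array read into an internal table):

```python
# Illustration of tip 11: per-row "single selects" vs one buffered
# array read. The in-memory "database" and material keys are hypothetical.

DB_TABLE = {f"MAT{i:03d}": {"price": i * 10} for i in range(1, 101)}
db_accesses = 0  # counts round trips to the "database"

def select_single(key):
    """One database access per key -- slow when called inside a loop."""
    global db_accesses
    db_accesses += 1
    return DB_TABLE[key]

def select_for_all_entries(keys):
    """One array read for all keys -- the buffered alternative."""
    global db_accesses
    db_accesses += 1
    return {k: DB_TABLE[k] for k in keys}

keys = [f"MAT{i:03d}" for i in range(1, 51)]

# Loop with single selects: one database access per row.
db_accesses = 0
total_slow = sum(select_single(k)["price"] for k in keys)
slow_accesses = db_accesses

# Buffered read once, then loop over the in-memory buffer.
db_accesses = 0
buffer = select_for_all_entries(keys)
total_fast = sum(buffer[k]["price"] for k in keys)
fast_accesses = db_accesses

print(slow_accesses, fast_accesses, total_slow == total_fast)  # 50 1 True
```

Both variants compute the same total; the buffered version simply touches the database once instead of fifty times, which is exactly why the tip recommends buffers and array operations over single selects in transfer and update rule routines.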
Hope it Helps
Chetan
@CP.. -
Performance ISSUE related to AGGREGATE
Hi gems, can anybody give a list of issues we can face related to aggregate maintenance in a support project?
It is very urgent; please respond to my issue.
Any link, anything, please send it to me.
My mail id is [email protected]
Hi,
Try this.
The "---" sign is the valuation of the aggregate: for example, -3 is the rating of the aggregate's design and usage. "++" means its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also low (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation.
If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
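To make the sign notation concrete, here is a small hypothetical helper (plain Python for illustration, not an SAP function) that turns a valuation string into a rough verdict along the lines described above:

```python
# Hypothetical reader for the aggregate valuation signs described
# above (illustration only; the thresholds and wording are invented).

def rate_aggregate(valuation):
    """Return (score, verdict) for a string of '+' and '-' signs."""
    score = valuation.count("+") - valuation.count("-")
    if score <= -5:
        verdict = "pure overhead: candidate for deletion"
    elif score < 0:
        verdict = "poor compression/usage: review the design"
    elif score >= 5:
        verdict = "very useful aggregate"
    else:
        verdict = "useful"
    return score, verdict

print(rate_aggregate("-----"))  # (-5, 'pure overhead: candidate for deletion')
print(rate_aggregate("+++++"))  # (5, 'very useful aggregate')
```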
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Use the report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Check SE11 > table RSDDAGGRDIR . You can find the last callup in the table.
Generate Report in RSRT
http://help.sap.com/saphelp_nw04/helpdata/en/74/e8caaea70d7a41b03dc82637ae0fa5/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
/people/juergen.noe/blog/2007/12/13/overview-important-bi-performance-transactions
/people/prakash.darji/blog/2006/01/26/query-optimization
Cube Performance
/thread/785462 [original link is broken]
Thanks,
JituK