Analyze a query that takes a long time on the production server with ST03 only
Hi,
I want to analyze a query that takes a long time on the production server, using transaction ST03 only.
Please provide me with detailed steps on how to do this with ST03.
ST03 - Expert Mode - then I need to know the steps after this. I have checked many threads, so please don't send me links.
Please write the steps in detail.
<REMOVED BY MODERATOR>
Regards,
Sameer
Edited by: Alvaro Tejada Galindo on Jun 12, 2008 12:14 PM
Then please close the thread.
Greetings,
Blag.
Similar Messages
-
How to tune this SQL (takes a long time to come up with results)
Dear all,
I have some SQL which takes a long time... can anyone help me tune this? Thank you.
SELECT SUM (n_amount)
FROM (SELECT DECODE (v_payment_type,
'D', n_amount,
'C', -n_amount
) n_amount, v_vou_no
FROM vouch_det a, temp_global_temp b
WHERE a.v_vou_no = TO_CHAR (b.n_column2)
AND b.n_column1 = :b5
AND b.v_column1 IN (:b4, :b3)
AND v_desc IN (SELECT v_trans_source_code
FROM benefit_trans_source
WHERE v_income_tax_app = :b6)
AND v_lob_code = DECODE (:b1, :b2, v_lob_code, :b1)
UNION ALL
SELECT DECODE (v_payment_type,
'D', n_amount,
'C', -n_amount
) * -1 AS n_amount,
v_vou_no
FROM vouch_details a, temp_global_temp b
WHERE a.v_vou_no = TO_CHAR (b.n_column2)
AND b.n_column1 = :b5
AND b.v_column1 IN (:b12, :b11, :b10, :b9, :b8, :b7)
AND v_desc IN (SELECT v_trans_source_code
FROM benefit_trans_source
WHERE income_tax_app = :b6)
AND v_lob_code = DECODE (:b1, :b2, v_lob_code, :b1));
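A common rewrite for this query shape is to collapse the two UNION ALL branches into a single pass with conditional aggregation, so the base tables are scanned once instead of twice. A minimal sketch in SQLite (table and column names are simplified stand-ins for the originals, and CASE replaces Oracle's DECODE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE vouch_det (v_vou_no TEXT, v_payment_type TEXT,"
          " n_amount REAL, v_column1 TEXT)")
c.executemany("INSERT INTO vouch_det VALUES (?,?,?,?)",
              [("V1", "D", 100.0, "A"), ("V2", "C", 40.0, "A"),
               ("V3", "D", 10.0, "B")])

# Two-branch UNION ALL, as in the original: branch 1 keeps the sign,
# branch 2 (a different filter set) flips it.
union_sql = """
SELECT SUM(n_amount) FROM (
  SELECT CASE v_payment_type WHEN 'D' THEN n_amount ELSE -n_amount END AS n_amount
  FROM vouch_det WHERE v_column1 = 'A'
  UNION ALL
  SELECT CASE v_payment_type WHEN 'D' THEN n_amount ELSE -n_amount END * -1 AS n_amount
  FROM vouch_det WHERE v_column1 = 'B')"""

# Single-pass rewrite: fold the branch condition into the sign.
single_sql = """
SELECT SUM(
  CASE v_payment_type WHEN 'D' THEN n_amount ELSE -n_amount END
  * CASE WHEN v_column1 = 'B' THEN -1 ELSE 1 END)
FROM vouch_det WHERE v_column1 IN ('A', 'B')"""

print(c.execute(union_sql).fetchone()[0])   # 50.0
print(c.execute(single_sql).fetchone()[0])  # 50.0 -- both forms agree
```

Whether the single-pass form actually wins depends on how selective the two bind lists are, so compare the execution plans before adopting it.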
Thank you... Thanks a lot,
I did change the SQL and it works fine, but it slows down my main query... actually my main query calls a function which does the sum.
Here is the query:
select A.* from (SELECT a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
a.v_company_branch, a.v_it_no, bfn_get_agent_name(a.n_agent_no) agentname,
PKG_AGE__TAX.GET_TAX_AMT(:P_FROM_DATE,:P_TO_DATE,:P_LOB_CODE,A.N_AGENT_NO) comm,
c.v_ird_region
FROM agent_master a, agent_lob b, agency_region c
WHERE a.n_agent_no = b.n_agent_no
AND a.v_agency_region = c.v_agency_region
AND :p_lob_code = DECODE(:p_lob_code,'ALL', 'ALL',b.v_line_of_business)
AND :p_channel_no = DECODE(:p_channel_no,1000, 1000,a.n_channel_no)
AND :p_agency_group = DECODE(:p_agency_group,'ALL', 'ALL',c.v_ird_region)
group by a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_no, a.v_agent_type, a.v_company_code, a.v_company_branch, a.v_it_no, bfn_get_agent_name(a.n_agent_no) ,
BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE,:P_TO_DATE,:P_LOB_CODE,A.N_AGENT_NO),
c.v_ird_region
ORDER BY c.v_ird_region, a.v_agent_code DESC)
A
WHERE (COMM < :P_VAL_IND OR COMM >=:P_VAL_IND1);
Any ideas to make this faster?
Thank You... -
Query prediction takes a long time - after upgrading the DB from 9i to 10g
Hi all, Thanks for all your help.
We've got an issue in Discoverer. We are using Discoverer 10g (10.1.2.2) with APPS, and recently we upgraded the Oracle database from 9i to 10g.
After the database upgrade, when we try to run reports in Discoverer Plus, query prediction takes much longer than it used to (double/triple), and only after the prediction does the query itself run.
Has anyone seen this kind of issue before? Could you share your ideas/thoughts so I can ask the DBA or sysadmin to change settings on the Discoverer server side?
Thanks in advance
skat
Hi skat,
Did you also upgrade your Discoverer from 9i to 10g or did you always have 10g?
If you weren't always on 10g, take a look inside the EUL5_QPP_STATS table by running SELECT COUNT(*) FROM EUL5_QPP_STATS on both the old and new systems
I suspect you may well find that there are far more records in the old system than the new one. What this table stores is the statistics for the queries that have been run before. Using those statistics is how Discoverer can estimate how long queries will take to run. If you have few statistics then for some time Discoverer will not know how long previous queries will take. Also, the statistics table used by 9i is incompatible with the one used by 10g so you can't just copy them over, just in case you were thinking about it.
Personally, unless you absolutely rely on it, I would turn the query predictor off. You do this by editing your PREF.TXT (located on the middle-tier server at $ORACLE_HOME\Discoverer\util) and changing the value of QPPEnable to 0. After you have done this you need to run the applypreferences script located in the same folder and then stop and start your Discoverer service. From that point on, queries will no longer try to predict how long they will take; they will just start running.
There is something else to check. Please run a query and look at the SQL. Do you by chance see a database hint called NOREWRITE? If you do, then this will also cause poor performance. Should you see it, let me know and I will tell you how to override it.
If you have always been on 10g and you have only upgraded your database it could be that you have not generated your database statistics for the tables that Discoverer is using. You will need to speak with your DBA to see about having the statistics generated. Without statistics, the query predictor will be very, very slow.
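If several middle-tier instances need the same change, the QPPEnable edit can be scripted rather than done by hand. A small sketch, assuming pref.txt uses simple `name = value` lines ("OtherPref" is a made-up placeholder; verify against your actual file, and remember to run the applypreferences script and restart the Discoverer service afterwards):

```python
import re

def set_pref(text, name, value):
    # Rewrite "name = old" to "name = value"; all other lines untouched.
    pattern = re.compile(rf"^(\s*{re.escape(name)}\s*=\s*).*$", re.MULTILINE)
    return pattern.sub(rf"\g<1>{value}", text)

sample = "QPPEnable = 1\nOtherPref = 10\n"
updated = set_pref(sample, "QPPEnable", "0")
print(updated)  # QPPEnable = 0, OtherPref left as-is
```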
Best wishes
Michael -
Query save takes a long time and gives an error
Hi Gurus,
I am creating one query that has a lot of calculations (CKFs & RKFs).
When I try to save this query it takes a long time and then gives an error like RFC_ERROR_SYSTEM_FAILURE, "Query Designer must be restarted, further work not possible".
Please give me the solution for this.
Thanks,
RChowdary
Hi Chowdary,
Check the following note: 316470.
https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=316470
The note details are:
Symptom
There are no authorizations to change roles. Consequently, the system displays no roles when you save workbooks in the BEx Analyzer. In the BEx browser, you cannot move or change workbooks, documents, folders and so on.
Other terms
BW 2.0B, 2.0A, 20A, 20B, frontend, error 172, Business Explorer,
RFC_ERROR_SYSTEM_FAILURE, NOT_AUTHORIZED, S_USER_TCD, RAISE_EXCEPTION,
LPRGN_STRUCTUREU04, SAPLPRGN_STRUCTURE, PRGN_STRU_SAVE_NODES
Reason and Prerequisites
The authorizations below are not assigned to the user.
Solution
Assign authorization for roles
To assign authorizations for a role, execute the following steps:
1. Start Transaction Role maintenance (PFCG)
2. Select a role
3. Choose the "Change" switch
4. Choose tab title "Authorizations"
5. Choose the "Change authorization data" switch
6. Choose "+ Manually" switch
7. Enter "S_USER_AGR" as "Authorization object"
8. Expand "Basis: Administration"/"Authorization: Role check"
9. From "Activity" select "Create or generate" and others like "Display" or "Change"
10. Under "Role Name", enter all roles that are supposed to be shown or changed. Enter "*" for all roles.
11. You can re-enter authorization object "S_USER_AGR" for other activities.
Assign authorization for transactions
If a user is granted the authorization for changing a role, he/she should also be granted the authorization for all transactions contained in the role. Add these transaction codes to authorization object S_USER_TCD.
1. Start the role maintenance transaction (PFCG).
2. Select a role.
3. Click on "Change".
4. Choose the "Authorizations" tab.
5. Click on "Change authorization data".
6. Click on "+ manually".
7. Specify "S_USER_TCD" as "Authorization object".
8. Expand "Basis - Administration"/"Authorizations: Transactions in Roles".
9. Under "Transaction", choose at least "RRMX" (for BW reports), "SAP_BW_TEMPLATE" (for BW Web Templates), "SAP_BW_QUERY" (for BW queries) and/or "SAP_BW_CRYSTAL" (for Crystal reports), or "*". Values with "SAP_BW_..." are not transactions; they are special node types (see transaction code NODE_TYPE_DEFINITION).
Using the SAP System Trace (Transaction ST01), you can identify the transaction that causes NOT_AUTHORIZED.
Prevent user assignment
Having the authorization for changing roles, the user is not only able to change the menu but also to assign users. If you want to prevent the latter, the user must lose the authorization for transactions User Maintenance (SU01) and Role Maintenance (PFCG).
Note
Refer to Note 197601, which provides information on the different display of BEx Browser, BEx Analyzer and Easy Access menu.
Please refer to Note 373979 about authorizations to save workbooks.
Check transaction ST22 for more details on the Query Designer failure, or check the query log file.
With Regards,
Ravi Kanth.
Edited by: Ravi kanth on Apr 9, 2009 6:02 PM -
Optimizing a query which takes more time
Hi,
I have a query which was returning results pretty fast a week back, but now the same query takes more time to respond. Nothing much has changed in the table data; what could be the problem? I am using IN in the WHERE clause; could that be an issue? If so, what is the best way to rewrite the query?
SELECT RI.RESOURCE_NAME,TR.MSISDN,MAX(TR.ADDRESS1_GOOGLE) KEEP(DENSE_RANK LAST ORDER BY TR.MSG_DATE_INFO) ADDRESS1_GOOGLE,
MAX(TR.TIME_STAMP) MSG_DATE_INFO FROM TRACKING_REPORT TR, RESOURCE_INFO RI
WHERE TR.MSISDN IN ( SELECT MSISDN FROM RESOURCE_INFO WHERE GROUP_ID ='4'
AND COM_ID='12') AND RI.MSISDN = TR.MSISDN
GROUP BY RI.RESOURCE_NAME,TR.MSISDN ORDER BY MSG_DATE_INFO DESC
Hi,
I have followed this link http://www.lorentzcenter.nl/awcourse/oracle/server.920/a96533/sqltrace.htm in enabling the trace and found the following trace output. Can you explain the problem here and its remedial action, please?
SELECT RI.RESOURCE_NAME,TR.MSISDN,MAX(TR.ADDRESS1_GOOGLE) KEEP(DENSE_RANK
LAST ORDER BY TR.MSG_DATE_INFO) ADDRESS1_GOOGLE, MAX(TR.TIME_STAMP)
MSG_DATE_INFO
FROM
TRACKING_REPORT TR, RESOURCE_INFO RI WHERE RI.GROUP_ID ='426' AND
RI.COM_ID='122' AND RI.MSISDN = TR.MSISDN GROUP BY RI.RESOURCE_NAME,
TR.MSISDN
call count cpu elapsed disk query current rows
Parse 1 0.01 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 6 13.69 389.03 81747 280722 0 72
total 8 13.70 389.05 81747 280722 0 72
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 281
Rows Row Source Operation
72 SORT GROUP BY
276558 NESTED LOOPS
79 TABLE ACCESS FULL RESOURCE_INFO
276558 TABLE ACCESS BY INDEX ROWID TRACKING_REPORT
276558 INDEX RANGE SCAN TR_INDX_ON_MSISDN_TIME (object id 60507)
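Note that the traced statement above already applies the GROUP_ID/COM_ID filter through a plain join, so the optimizer has effectively merged the IN subquery; the cost is the 276,558-row nested-loop probe into TRACKING_REPORT, not the IN itself. A toy SQLite sketch (made-up data, simplified columns) showing the two query forms return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE resource_info (msisdn TEXT, group_id TEXT,"
          " com_id TEXT, resource_name TEXT)")
c.execute("CREATE TABLE tracking_report (msisdn TEXT, time_stamp INTEGER)")
c.executemany("INSERT INTO resource_info VALUES (?,?,?,?)",
              [("111", "4", "12", "alice"), ("222", "9", "12", "bob")])
c.executemany("INSERT INTO tracking_report VALUES (?,?)",
              [("111", 10), ("111", 20), ("222", 5)])

# Original form: filter via IN subquery.
in_form = """
SELECT ri.resource_name, tr.msisdn, MAX(tr.time_stamp)
FROM tracking_report tr JOIN resource_info ri ON ri.msisdn = tr.msisdn
WHERE tr.msisdn IN (SELECT msisdn FROM resource_info
                    WHERE group_id='4' AND com_id='12')
GROUP BY ri.resource_name, tr.msisdn"""

# Transformed form, as seen in the trace: filter applied on the join.
join_form = """
SELECT ri.resource_name, tr.msisdn, MAX(tr.time_stamp)
FROM tracking_report tr JOIN resource_info ri ON ri.msisdn = tr.msisdn
WHERE ri.group_id='4' AND ri.com_id='12'
GROUP BY ri.resource_name, tr.msisdn"""

print(c.execute(in_form).fetchall())   # [('alice', '111', 20)]
print(c.execute(join_form).fetchall())  # same result
```

Given that, the practical levers are an index on RESOURCE_INFO(GROUP_ID, COM_ID), fresh statistics, or letting the optimizer hash-join instead of nested-looping, rather than rewriting the IN.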
********************************************************************************
and the plan_table output is:
STATEMENT_ID TIMESTAMP REMARKS OPERATION OPTIONS OBJECT_NODE OBJECT_OWNER OBJECT_NAME OBJECT_INSTANCE OBJECT_TYPE OPTIMIZER SEARCH_COLUMNS ID PARENT_ID POSITION COST CARDINALITY BYTES OTHER_TAG PARTITION_START PARTITION_STOP PARTITION_ID OTHER DISTRIBUTION CPU_COST IO_COST TEMP_SPACE ACCESS_PREDICATES FILTER_PREDICATES
23-Mar-11 23:36:45 SELECT STATEMENT CHOOSE 0 115 115 1058 111090 115
23-Mar-11 23:36:45 SORT GROUP BY 1 0 1 115 1058 111090 115
23-Mar-11 23:36:45 NESTED LOOPS 2 1 1 9 4603 483315 9
23-Mar-11 23:36:45 TABLE ACCESS FULL BSNL_RTMS RESOURCE_INFO 2 ANALYZED 3 2 1 8 1 30 8 "RI"."GROUP_ID"=426 AND "RI"."COM_ID"='122'
23-Mar-11 23:36:45 TABLE ACCESS BY INDEX ROWID BSNL_RTMS TRACKING_REPORT 1 ANALYZED 4 2 2 1 3293 246975 1
23-Mar-11 23:36:45 INDEX RANGE SCAN BSNL_RTMS TR_INDX_ON_MSISDN_TIME NON-UNIQUE 1 5 4 1 1 3293 1 "RI"."MSISDN"="TR"."MSISDN" -
DrawImage takes a long time for images created with Photoshop
Hello,
I created a simple program to resize images using the drawImage method, and it works very well for all images except those which have been created or modified with Photoshop 8.
The main block of my code is
public static BufferedImage scale(BufferedImage image,
        int targetWidth, int targetHeight) {
    // Preserve an alpha channel only when the source image has one
    int type = (image.getTransparency() == Transparency.OPAQUE) ?
            BufferedImage.TYPE_INT_RGB :
            BufferedImage.TYPE_INT_ARGB;
    BufferedImage ret = (BufferedImage) image;
    BufferedImage temp = new BufferedImage(targetWidth, targetHeight, type);
    Graphics2D g2 = temp.createGraphics();
    g2.setRenderingHint(
            RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BICUBIC);
    g2.drawImage(ret, 0, 0, targetWidth, targetHeight, null);
    g2.dispose();
    ret = temp;
    return ret;
}
The program is a little longer, but this is the gist of it.
When I run a jpg through this program (without Photoshop modifications) , I get the following trace results (when I trace each line of the code) telling me how long each step took in milliseconds:
Temp BufferedImage: 16
createGraphics: 78
drawimage: 31
dispose: 0
However, the same image saved in Photoshop (no modifications except saving in Photoshop) gave me the following results:
Temp BufferedImage: 16
createGraphics: 78
drawimage: 27250
dispose: 0
The difference is shocking. It took the drawImage process 27 seconds to resize the Photoshop-saved file, compared to a fraction of a second for the original!
My questions:
1. Why does it take so much longer for the drawImage to process the file when the file is saved in Photoshop?
2. Are there any code improvements which will speed up the image drawing?
Thanks for your help,
-Rogier
You saved the file in PNG format. The default PNGImageReader in core Java has a habit of occasionally returning TYPE_CUSTOM buffered images. Photoshop 8 probably saves the PNG file in such a way that TYPE_CUSTOM pops up more.
And when you draw a TYPE_CUSTOM buffered image onto a graphics context it almost always takes an unbearably long time.
So a quick fix would be to load the file with the Toolkit instead, and then scale that image.
Image img = Toolkit.getDefaultToolkit().createImage(/*the file*/);
new ImageIcon(img);
//send off image to be scaled
A more elaborate fix involves specifying your own type of BufferedImage you want the PNGImageReader to use:
ImageInputStream in = ImageIO.createImageInputStream(/*file*/);
ImageReader reader = ImageIO.getImageReaders(in).next();
reader.setInput(in,true,true);
ImageTypeSpecifier sourceImageType = reader.getImageTypes(0).next();
ImageReadParam readParam = reader.getDefaultReadParam();
//to implement
configureReadParam(sourceImageType, readParam);
BufferedImage img = reader.read(0,readParam);
//clean up
reader.dispose();
in.close();
The thing that needs to be implemented is the method I called configureReadParam. In this method you would check the color space, color model, and BufferedImage type of the supplied ImageTypeSpecifier, and set a new ImageTypeSpecifier if need be. The method would essentially boil down to a series of if statements:
1) If the image type specifier already uses a non-custom BufferedImage, then all is well and we don't need to do anything to the readParam.
2) If the ColorSpace is gray, then we create a new ImageTypeSpecifier based on a TYPE_BYTE_GRAY BufferedImage.
3) If the ColorSpace is gray but the color model includes alpha, then we do the above and also call setSourceBands on the readParam to discard the alpha channel.
4) If the ColorSpace is RGB and the color model includes alpha, then we create a new ImageTypeSpecifier based on an ARGB BufferedImage.
5) If the ColorSpace is RGB and the color model doesn't include alpha, then we create a new ImageTypeSpecifier based on TYPE_3BYTE_BGR.
6) If the ColorSpace is not gray or RGB, then we do nothing to the readParam and ColorConvertOp the resulting image to an RGB image.
If this looks absolutely daunting to you, then go with the Toolkit approach mentioned first. -
Query execution takes a long time
Hi All,
I have one critical problem in my production system.
I have three sales-related queries in the production system, and when I try to execute them in the BEx Analyzer (in Microsoft Excel) they take too much time and finally give the error "Time Limit Exceeded".
Actually we have created these three queries on one InfoSet, and that InfoSet contains three DSOs and two master data objects.
Please give me the proper solution and help me to solve this production problem.
Dear James,
First give some filter conditions on the query and try to restrict it to a smaller volume of data. From the message it is evident that you may be trying to fetch a large volume of data, so please execute the query once in RSRT and try to find the solution there; you can get all the statistics regarding the query. If you still can't find it, please let me know the message you are getting in RSRT, and then we can give a viable solution for that.
I hope you are aware of all the options regarding RSRT.
assign points if it helps..
Thanks & Regards,
Ashok. -
How to tune this simple SQL (takes a long time to come up with results)
The following SQL is very slow, as it takes one day to complete...
select A.* from (SELECT a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
a.v_company_branch, a.v_it_no, bfn_get_agent_name(a.n_agent_no) agentname,
PKG_AGE__TAX.GET_TAX_AMT(:P_FROM_DATE,:P_TO_DATE,:P_LOB_CODE,A.N_AGENT_NO) comm,
c.v_ird_region
FROM agent_master a, agent_lob b, agency_region c
WHERE a.n_agent_no = b.n_agent_no
AND a.v_agency_region = c.v_agency_region
--AND :p_lob_code = DECODE(:p_lob_code,'ALL', 'ALL',b.v_line_of_business)
--AND :p_channel_no = DECODE(:p_channel_no,1000, 1000,a.n_channel_no)
--AND :p_agency_group = DECODE(:p_agency_group,'ALL', 'ALL',c.v_ird_region)
group by a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_no, a.v_agent_type, a.v_company_code, a.v_company_branch, a.v_it_no, bfn_get_agent_name(a.n_agent_no) ,
BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE,:P_TO_DATE,:P_LOB_CODE,A.N_AGENT_NO),
c.v_ird_region
ORDER BY c.v_ird_region, a.v_agent_code DESC)
A
WHERE (COMM < :P_VAL_IND OR COMM >=:P_VAL_IND1);
It should return all the agents with commission based on the date parameters... the data is less than 50K rows inside all
the tables...
the version is Oracle9i Enterprise Edition Release 9.2.0.5.0
SQL> explain plan for
2 select A.* from (SELECT a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_
no, a.v_agent_type, a.v_company_code,
3 a.v_company_branch, a.v_it_no, bfn_get_agent_name(a.n_agent_no) agentname,
4 BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE,:P_TO_DATE,:P_LOB_CODE,A.N_AGENT_NO) com
m,
5 c.v_ird_region
6 FROM ammm_agent_master a, ammt_agent_lob b, gnlu_agency_region c
7 WHERE a.n_agent_no = b.n_agent_no
8 AND a.v_agency_region = c.v_agency_region
9 --AND :p_lob_code = DECODE(:p_lob_code,'ALL', 'ALL',b.v_line_of_business)
10 --AND :p_channel_no = DECODE(:p_channel_no,1000, 1000,a.n_channel_no)
11 --AND :p_agency_group = DECODE(:p_agency_group,'ALL', 'ALL',c.v_ird_region)
12 group by a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_no, a.v_agent_ty
pe, a.v_company_code, a.v_company_branch, a.v_it_no, bfn_get_agent_name(a.n_agent_no) ,
13 BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE,:P_TO_DATE,:P_LOB_CODE,A.N_AGENT_NO),
14 c.v_ird_region
15 ORDER BY c.v_ird_region, a.v_agent_code DESC)
16 A
17 WHERE (COMM < :P_VAL_IND OR COMM >=:P_VAL_IND1);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 13315 | 27M| | 859 (63)|
| 1 | VIEW | | 13315 | 27M| | |
| 2 | SORT GROUP BY | | 13315 | 936K| 2104K| 859 (63)|
| 3 | HASH JOIN | | 13315 | 936K| | 641 (81)|
| 4 | MERGE JOIN | | 3118 | 204K| | 512 (86)|
| 5 | TABLE ACCESS BY INDEX ROWID| AGENCY_REGION | 8 | 152 | | 3 (34)|
| 6 | INDEX FULL SCAN | SYS_C004994 | 8 | | | 2 (50)|
| 7 | SORT JOIN | | 3142 | 147K| | 510 (86)|
| 8 | TABLE ACCESS FULL | AGENT_MASTER | 3142 | 147K| | 506 (86)|
| 9 | TABLE ACCESS FULL | AGENT_LOB | 127K| 623K| | 102 (50)|
Note: PLAN_TABLE is old version
17 rows selected.
This is the only information I can get, as I cannot access the database server (user security limitation)...
Thank You
Try to remove this:
ORDER BY c.v_ird_region, a.v_agent_code DESC
Or move it to the end of the entire query.
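One reason the inner ORDER BY is wasteful: it sorts every row produced by the inline view before the outer COMM filter has discarded anything, and in Oracle an ORDER BY inside an inline view is not even guaranteed to survive to the final result. A toy SQLite sketch (made-up table and data) showing that moving the ORDER BY outward returns the same filtered rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE agent (agent_code TEXT, region TEXT, comm REAL)")
c.executemany("INSERT INTO agent VALUES (?,?,?)",
              [("A1", "N", 5.0), ("A2", "N", 50.0), ("A3", "S", 120.0)])

# Original shape: sort inside the inline view, then filter outside.
inner_sort = """
SELECT * FROM (SELECT agent_code, region, comm FROM agent
               ORDER BY region, agent_code DESC)
WHERE comm < 10 OR comm >= 100"""

# Rewritten: filter first, sort only the surviving rows at the end.
outer_sort = """
SELECT * FROM (SELECT agent_code, region, comm FROM agent)
WHERE comm < 10 OR comm >= 100
ORDER BY region, agent_code DESC"""

print(c.execute(outer_sort).fetchall())  # [('A1', 'N', 5.0), ('A3', 'S', 120.0)]
```

The same rows come back either way; the rewritten form just sorts after filtering, which is the cheaper order of operations.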
Edited by: Random on Jun 19, 2009 1:01 PM -
Takes a long time to drop tables with large numbers of partitions
11.2.0.3
This is for a build. We are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects. This allows us to test the build all the way through. It's our process.
This user has some tables with several thousand partitions. I ran a 10046 trace, and Oracle is using PL/SQL loops to do DML against the data dictionary. Any way to speed this up? I am going to turn off the recycle bin during the build and turn it back on afterwards.
Anything else I can do? Right now I just issue DROP USER CASCADE. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this), and we do fairly frequent builds.
I can't change the build process. My only option is to try to make this run a little faster. I can't do anything about the hardware (lots of VMs crammed onto too few servers).
This is not a production issue. It's more of a hassle.
Support Note 798586.1 shows that DROP USER CASCADE was slower than dropping individual objects -- at least in 10.2. Not sure if that is still the case in 11.2.
Hemant K Chitale -
Takes a long time to change server and log on
Hi,
I am using VB.NET and Crystal Reports 8.5 for the .rpt.
I am trying to load the report in a VB.NET window. I am using SQL Server 2005 Express. My code here is working, but it still takes 1 to 2 minutes to log on. Any ideas? Please help me.
Dim myConnectionInfo As ConnectionInfo = New ConnectionInfo
Dim CS_OS As New billout.rpt
crystalreportview.ReportSource = CS_OS
CS_OS.DataSourceConnections(0).SetConnection(server01, database, userxx, passxx)
Thanks
Please have a look at the Rules of Engagement before posting to these forums. The link is here:
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/rulesofEngagement
Once you have read the above, please provide the neccessary information.
BTW, since you are using .NET and CR for .NET code, you are not using CR 8.5. It may be that the reports were created in CR 8.5, but CR 8.5 did not have a .NET SDK.
Ludek -
Why does my update query take a long time?
Hello everyone;
My update query takes a long time. In emp (self-testing) there are just 2 records.
When I issue an update query, it takes a long time:
SQL> select * from emp;
EID ENAME EQUAL ESALARY ECITY EPERK ECONTACT_NO
2 rose mca 22000 calacutta 9999999999
1 sona msc 17280 pune 9999999999
Elapsed: 00:00:00.05
SQL> update emp set esalary=12000 where eid='1';
update emp set esalary=12000 where eid='1'
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:01:11.72
SQL> update emp set esalary=15000;
update emp set esalary=15000
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:02:22.27
Hi BCV;
Thanks for your reply, but it doesn't provide output; please see this.
SQL> update emp set esalary=15000;
........... Lock already occurred.
>> trying to trace >>
SQL> select HOLDING_SESSION from dba_blockers;
HOLDING_SESSION
144
SQL> select sid , username, event from v$session where username='HR';
SID USERNAME EVENT
144 HR SQL*Net message from client
151 HR enq: TX - row lock contention
159 HR SQL*Net message from client
>> It doesn't provide clear output about the transaction lock >>
SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
2 BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
3 request
4 FROM v$lock, v$session
5 WHERE v$lock.TYPE = 'TX'
6 AND v$lock.SID = v$session.SID
7 AND v$session.username = USER;
no rows selected
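The symptoms above are classic row-lock contention: session 144 holds an uncommitted transaction on the row, session 151 waits on enq: TX, and ORA-01013 appears when the waiting update is cancelled. The same wait-then-fail pattern can be sketched with two SQLite connections (SQLite locks the whole database file rather than a row, but the mechanics of an uncommitted writer blocking a second writer are analogous):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Session 1: create the table and commit the initial row.
s1 = sqlite3.connect(path, timeout=0.2)
s1.execute("CREATE TABLE emp (eid INTEGER PRIMARY KEY, esalary INTEGER)")
s1.execute("INSERT INTO emp VALUES (1, 17280)")
s1.commit()

# Session 1: an uncommitted UPDATE holds the write lock.
s1.execute("UPDATE emp SET esalary = 12000 WHERE eid = 1")

# Session 2: its UPDATE blocks, then times out -- the analogue of the hang.
s2 = sqlite3.connect(path, timeout=0.2)
try:
    s2.execute("UPDATE emp SET esalary = 15000")
except sqlite3.OperationalError as e:
    err = e
    print("blocked:", e)  # blocked: database is locked

# Once session 1 commits, session 2's update goes through immediately.
s1.commit()
s2.execute("UPDATE emp SET esalary = 15000")
s2.commit()
print(s2.execute("SELECT esalary FROM emp").fetchone()[0])  # 15000
```

In the Oracle case the fix is the same in spirit: have session 144 COMMIT or ROLLBACK (or kill it), after which the blocked update completes.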
SQL> select MACHINE from v$session where sid = :sid;
SP2-0552: Bind variable "SID" not declared. -
Report execution takes a long time
Dear all,
We have a report which takes a long time to execute due to a SELECT statement... here is the code:
SELECT vkorg vtweg spart kunnr kunn2 AS division FROM knvp
INTO CORRESPONDING FIELDS OF TABLE hier
WHERE kunn2 IN s_kunnr
AND vkorg EQ '0001'
AND parvw EQ 'ZV'.
l_parvw = 'WE'.
SORT hier.
* select all invoices within the specified invoice creation dates.
CHECK NOT hier[] IS INITIAL.
SELECT vbrk~vbeln vbrk~fkart vbrk~waerk vbrk~vkorg vbrk~vtweg vbrk~spart vbrk~knumv
vbrk~konda vbrk~bzirk vbrk~pltyp vbrk~kunag vbrp~vbeln vbrp~aubel vbrp~posnr
vbrp~fkimg vbrp~matnr vbrp~prctr vbpa~kunnr
vbrp~pstyv vbrp~uepos
vbrp~kvgr4 vbrp~ean11
INTO CORRESPONDING FIELDS OF TABLE it_bill
FROM vbrk INNER JOIN vbrp ON vbrp~vbeln = vbrk~vbeln
INNER JOIN vbpa ON vbpa~vbeln = vbrk~vbeln
FOR ALL entries IN hier
WHERE (lt_syntax)
AND vbrk~vbeln IN s_vbeln
* AND vbrk~erdat IN r_period
AND vbrk~fkdat IN r_period
AND vbrk~rfbsk EQ 'C'
AND vbrk~vkorg EQ hier-vkorg
AND vbrk~vtweg EQ hier-vtweg
AND vbrk~spart EQ hier-spart.
Can anyone say about how to reduce the execution time.?
Edited by: Thomas Zloch on Sep 22, 2010 2:46 PM - please use code tags
Hi,
First of all, never use MOVE-CORRESPONDING.
Rather, you should declare a work area for table hier, select values into the work area, and then append that work area to the table hier.
In the case of FOR ALL ENTRIES, include all the primary keys in the selection, and for the keys which are of no use declare constants with initial values, like:
'prmkey' is a primary field for table 'tab1'.
CONSTANTS: field1 TYPE tab1-prmkey VALUE IS INITIAL.
and then in your WHERE condition write:
prmkey GE field1.
I hope it is clear to you now.
Thanks
lalit Gupta -
Reactivating aggregates takes a long time
Hi All,
Last week, we were in the process of removing some data from an InfoCube which lay outside its retention period (we are still attempting to set up an archive)...
Anyway, we would deactivate the aggregate on the cube and then do a selective delete on several different date ranges. After the deletes completed, we would rebuild (fill) the aggregate. The first time it took around 4.5 hours to rebuild, which seems long. Then, after deactivating the aggregate again and doing more deletes on the same cube the day after next, we tried rebuilding the aggregate once more. This took around 7 hours to complete, with less data in the InfoCube.
Any ideas???
Hi,
There are several possible reasons. The ones I know of would be the following:
1. The aggregate which you are deactivating and filling has the date field in it along with another date/time field, which takes longer to fill once adjusted after some deletion.
2. Check the size of the aggregates in terms of records being added each time.
3. Under Manage Aggregates there is something called an "Aggregate Tree", under Goto -> Aggregate Tree. Here, check the hierarchy of the aggregate you are filling and go in the given sequence, as that will always be faster, regardless of how the aggregates are placed in the Manage tab.
Hope this helps.
Thanks,
Pradip. -
Adobe form takes a long time for Check/Send at portal
Hi Experts,
We have a form on the Portal which takes a long time when we click on Check to validate the form; it takes around 3 minutes,
whereas other forms do not take this much time. Can anyone help us with this issue?
Thanks
Sajal
Hi Sajal,
Did you already contact your Basis guys to trace the performance of the ADS itself?
It sounds to me as though the connection to the portal is not that good, and maybe this is one of the problems.
Also check the interface to the form to see what takes the time: the driver program fetching the data, or the form itself.
Additionally, you should have a look inside the form and see how much scripting is in it. Sometimes there is a lot of unnecessary source in there. Beyond that, since you didn't share the form (if it is an SAP delivery), I cannot get more detailed with my answer here.
Hope it gives you a clue where to start with your journey.
~Florian
PS: If you use the search with keywords "ADS + trace" you find a lot of useful information. -
Select query takes a long time...
Hi Experts,
I am using a select query in which the inspection lot is in one table and the order no. is in another table. This select query takes a very long time; what is the problem with this query? Please guide us.
select b~PRUEFLOS b~MBLNR b~CPUDT a~AUFNR a~matnr a~LGORT a~bwart
a~menge a~ummat a~sgtxt a~xauto
into corresponding fields of table itab
*into table itab
from mseg as a inner join qamb as b
on a~mblnr = b~mblnr
and a~zeile = b~zeile
where b~PRUEFLOS in insp
and b~cpudt in date1
and b~typ = '3'
and a~bwart = '321'
and a~aufnr in aufnr1.
Yusuf
Hi,
Instead of using INTO CORRESPONDING FIELDS OF TABLE itab, use INTO TABLE itab,
because if you use CORRESPONDING FIELDS the system has to match all the appropriate fields before placing your data. Instead, declare an appropriate internal table and use INTO TABLE itab.
And one more thing: don't use joins, because joins will decrease your performance. Instead use FOR ALL ENTRIES, and mention all the key fields in the WHERE condition.
ok
reward points for helpful answers