How to improve my PL/SQL performance tuning skills
Hi all, I would like to learn more about PL/SQL performance tuning. Where or how can I get more knowledge in this area?
Are there any tutorials that can help me understand EXPLAIN PLAN, DBMS_PROFILER, DBMS_ADVISOR, and so on? Thanks.
Bcj
Explain plan
http://www.psoug.org/reference/explain_plan.html
DBMS_PROFILER (10g)
http://www.psoug.org/reference/dbms_profiler.html
DBMS_HPROF (11g)
http://www.psoug.org/reference/dbms_hprof.html
DBMS_ADVISOR
http://www.psoug.org/reference/dbms_advisor.html
DBMS_MONITOR
http://www.psoug.org/reference/dbms_monitor.html
DBMS_SUPPORT
http://www.psoug.org/reference/dbms_support.html
DBMS_TRACE
http://www.psoug.org/reference/dbms_trace.html
DBMS_SQLTUNE
http://www.psoug.org/reference/dbms_sqltune.html
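As a starting point, the first two tools on that list can be tried in a short session like the one below. This is only a sketch: employees, department_id and my_procedure are placeholder names, not objects from your schema.

```sql
-- Record the optimizer's plan for a statement (nothing is executed)
EXPLAIN PLAN FOR
SELECT *
FROM   employees              -- placeholder table
WHERE  department_id = 10;    -- placeholder predicate

-- Read the most recent plan back from PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Profile a PL/SQL run with DBMS_PROFILER
-- (results land in the PLSQL_PROFILER_* tables)
-- EXEC DBMS_PROFILER.START_PROFILER('tuning test')
-- EXEC my_procedure           -- placeholder procedure
-- EXEC DBMS_PROFILER.STOP_PROFILER
```

Reading the DBMS_XPLAN output alongside the profiler tables is usually the quickest way to see where a slow PL/SQL program actually spends its time.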
Similar Messages
-
How to improve ojspc jsp compile performance?
Does anyone have any advice on how to improve the performance of the JSP pre-compilation utility (ojspc)?
We are using Oracle 10g OC4J containers.
Our situation is that we're attempting to add support for Oracle AS (we're currently on WebLogic), so I'm just getting started learning about it. In our development process we aim for sub-5-minute clean builds, including recompilation of JSP files. Currently our 838 JSP pages take about 27 minutes to translate and compile using ojspc and Jikes, but only a few minutes with WebLogic 8.1's JSP compiler.
Here are my initial experiences with ojspc:
* By default, ojspc always translates JSPs, regardless of whether they've changed (that is, regardless of whether the .java and .class files are up-to-date). Is this really true? Is there an option to make it perform up-to-date checks?
* Also, it only supports batch compilation when your JSPs are packaged in a WAR. And even then it extracts the entire WAR (which is 20 MB in our case) before starting. I couldn't find an option to make it recursively descend a JSP directory hierarchy and compile each JSP. In development we don't package as a WAR.
Here is what we've done to begin to speed things up:
* We wrote a wrapper that descends our exploded JSP tree to decide which JSPs need to be recompiled, based on the timestamps of the generated .java and .class files, then invokes the ojspc compiler with the names of all those JSPs.
* We use ojspc with -noCompile to translate to .java only
* We then use Jikes to compile all the .java files
But at this point, it's still a 27-minute process for 838 JSPs. Previous experience with other JSP compilers (HP Bluestone, previous-generation WebLogic) is that they are often slow because they re-parse each TLD file for every defined taglib in every JSP page. Does anyone know if this is true of ojspc?
Unfortunately we use a technique whereby every taglib is defined in every JSP page by a static include page to ensure consistency of prefix. So there are over a dozen taglib directives in each page, possibly resulting in over 1000 TLD parses.
Has anyone shared this experience or have any advice on speeding things up?
Thanks in advance,
Tim
Hi,
We need more details. If you'll reply with the create table command and the query, we can give a better answer.
I would look for the following:
- Make sure you're doing a full scan of the table.
- Consider running the query in parallel, using a hint such as /*+ FULL(tab) PARALLEL(tab 8) */.
Since you are grouping the results, consider sorting in memory:
alter session set sort_area_size=XXX; the right value depends on the table size and your hardware.
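Put together, those two suggestions might look like the sketch below. The table name, the parallel degree of 8, and the 100 MB sort area are all placeholders; note also that SORT_AREA_SIZE is only honored when WORKAREA_SIZE_POLICY is set to MANUAL.

```sql
-- SORT_AREA_SIZE only takes effect under manual workarea management
ALTER SESSION SET workarea_size_policy = manual;
ALTER SESSION SET sort_area_size = 104857600;   -- example: 100 MB

-- Parallel full scan; the names in the hint must match the table alias
SELECT /*+ FULL(tab) PARALLEL(tab 8) */
       col1, COUNT(*)
FROM   some_table tab       -- placeholder table
GROUP  BY col1;
```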
Let us know how it goes, and additional hardware details.
Idan. -
How to do SQL performance tuning
hi to all:
I have a query like this, for example: select col1,col2,col3,col4,col5,col6,col7,col8,
col9,col10,col11,col12,col13,col14,col15 from table_name where col1=100. The table contains a lot of data (several lakh rows), and this query is taking a long time. I want to improve its performance; kindly give me a solution...
anbarasan.select dai.GSI_PARTY_ID ,dai.GSI_PARTY_NUM ,dai.CERTIFICATION_LVL,dai.NAME ,dai.DUNS_NUM ,dai.CITY ,dai.STATE ,dai.ZIPCODE ,dai.COUNTRY,dai.ROW_WID,dai.GEO_WID,dai.ACCNT_GEOSTATE_WID,dai.W_CUSTOMER_CLASS,dai.MGR_NAME,dai.MAIN_PH_NUM,dai.ST_ADDRESS ,dai.ACCNT_FLG ,dai.ACCNT_LOC,dai.ACCNT_REVN,dai.ACTIVE_FLG,dai.ACCNT_STATUS,dai.ACCNT_STATUS_I,dai.ACCNT_TYPE_CD,dai.ACCNT_TYPE_CD_I,dai.CUST_TYPE_CODE,dai.CUST_TYPE_NAME,dai.ANNUAL_REVN_CAT,dai.ANNUAL_REVN_CAT_I,dai.BASE_CURCY_CD,dai.BU_NAME,dai.CHANNEL_FLG,dai.CHNL_ANNL_SALES,dai.CHNL_SALES_GRWTH,dai.CHNL_SALES_GRWTH_I,dai.COMPETITOR_FLG,dai.CREATED_DT,dai.DIVN_FLG,dai.DIVN_TYPE_CD,dai.DIVN_TYPE_CD_I,dai.DOMESTIC_ULTIMATE_DUNS_NUM,dai.EMP_COUNT,dai.EXPERTISE,dai.EXPERTISE_I,dai.FORMED_DT,dai.FREQUENCY_CAT,dai.FREQUENCY_CAT_I,dai.FRGHT_TERMS_CD,dai.FRGHT_TERMS_CD_I,dai.GLOBAL_DUNS_NUM,dai.HIST_SLS_VOL,dai.LINE_OF_BUSINESS,dai.MONETARY_CAT,dai.MONETARY_CAT_I,dai.NUM_EMPLOY_CAT,dai.NUM_EMPLOY_CAT_I,dai.ORG_NAME,dai.ORG_MGR_NAME,dai.ORG_MAIN_PH_NUM,dai.ORG_ST_ADDRESS,dai.ORG_CITY,dai.ORG_STATE,dai.ORG_ZIPCODE,dai.ORG_TERR_NAME,dai.ORG_COUNTRY,dai.ORG_FLG,dai.ORG_PRTNR_FLG,dai.ORG_PRTNR_TIER,dai.ORG_PRTNR_TIER_I,dai.ORG_PRTNR_TYPE,dai.ORG_PRTNR_TYPE_I,dai.PARENT_DUNS_NUM,dai.PAR_INTEGRATION_ID,dai.PAR_ORG_NAME,dai.PRI_LST_NAME,dai.PROSPECT_FLG,dai.PRTNRSHP_START_DT,dai.PRTNR_FLG,dai.PRTNR_NAME,dai.PRTNR_SALES_RANK,dai.PR_COMPETITOR,dai.PR_INDUST_NAME,dai.PR_ORG_TRGT_MKT,dai.PR_PTSHP_MKTSEG,dai.PTNTL_SLS_VOL,dai.PTSHP_END_DT,dai.PTSHP_FEE_PAID_FLG,dai.PTSHP_PRTNR_ACCNT,dai.PTSHP_RENEWAL_DT,dai.PTSHP_SAT_INDEX,dai.PTSHP_STAGE,dai.PTSHP_STAGE_I,dai.PUBLIC_LISTING_FLG,dai.RECENCY_CAT,dai.RECENCY_CAT_I,dai.REGION,dai.REGION_I,dai.REVN_GROWTH_CAT,dai.REVN_GROWTH_CAT_I,dai.SALES_EMP_CNT,dai.SERVICE_EMP_CNT,dai.U_ACCNT_RVN,dai.U_ACNTRVN_CURCY_CD,dai.U_ACNTRVN_EXCH_DT,dai.U_CHNL_ANNL_SLS,dai.U_CH_ASLS_CURCY_CD,dai.U_CH_ASLS_EXCH_DT,dai.U_HIST_SLS_VOL,dai.U_HST_SLS_CURCY_CD,dai.U_HST_SLS_EXCH_DT,dai.U_PTL_SLS_CURCY_CD,dai.U_
PTL_SLS_EXCH_DT,dai.U_PTL_SLS_VOL,dai.VIS_PR_BU_ID,dai.VIS_PR_POS_ID,dai.VIS_PR_POSTN_DH_WID,dai.ACCNT_AHA_NUM,dai.ACCNT_CLASS,dai.ACCNT_CLASS_I,dai.ACCNT_HIN_NUM,dai.ACCNT_REGION,dai.ACCNT_VALUE,dai.ACCNT_VALUE_I,dai.AGENCY_FLG,dai.AGNC_CONTRACT_DT,dai.ANNUAL_REVENUE,dai.BOOK_VALUE,dai.BRANCH_FLG,dai.CALL_FREQUENCY,dai.CLIENT_FLG,dai.CREDIT_SCORE,dai.CRIME_TYPE_CD,dai.CRIME_TYPE_CD_I,dai.CURR_ASSET,dai.CURR_LIAB,dai.CUST_END_DT,dai.CUST_SINCE_DT,dai.CUST_STATUS_CODE,dai.DIVIDEND,dai.EXCHANGE_LOC,dai.FACILITY_FLG,dai.FACILITY_TYPE,dai.FACILITY_TYPE_I,dai.FIFTYTWO_HIGH,dai.FIFTYTWO_LOW,dai.FIN_METHOD,dai.FIN_METHOD_I,dai.GROSS_PROFIT,dai.GROWTH_HORIZ,dai.GROWTH_HORIZ_I,dai.GROWTH_OBJ,dai.GROWTH_OBJ_I,dai.GROWTH_PERCNTG,dai.IDENTIFIED_DT,dai.INVESTOR_FLG,dai.KEY_COMPETITOR,dai.LEADER_NAME,dai.LEGAL_FORM,dai.LEGAL_FORM_I,dai.LOYAL_SCORE1,dai.LOYAL_SCORE2,dai.LOYAL_SCORE3,dai.LOYAL_SCORE4,dai.LOYAL_SCORE5,dai.LOYAL_SCORE6,dai.LOYAL_SCORE7,dai.MARGIN_VS_INDUST,dai.MARGIN_VS_INDUST_I,dai.MARKET_CLASS,dai.MARKET_CLASS_I,dai.MARKET_TYPE,dai.MARKET_TYPE_I,dai.MED_PROC,dai.MEMBER_NUM,dai.MKT_POTENTIAL,dai.MRKT_CAP_PREF,dai.MRKT_CAP_PREF_I,dai.NET_INCOME,dai.NON_CASH_EXP,dai.NUMB_OF_BEDS,dai.NUM_PROD,dai.NUM_PROD_EFF_DT,dai.OBJECTIVE,dai.OBJECTIVE_I,dai.OPER_INCOME,dai.PERSIST_RATIO,dai.PRIM_MARKET,dai.PRIM_MARKET_I,dai.PROJ_EPS,dai.PR_SPEC_NAME,dai.PR_SYN_ID,dai.QUICK_RATIO,dai.SHARE_OUTST,dai.SRV_PROVDR_FLG,dai.STAT_REASON_CD,dai.STAT_REASON_CD_I,dai.TICKER,dai.TOTAL_DEBT,dai.TOTAL_NET_WORTH,dai.TOT_ASSET,dai.TOT_LIABILITY,dai.TRAIL_EPS,dai.U_EXCH_DT,dai.U_LOYAL_SCORE5,dai.VOLUME_TR,dai.X_NUM_PROD,dai.BIRTH_DT_WID,dai.NO_OF_CHILDREN,dai.LEGAL_NAME,dai.FAMILY_NAME,dai.OTHER_NAME,dai.PREFERRED_NAME,dai.INDV_TITLE,dai.INDV_MARITAL_STATE,dai.INDV_GENDER,dai.EMAIL_ADDRESS,dai.FAX_NUM,dai.PAGER_NUM,dai.MOBILE_NUM,dai.CUST_CAT_CODE,dai.CUST_CAT_NAME,dai.SIC_CODE,dai.SIC_NAME,dai.GOVT_ID_TYPE,dai.GOVT_ID_VALUE,dai.DUNNS_SITE_NAME,dai.DUNNS_GLOBAL_NAME,dai.DUNNS_LEGAL_NAME,dai.CUSTOM
ER_NUM,dai.ALT_CUSTOMER_NUM,dai.ALT_PHONE_NUM,dai.INTERNET_HOME_PAGE,dai.LEGAL_STRUCT_CODE,dai.LEGAL_STRUCT_NAME,dai.DIRECT_MKTG_FLG,dai.SOLICITATION_FLG,dai.CREATED_BY_WID,dai.CHANGED_BY_WID,dai.CREATED_ON_DT,dai.CHANGED_ON_DT,dai.AUX1_CHANGED_ON_DT,dai.AUX2_CHANGED_ON_DT,dai.AUX3_CHANGED_ON_DT,dai.AUX4_CHANGED_ON_DT,dai.SRC_EFF_FROM_DT,dai.SRC_EFF_TO_DT,dai.EFFECTIVE_FROM_DT,dai.EFFECTIVE_TO_DT,dai.DELETE_FLG,dai.CURRENT_FLG,dai.W_INSERT_DT,dai.W_UPDATE_DT,dai.DATASOURCE_NUM_ID,dai.ETL_PROC_WID,dai.INTEGRATION_ID,dai.TENANT_ID,dai.X_CUSTOM,dai.X_ADDR_CLEANSE_RESULT,dai.X_PR_INDUSTRY_ID,dai.X_PTNR_ACC_LVL_CD,dai.OPN_PIN_NUM,dai.X_ALIAS_NAME,dai.X_URL,dai.X_ACCNT_NAME_LOCAL,dai.X_ADDR_LINE_2,dai.X_ADDR_LINE_3,dai.X_COUNTY,dai.X_FAX_PH_NUM,dai.X_CONTRACT_VIS_FLG,dai.INDUSTRY_ONEVOICE,dai.X_ANNUAL_REVENUE,dai.X_NUM_EMPLOYEES,dai.INDUSTRY_ONEVOICE_SEGMENT,dai.X_GCD_ORG_ID,dai.X_MKTG_CREATED_FLG,dai.X_PR_ASGN_TYPE,dai.X_ADDR_DUNS,dai.X_ADDR_STATUS_CD,dai.X_MKTG_REGION,dai.X_MKTG_SUBREGION,dai.X_TRNSLT_CITY,dai.X_TRNSLT_STATE,dai.X_TRNSLT_POSTALCODE,dai.X_TRNSLT_LANG,dai.X_TRNSLT_ADDR1,dai.X_TRNSLT_ADDR2,dai.X_TRNSLT_ADDR3,dai.X_QUALITYSCORE,dai.X_SOURCE_TYPE_CD,dai.X_SIC_CODE,dai.X_PROVINCE,dai.X_GOVT_TYPE from dimple.gcm_account dai where 1=1 and dai.gsi_party_id=1017247578
this is my complete query. Please give me a solution. -
How to improve database and application performance
Hi,
Can anybody please help me out: how can we improve database and application performance?
Regards,
Bhatia
There is no simple answer. There is no DATABASE_FAST=TRUE initialization parameter. There are a myriad of reasons why an application performs poorly. It could be that the application (code and data relationships) is poorly designed. It could be that individual SQL statements are poorly written. It could be that you don't have enough CPU/memory/disk bandwidth/network bandwidth.
You need to determine the root cause of the poor performance and address it. If your application is poorly designed, you can tune the database until the cows come home and it won't make any difference. If you are trying to run 100k updates per second against a database hosted on hardware that only meets the minimal requirements to install Oracle ... well, hopefully you get the picture.
First, go to tahiti.oracle.com. Drill down to your selected Oracle product and version. There you will find the complete doc library. Find the Performance Tuning Guide.
Second, go to amazon.com and browse titles by Tom Kyte and Cary Millsap. I particularly recommend "Effective Oracle by Design" and "Optimizing Oracle Performance", though I see a lot of new titles that look promising (I think I'll be doing some buying!) -
How to Improve this PL/SQL script
Hi All,
I have a package/procedure that inserts data into a table, but it takes more than 2 days. Does anyone know how to modify this script to improve the performance and reduce the loading time? The following code is the procedure I use to insert the data:
Procedure INSERT_DATA (p_month IN DATE, p_product_id IN NUMBER ) IS
cursor c1 is select * from tab#1; --reference data
cursor c2 is select * from tab#2;
cursor c3 is select * from tab#3;
cursor c4 is select * from tab#4;
cursor c5 is select * from tab#5;
v_rec claim_table%rowtype;
Begin
for c1rec in c1 loop
exit when c1%notfound;
call procedure in package....;
open c2(c1rec.claim_no);
fetch c2 into v_location_cd ,v_claim_type_cd ;
close c2;
v_rec.location_cd := v_location_cd;
v_rec.claim_type_cd := v_claim_type_cd ;
open c3(c1rec.claim_no);
fetch c3 into v_col#3,v_col#4;
close c3;
v_rec.col#3 := v_col#3 ;
v_rec.col#4 := v_col#4 ;
open c4(c1rec.claim_no);
fetch c4 into v_col#5,v_col#6;
close c4;
v_rec.col#5 := v_col#5 ;
v_rec.col#6 := v_col#6 ;
insert into claim_table values ( v_rec.location_cd, v_rec.claim_type_cd , v_rec.col#3 , .......) ;
if (c1%rowcount/1000) = trunc(c1%rowcount /1000) then
commit;
end if;
end loop;
commit;
Exception
exception statement....
commit;
End;
Thanks All,
Mcka
A copy and paste of a reply I posted just an hour or so ago to the exact same approach used by a poster in [url http://forums.oracle.com/forums/thread.jspa?threadID=636929]this thread.
Yucky code IMO. You are using PL/SQL to drive a nested loop join. Why? SQL is by far more capable of joining tables and is not limited to a using a nested loop approach only.
Also, as you are using PL/SQL to drive it, it means that each cursor fetch in each (nested) FOR loop is a slow context switch from PL/SQL to SQL, in order to ship the row's data from the SQL engine to the PL/SQL engine. Only then to have that very same data shipped back (via yet another context switch) to the SQL engine to be inserted into table4.
This code violates one of the most rudimentary performance principles in Oracle - it is not maximizing SQL and it is not minimizing PL/SQL. It is doing the exact opposite.
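To illustrate that principle, the nested cursor loops above can usually collapse into one set-based statement along these lines. This is only a sketch: the table and column names are the placeholders from the original post, and the real join conditions would come from the parameterized cursors.

```sql
-- A single INSERT ... SELECT lets the SQL engine join and insert in one pass,
-- with no per-row context switches between PL/SQL and SQL.
INSERT INTO claim_table (location_cd, claim_type_cd, col3, col4, col5, col6)
SELECT t2.location_cd,
       t2.claim_type_cd,
       t3.col3, t3.col4,
       t4.col5, t4.col6
FROM   tab1 t1
JOIN   tab2 t2 ON t2.claim_no = t1.claim_no
JOIN   tab3 t3 ON t3.claim_no = t1.claim_no
JOIN   tab4 t4 ON t4.claim_no = t1.claim_no;

COMMIT;   -- one commit at the end of the load
```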
As for the ad-hoc commits inside the loop - these do not make much sense. They will generate more redo and more overhead. Also, when something goes pear-shaped in that process, some rows will have been processed, inserted and committed. Some not.
In this case, how do you expect to restart the failed process at the very first row that failed to be committed and continue processing from there? -
PL/SQL performance tuning
Hi
Can anybody tell me how to tune PL/SQL code?
tnr
nagesh
The book 'Oracle PL/SQL Tips and Techniques' by the TUSC guys is a good one.
The first thing is to use proven coding strategies, which is the key to gaining performance even without tuning; then apply proper tuning techniques. Tuning is no more than polishing: if it is a dent, polishing will not fix it. -
How to improve MacBook Pro performance
How can I improve the performance of my MacBook Pro? I hate the rainbow.
Sometimes it gets very slow! Can anyone help with some tips?
PS: I already ordered extra RAM.
You need to provide more information. When do you see the rainbow? Which applications do you have running? What OS is the system running? Open Activity Monitor from the Utilities folder - do you have any application using a lot of CPU?
-
SQL performance tuning in a RAC environment
Hi,
I am working on oracle 11.2.0.2.0 with 2 node RAC.
We have a batch process which runs every day and spends most of its time in cluster waits. Could someone please suggest what SQL / PL/SQL changes or hints are available to tune these cluster waits?
Also, if possible, please provide a link where I can find more information on this.
Thanks
Is your batch process multi-threaded or single-threaded?
If multi-threaded, and each thread is doing similar work, running the same code, and if your connections are load balanced across both nodes, it won't take long to end up waiting on cluster interconnect activity. You might try having the connections related to this batch job all go to one node.
In general, utilizing both nodes at the same time shouldn't be a problem, unless you have code that accesses the same segments on both nodes at the same time. That's when you'll see the worst problems in terms of cluster-related waits.
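As a first diagnostic step for the cluster waits described above, a query like this (a sketch, assuming you have access to the gv$ views) shows which cluster-class events dominate on each node:

```sql
-- Top cluster-class wait events per instance (RAC-wide gv$ view)
SELECT inst_id,
       event,
       total_waits,
       time_waited          -- in centiseconds
FROM   gv$system_event
WHERE  wait_class = 'Cluster'
ORDER  BY time_waited DESC;
```

If one or two events (e.g. gc-related waits) dominate on both instances, that supports the theory that both nodes are contending for the same segments.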
-Mark -
SQL performance tuning user guides
Hi
Does anyone have SQL performance user guides, and advice on the best way to use the EXPLAIN PLAN statement?
thanks
Maybe you should also have a look at Cost-Based Oracle Fundamentals by Jonathan Lewis (from Apress), which is a bit more complex but really great on SQL tuning.
-
Hi, I am using Oracle 9i. I have a SQL query like the one below which is taking more time to execute. Is there an alternate way to rewrite the same query using the RANK and PARTITION BY concepts? The tables (dw_csc_site_mappings smo, dw_csc_companies cco) have approximately 14 million rows.
SELECT /*+ RULE */
smo.company_target_id, smo.customer_id
FROM dw_csc_site_mappings smo, dw_csc_companies cco
WHERE cco.company_target_id = smo.company_target_id
AND cco.global_target_id IS NOT NULL
GROUP BY smo.company_target_id, smo.customer_id
HAVING COUNT (*) =
(SELECT MAX (COUNT (*))
FROM dw_csc_site_mappings smi, dw_csc_companies cci
WHERE smo.customer_id = smi.customer_id
AND cci.company_target_id = smi.company_target_id
AND cci.global_target_id IS NOT NULL
GROUP BY smi.company_target_id, smi.customer_id)
ORDER BY 1 ASC
Analytics might help.
You should try to find a way to access those big tables only once instead of twice.
untested try:
select company_target_id, customer_id, cnt
from (
      SELECT smo.company_target_id, smo.customer_id,
             count(*) cnt,
             max(count(*)) over (partition by smo.customer_id) max_cnt
      FROM   dw_csc_site_mappings smo, dw_csc_companies cco
      WHERE  cco.company_target_id = smo.company_target_id
      AND    cco.global_target_id IS NOT NULL
      GROUP BY smo.company_target_id, smo.customer_id
     ) v
WHERE v.cnt = v.max_cnt
ORDER BY 1 ASC -
Cannot Register for "Ask The Experts How to Improve Your PC's Performance"
I'm trying to register for this event, but every time I click the Signing Up link it takes me to "HP Passport New User Registration". I enter my information, then it says I am already a user and to try signing in. I am signed in, obviously, since I'm posting this. Can anyone help, as I would like to register for this event?
Thanks!
Susan
HP Pavilion a6300f PC
Windows Vista - Home Premium
Hi Susan,
You don't have to do anything to sign up for the event. Since you are already a member of the Forum, you will be able to ask questions. Just be sure to sign into the Forum before you click on the link that will appear right before the chat event starts.
Let me know if you have any more questions.
Look forward to seeing you on June 1st.
Siobhan
I work for HP, supporting the HP Experts who volunteer their time and technical knowledge to help others. -
How to Improve Report View performance
Hi all, I have a Webi report which runs in about 3 minutes. But when I click View, the report takes about 21 seconds (on average) to open up. Any ideas on how to improve the report view performance? Does it have anything to do with server load? Are there any server settings to tweak to speed it up? Any ideas are appreciated.
The requirement is that my web team has to strip off the Business Objects logo etc. (using the SDK) and display the report in my company web page, so it's looking sort of ugly as the web page is taking about 21 seconds just to display the report.
Some Report statistics:
Report size is about 90 MB, as it has about 300k rows of data (which I am aggregating using formulas)
Report has about 15 simple division formulas
Report is in drill mode. There are about 5 drill filters
Thanks,
Kon
Hi Larry,
I'll assume you are scheduling this report and viewing the instance in ~21 seconds. Is that correct?
We definitely need some environment info to go along with this post. Like Simone said, Product Version, Patch Level, and other OS, Hardware, App Server details would help as well.
There are certain properties of a document that can slow down the rendering of a report but we generally have to look at the logs to determine what part of the report is taking the longest time to process. Assuming this is an instance, I would be curious to know if it is quicker to come up if you immediately view it a second time?
If you were to turn on a trace, you would see a number of lines like this:
2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut
2011/06/15 20:11:54.153|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut: 0
2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut
2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut: 0.015
2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:QTDP_StreamUnit_SerializeOut: 0.015
2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveMe_Serial: 0.015
2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveAll_Serial: 0.015
The numbers at the end are how long the function took to run. Generally the function gives us an idea of what the engine was doing.
When evaluating performance issues, you can occasionally find a function that is taking long to run within the logs and based on the function and module names, it can sometime lead you to the reason it is taking longer than expected.
Another good test might be to run a very basic report to see how long it takes to come up. Even a report without a datasource would suffice as that will give you your baseline time on how long it takes to load the viewer, convert the WID file to XML and send it up through the application server to your browser. If a test report takes 15 seconds to view, then you are really only looking at 6 seconds for this other report.
Hope this helps and gets you started. More environment info would help take it further.
Thanks
Jb -
Learn how to improve your PC's performance on June 1st from 3:30-4:30 pm PDT. We'll have a team of experts available to answer your questions.
When it comes to performance, your PC is similar to your car. Both need to be cared for to keep them running well. But unlike your car, you don’t need to bring your PC into a shop for a tune-up. You can easily do it yourself if you know the right steps to take. Our experts will answer your questions and provide tips on how to make your PC run better. Topics that may be covered in this real-time chat event include the following:
How to customize your PC to increase performance;
How to prolong your notebook’s battery life;
How to choose the right video card, power supply, or add the right amount of memory; or
How to use the tools built into your PC that can make it run better and fix common problems.
While you can attend this real-time chat event without signing up in advance, you must be a member of the HP Support Forums to ask questions. Signing up is easy and only takes a few moments, plus it will allow you to post questions or give answers on the Forums.
And it is all free!
So, come and learn how to get the most out of your PC. Please be sure to come on time as space is limited!
Message Edited by timhsu on 05-12-2009 05:33 PM
I work for HP, supporting the HP Experts who volunteer their time and technical knowledge to help others.
This question was solved.
Here is the transcript of the chat event on improving PC performance.
Please note that I have altered the transcript so that follow up questions are included in the logical order.
I am in the process of planning the next chat event. I would love to hear what topics would interest you, what day of the week and time is best for you, and if you think an hour is too long.
So, if you get a minute, please let me know.
I work for HP, supporting the HP Experts who volunteer their time and technical knowledge to help others. -
How to improve performance of a SUM function in an inline SQL query
SELECT NVL(SUM(B1.T_AMOUNT),0) PAYMENT, B1.ACCOUNT_NUM, B1.BILL_SEQ
FROM (
  SELECT P.T_AMOUNT, P.ACCOUNT_NUM, P.BILL_SEQ
  FROM PAYMENT_DATA_VIEW P
  WHERE TRUNC(P.ACC_PAYMENT_DATE) < '01-JAN-2013'
  AND P.CUSTOMER_NAME = 'XYZ'
  AND P.CLASS_ID IN (-1,1,2,94)
) B1
GROUP BY B1.ACCOUNT_NUM, B1.BILL_SEQ
Above is the query. If we run the inner query it takes a few seconds to execute, but when we sum up the same amount and bill_seq using the inline view, it takes a long time to execute.
Note: Count of rows selected from inner query will be around >10 Lac
How can I improve the performance of this query?
Please suggest.
Thanks in advance
Thanks in advancea) Lac is not an international unit, so is not understood by everyone. This is an international forum so please use international units.
b) Please read the FAQ: {message:id=9360002} to learn how to format your question correctly for people to help you.
c) As your question relates to performance tuning, please also read the two threads linked to in the FAQ: {message:id=9360003} for an idea of what specific information you need to provide for people to help you tune your query. -
How can I improve the performance of the SQL below?
Hi,
How can I improve the performance of the SQL below? This SQL consumes CPU and causes wait events. It runs every 10 seconds. When I look at the session information in Enterprise Manager, I can see "Histogram for Wait Event: PX Deq Credit: send blkd".
I created some indexes. I heard that indexes are not used when there is a NULL, but when I checked the execution plan, it uses an index.
SELECT i.ID
FROM EXPRESS.invoices i
WHERE i.nbr IS NOT NULL
AND i.EXTRACT_BATCH IS NULL
AND i.SUB_TYPE='COD'
Explain Plan from Toad
SELECT STATEMENT CHOOSE Cost: 77 Bytes: 6,980 Cardinality: 349
4 PX COORDINATOR
3 PX SEND QC (RANDOM) SYS.:TQ10000 Cost: 77 Bytes: 6,980 Cardinality: 349
2 PX BLOCK ITERATOR Cost: 77 Bytes: 6,980 Cardinality: 349
1 INDEX FAST FULL SCAN INDEX EXPRESS.INVC_TRANS_INDX Cost: 77 Bytes: 6,980 Cardinality: 349
Execution Plan from Sqlplus
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 349 | 6980 | 77 | | | |
| 1 | PX COORDINATOR | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 349 | 6980 | 77 | Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR | | 349 | 6980 | 77 | Q1,00 | PCWC | |
|* 4 | INDEX FAST FULL SCAN| INVC_TRANS_INDX | 349 | 6980 | 77 | Q1,00 | PCWP | |
Predicate Information (identified by operation id):
4 - filter("I"."NBR" IS NOT NULL AND "I"."EXTRACT_BATCH" IS NULL AND "I"."SUB_TYPE"='COD')
Note
- 'PLAN_TABLE' is old version
- cpu costing is off (consider enabling it)
Statistics
141 recursive calls
0 db block gets
5568 consistent gets
0 physical reads
0 redo size
319 bytes sent via SQL*Net to client
458 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00
Redo NoWait %: 100.00
Buffer Hit %: 99.70
In-memory Sort %: 100.00
Library Hit %: 99.81
Soft Parse %: 99.77
Execute to Parse %: 63.56
Latch Hit %: 90.07
Parse CPU to Parse Elapsd %: 0.81
% Non-Parse CPU: 98.88
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
latch: library cache 12,626 16,757 1,327 62.6 Concurrency
CPU time 5,712 21.3
latch: session allocation 1,848,987 1,99 1 7.4 Other
PX Deq Credit: send blkd 1,242,265 981 1 3.7 Other
PX qref latch 1,405,819 726 1 2.7 Other
The database version is 10.2.0.1, but we haven't installed the 10.2.0.5 patch yet.
I am waiting for your comments.
Thanks in advance
Welcome to the forum.
I created some indexes. I heard that indexes are not used when there is a NULL, but when I checked the execution plan it uses an index.
What columns are indexed?
And what do:
select i.sub_type
, count(*)
from express.invoices i
where i.nbr is not null
and i.extract_batch is null
group by i.sub_type;
and
select i.sub_type
, count(*)
from express.invoices i
group by i.sub_type;
return?
Also, try using the {noformat}{noformat} tags when posting examples/execution plans etc.
See: HOW TO: Post a SQL statement tuning request - template posting for more tuning instructions.
It'll make a big difference.