Need alternatives to reduce Query execution time
Hi All,
The following are my DB details:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
I have the following block of code that takes 5-10 minutes on average. The stored procedure containing this block has three other similar blocks, so each full execution takes close to 30 minutes.
While the stored procedure is expected to execute around 15 times an hour, the slowness limits it to a maximum of 2-3 executions and places the entire module in disarray. Can someone please guide me on tuning this to execute faster?
Data is obtained from the STG_SEARCH_RESULT table based on the ln_file_no value; each ln_file_no maps to around 1000-5000 records in the table.
DELETE FROM ods_tmp_searchprocessing;

FOR c IN
  (SELECT searchid,
          REGEXP_SUBSTR(partcodes, '[^|]*', 1, no) partcodes,
          requestdatetime
   FROM   stg_search_result,
          (SELECT LEVEL no
           FROM   dual
           CONNECT BY LEVEL <=
             (SELECT ((2 * len) + 1)
              FROM   (SELECT MAX(LENGTH(partcodes) - LENGTH(REPLACE(partcodes, '|'))) len
                      FROM   stg_search_result
                      WHERE  apitransname IS NOT NULL)))
   WHERE  REGEXP_SUBSTR(partcodes, '[^|]*', 1, no) IS NOT NULL
   AND    filenumber = ln_file_no
   AND    is_valid = 'Y')
LOOP
  INSERT INTO ods_tmp_searchprocessing
    (searchid, code, requestdatetime)
  VALUES
    (c.searchid,
     c.partcodes,
     TO_TIMESTAMP(c.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'));
END LOOP;
I later on use the ODS_TMP_SEARCHPROCESSING table to feed data into another big transactional table.
Here are the table structures for STG_SEARCH_RESULT and ODS_TMP_SEARCHPROCESSING:
STG_SEARCH_RESULT has an index on FILENUMBER column:
DESC STG_SEARCH_RESULT
Name Null Type
SEARCHID NUMBER(20)
CEID NUMBER(19)
USERID VARCHAR2(15)
NAMESPACE CHAR(2)
CTC VARCHAR2(50)
APC VARCHAR2(50)
SESSIONID VARCHAR2(400)
REQUESTDATETIME VARCHAR2(19)
LANGCODE VARCHAR2(10)
DESTPRODCODE VARCHAR2(10)
DEVICECODE VARCHAR2(10)
USRAGENT VARCHAR2(255)
BWSRNAME VARCHAR2(30)
BWSRVER VARCHAR2(100)
BWSRPLATFORM VARCHAR2(30)
SRCH_PAGE VARCHAR2(10)
CONT_IND VARCHAR2(5)
SAVED_FORM NUMBER(2)
FREETXT_INCL NUMBER(2)
CNCPTEXP_INCL NUMBER(2)
SRCSLIST_INCL VARCHAR2(5)
COMPSLIST_INCL VARCHAR2(5)
SUBS_INCL NUMBER(2)
INDS_INCL NUMBER(2)
REGNS_INCL NUMBER(2)
LNGS_INCL NUMBER(2)
SRCH_DTRNG VARCHAR2(10)
DEDUP_STNG VARCHAR2(10)
SRCHFREETXTIN VARCHAR2(5)
REPUBNWS_EXL NUMBER(2)
RECPRMKTDATA_EXL NUMBER(2)
ORBSPRTCAL_EXL NUMBER(2)
LEADSNTNCDISP VARCHAR2(5)
RSLTSORTORDER VARCHAR2(50)
PERSORGRP VARCHAR2(2)
SHARETYPE VARCHAR2(10)
HDLNDISP_REQ VARCHAR2(10)
DFILT_DATE NUMBER(2)
DFILT_COMPS NUMBER(2)
DFILT_SRCS NUMBER(2)
DFILT_SUBS NUMBER(2)
DFILT_INDS NUMBER(2)
DFILT_NWSCLST NUMBER(2)
DFILT_KWRD NUMBER(2)
DFILT_EXEC NUMBER(2)
SUCCESS_IND VARCHAR2(2)
ERRORCODE VARCHAR2(20)
TOTALHDLNSFND NUMBER(10)
UNQHDLNVWD NUMBER(10)
DUPHDLN NUMBER(10)
ENTRY_CREATEDMONTH VARCHAR2(8)
FILENUMBER NUMBER(20)
LINENUMBER NUMBER(20)
ENTRY_CREATEDDATE DATE
ACCTNUM VARCHAR2(15)
AUTHLKP_INCL NUMBER(2)
DFILT_AUTH NUMBER(2)
USECONSLENS NUMBER(2)
IS_VALID VARCHAR2(2)
AUTOCOMPLETEDTERM NUMBER(2)
RESPONSEDATETIME VARCHAR2(19)
APICONSUMERNAME VARCHAR2(255)
APICONSUMERDETAILS VARCHAR2(500)
APIVERSION VARCHAR2(25)
APICONSUMERVERSION VARCHAR2(250)
APITRANSNAME VARCHAR2(50)
ADDNLORGNDATA VARCHAR2(250)
SRCHMODE VARCHAR2(30)
SRCGENRE VARCHAR2(1000)
PARTCODES VARCHAR2(1000)
SEARCHLANGCODE VARCHAR2(250)
SNIPPETTYPE VARCHAR2(20)
BLACKLISTKEYWRDS VARCHAR2(1000)
DAYSRANGE VARCHAR2(50)
RSLTSOFFSET VARCHAR2(10)
RESPONSEFORMAT VARCHAR2(50)
REQUESTORIP VARCHAR2(50)
MODIFIEDSRCHIND VARCHAR2(1)
DIDYOUMEANUSAGE VARCHAR2(500)
USERINITIATEDIND VARCHAR2(1)
SRCHDTRNG_STARTDATE DATE
SRCHDTRNG_ENDDATE DATE
FILTER_FREETEXTTERMS VARCHAR2(4000)
FILTER_COMPANYCODES VARCHAR2(4000)
FILTER_INDUSTRYCODES VARCHAR2(4000)
FILTER_REGIONCODES VARCHAR2(4000)
FILTER_SUBJECTCODES VARCHAR2(4000)
FILTER_SOURCECODES VARCHAR2(4000)
FILTER_LANGUAGECODES VARCHAR2(4000)
FILTER_AUTHORS VARCHAR2(4000)
FILTER_EXECUTIVES VARCHAR2(4000)
FILTER_ACCESSIONNUMS VARCHAR2(4000)
WORDCNTUSEDIND VARCHAR2(2)
CNT_ANDOPERATOR NUMBER(4)
CNT_OROPERATOR NUMBER(4)
CNT_NOTOPERATOR NUMBER(4)
CNT_SAMEOPERATOR NUMBER(4)
CNT_FIRSTOPERATOR NUMBER(4)
CNT_ATLEASTOPERATOR NUMBER(4)
CNT_PHRASEOPERATOR NUMBER(4)
CNT_WITHINOPERATOR NUMBER(4)
CNT_NEAROPERATOR NUMBER(4)
CNT_WILDCARDOPERATOR NUMBER(4)
TOTALSRCHTRANSACTIONTIME NUMBER(10,5)
SEARCHQUERYSTRING VARCHAR2(4000)
SRCHPAGE_ADDNLDTL VARCHAR2(10)
CNT_ADDFREETXT_SRC NUMBER(4)
CNT_ADDFREETXT_AUTH NUMBER(4)
CNT_ADDFREETXT_COMP NUMBER(4)
CNT_ADDFREETXT_SUBJ NUMBER(4)
CNT_ADDFREETXT_INDS NUMBER(4)
CNT_ADDFREETXT_RGNS NUMBER(4)
CNT_FREETEXTTERMS NUMBER(4)
CNT_COMPANYCODES NUMBER(4)
CNT_INDUSTRYCODES NUMBER(4)
CNT_REGIONCODES NUMBER(4)
CNT_SUBJECTCODES NUMBER(4)
CNT_SOURCECODES NUMBER(4)
CNT_LANGUAGECODES NUMBER(4)
CNT_AUTHORS NUMBER(4)
CNT_EXECUTIVES NUMBER(4)
CNT_ACCESSIONNUMS NUMBER(4)
TERMCOUNT NUMBER(4)
EPVALUE VARCHAR2(3000)
PRIMARYSRCH_IND VARCHAR2(2)
APCU VARCHAR2(20)
DFILT_FLATRGNS_NAVIGATOR NUMBER(2)
CNT_SUBSINCL_INSB NUMBER(5)
CNT_INDSINCL_INSB NUMBER(5)
CNT_REGNSINCL_INSB NUMBER(5)
CNT_LNGSINCL_INSB NUMBER(5)
CNT_AUTHLKPINCL_INSB NUMBER(5)
CNT_SRCSINCL_INSB NUMBER(5)
CNT_COMPSINCL_INSB NUMBER(5)
CNT_COMPLISTINSB NUMBER(5)
CNT_SRCSLISTINSB NUMBER(5)
DFILT_SRCFAMILIES_NAVIGATOR NUMBER(2)
ODS_TMP_SEARCHPROCESSING has no indexes:
DESC ODS_TMP_SEARCHPROCESSING
Name Null Type
SEARCHID NUMBER(20)
CODE VARCHAR2(100)
REQUESTDATETIME DATE
Well, you haven't posted the whole procedure, but a couple of comments.
First, rather than the DELETE at the beginning, a TRUNCATE would be faster; or, if it is really a temporary table, a global temporary table could be used. This depends on whether you need to keep the old data on a rollback, though.
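A minimal sketch of the global temporary table approach (assuming the three-column structure shown later in the thread; whether ON COMMIT DELETE ROWS or PRESERVE ROWS is right depends on how the procedure commits):

```sql
-- Hypothetical one-time DDL: recreate the staging table as a GTT.
-- Rows are private to each session and vanish automatically,
-- so no DELETE (with its undo/redo cost) is needed at the start.
CREATE GLOBAL TEMPORARY TABLE ods_tmp_searchprocessing (
  searchid        NUMBER(20),
  code            VARCHAR2(100),
  requestdatetime DATE
) ON COMMIT DELETE ROWS;  -- or ON COMMIT PRESERVE ROWS
```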
Second, do not use cursor loops to insert row by row (= slow-by-slow). Use INSERT ... SELECT instead.
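For example, the posted block could be collapsed into a single statement along these lines (a sketch based on the code above; ln_file_no is the procedure's variable, and the column list assumes the ODS_TMP_SEARCHPROCESSING structure shown below):

```sql
-- One set-based statement replaces the cursor loop and its
-- row-by-row INSERTs; the regexp split logic is unchanged.
INSERT INTO ods_tmp_searchprocessing (searchid, code, requestdatetime)
SELECT s.searchid,
       REGEXP_SUBSTR(s.partcodes, '[^|]*', 1, n.no),
       TO_TIMESTAMP(s.requestdatetime, 'yyyymmddhh24:mi:ss.ff4')
FROM   stg_search_result s,
       (SELECT LEVEL no
        FROM   dual
        CONNECT BY LEVEL <=
          (SELECT (2 * MAX(LENGTH(partcodes) - LENGTH(REPLACE(partcodes, '|')))) + 1
           FROM   stg_search_result
           WHERE  apitransname IS NOT NULL)) n
WHERE  s.filenumber = ln_file_no
AND    s.is_valid = 'Y'
AND    REGEXP_SUBSTR(s.partcodes, '[^|]*', 1, n.no) IS NOT NULL;
```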
If you need more help, you will need to give more detail, see: Re: 3. How to improve the performance of my query? / My query is running slow.
Similar Messages
-
How can we reduce query execution time? Which methods do we have to follow for optimization?
First, read this informative thread:
HOW TO: Post a SQL statement tuning request - template posting
and post the relevant details we need.
Execution plans and/or TRACE/TKPROF output can help you identify performance bottlenecks. -
Methods to reduce query execution time
Hi experts,
Can anybody suggest the steps/methods to reduce the time taken for query execution?
Thanks and regards,
Pradeep
Hi Pradeep,
I think you have already posted a similar thread:
query and load performance steps
Anyway, also check these notes:
SAP Note 557870: 'FAQ BW Query Performance'
SAP Note 567746: 'Composite note BW 3.x performance Query and Web'
How to design a good query:
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
Also check this:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Business Intelligence Performance Tuning [original link is broken]
Query Performance Improvement Tools
Regards,
Debjani -
How Can I reduce Query Execution time with use of filter
Dear Experts,
The query executes faster when there is no product filter; that filter can be a list containing more than 300 items. I am using the IN operator for the filter.
Maybe if you posted the quer[y][ies] we could get a better idea.
-
Reducing query execution time while handling large amt of data
Can you please give any suggestions for reducing the time required for queries to return results when fired on tables containing huge amounts of data (trillions of records)?
Realize that this is like getting a request to get someone a vehicle with no idea what it is to be used for.
Can you at least give us the query, an idea of the data in each table, and what indexes you have available? More would be better, but those are things you'd have access to even if you are not experienced enough with Oracle to get us explain plans and such (yes, to really do this right you'd need that, or need to work with someone who can get it for you). -
How can I reduce BEx Query execution time
Hi,
I have a question regarding query execution time in BEx.
I have a query that takes 45 mins to 1 hour to execute in BEx analyser. This query is run on a daily basis and hence I am keen to reduce the execution time. Are there any programs or function modules that can help in reducing query execution time?
Thanks and Regards!
Hi Sriprakash,
1. Check if your cube is performance tuned: in the manage cube from RSA1 / performance tab, check that all indexes and statistics are green. Aggregate indexes should be as well.
2. Condense your cubes regularly.
3. Evaluate the creation of an aggregate with all characteristics used in the query (RSDDV).
4. Evaluate the creation of a "change run aggregate": based on a standalone NavAttr (without its basic char in the aggr.), but pay attention to the consequent change run when loading master data.
5. Partition (physically) your cubes systematically when possible (RSDCUBE, menu, partitioning).
6. Consider logical partitioning (by year or comp_code or ...) and make use of multiproviders in order to keep targets not too big.
7. Consider creating secondary indexes when reporting on ODS (RSDODS).
8. Check whether the query runtime is due to the master data read, the infoprovider itself, the OLAP processor, and/or any other cause in tx ST03N.
9. Consider improving your master data reads by creating customized indexes (BITMAP if possible, depending on your data) on master data tables and/or attribute SIDs when using NavAttrs.
10. Check that your basis team did a good job and applied the proper DB parameters.
11. Last but not least: fine-tune your data model precisely.
hope this will give you an idea.
Cheers
Sunil -
Reduce the execution time for the below query
Hi,
Please help me to reduce the execution time on the following query .. if any tuning is possible.
I have a table A with the columns ID, ORG_LINEAGE, INCLUDE_IND. (ORG_LINEAGE is a string of IDs: if ID 5 reports to 4 and 4 to 1, the lineage for 5 is stored as the string -1-4-5.)
Below is the query ..
select ID
from A a
where INCLUDE_IND = '1'
and exists (
    select 1
    from A b
    where b.ID = '5'
    and b.ORG_LINEAGE like '%-'||a.ID||'-%'
)
order by ORG_LINEAGE;
The only constraint on the table A is the primary key on the ID column.
Following will be the execution plan:

Execution Plan
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=406 Card=379 Bytes=2653)
   1    0   SORT (ORDER BY) (Cost=27 Card=379 Bytes=2653)
   2    1     FILTER
   3    2       TABLE ACCESS (FULL) OF 'A' (Cost=24 Card=379 Bytes=2653)
   4    2       TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=6)
   5    4         INDEX (RANGE SCAN) OF 'ORG_LINEAGE' (NON-UNIQUE)

I order it by the org_lineage to get the first person.
So it is a result problem? The ORDER BY doesn't give you the first person, it gives you a sorted result set (of which there may be zero, one, or thousands).
If you only want one row from that, then you're spending a lot of time tuning the wrong query.
How do you know which ORG_LINEAGE row you want?
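If the goal is all ancestors in employee 5's chain, one alternative worth testing (a sketch only; it assumes REGEXP_COUNT, available from 11g, and that the IDs compare cleanly as strings) is to split the single lineage string for ID 5 once, instead of running a correlated LIKE against every row:

```sql
-- Read employee 5's lineage once, split '-1-4-5' into its IDs,
-- then join back to A - avoiding a LIKE probe per candidate row.
SELECT a.id
FROM   a
WHERE  a.include_ind = '1'
AND    a.id IN (SELECT REGEXP_SUBSTR(b.org_lineage, '[^-]+', 1, LEVEL)
                FROM   a b
                WHERE  b.id = '5'
                CONNECT BY LEVEL <= REGEXP_COUNT(b.org_lineage, '[^-]+'))
ORDER BY a.org_lineage;
```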
Maybe it would help if you posted some sample data. -
How to reduce the query execution time
hai all,
We have created a query on Purchasing Cube 0PUR_C01 for
Purchase Order (PO) analysis of single-vendor materials, but it is taking a long time to execute (about 45 seconds).
In the above Query we have used the following things:
In Columns:
i) Exceptional aggregation for maximum & minimum PO Net Price using reference characteristic as Calendar Day.
ii) Minimum PO Price value we have multiplied with Actual GR Quantity for the calculation of Impact of Lowest PO Net Price.
iii) Number of vendors calculated key figure.
In Rows:
i) Only Material
In Filters:
i) Plant with variable Select Option - Optional.
ii) Calendar Year / Month with Select Option - Optional.
iii) Material with Unassigned (#) excluded.
iv) Vendor with Unassigned (#) excluded.
The following is what we have used for performance:
i) Aggregates using Propose from query (only for this query).
ii) Partitioning on Calendar Year / Month (For 1 year 14 partitions) i.e. (04.2007 to 03.2008).
iii) Collapse.
iv) In RSRT we have set the following properties
Read Mode = H
Req.Status = 0
Cache Mode = 4
Persistence Mode = 3 (BLOB)
Optimization mode = 0.
Our inputs to this Query:
i) We are passing plant range 1201 to 1299.
ii) Calendar Year / Month 04.2007 to 03.2008.
So please suggest how to reduce the execution time.
Please help me.
Thanks,
Kiran Manyam
Hi,
First of all, it's a complete question with all the details. Good work.
As you partitioned the cube based on calmonth and you are also giving calmonth in the selection, it will definitely work towards improved query performance.
As you are putting plant values in the selection, is there any aggregate available on the plant characteristic? If not, creating an aggregate on plant will help.
Regards,
Yogesh -
Oracle View that stores the Query execution time
Hi Gurus
I am using Oracle 10g on Unix. I would like to know which data dictionary view stores the execution time of a query. If it is not stored, then how do I find the query execution time other than with the 'set timing on' command? What is the use of elapsed time, and what is the difference between execution time and elapsed time? How do I calculate the execution time of a query?
THanks
Ram
If you have a specific query you're going to run in SQL*Plus, just do
a 'set timing on' before you execute the query.
If you've got application SQL coming in from all over the place, you can
identify specific SQL in V$SQL and look at ELAPSED_TIME/EXECUTIONS
to get an average elapsed time.
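That V$SQL lookup can be sketched like this (column names as in the standard view; the LIKE pattern is a placeholder you would replace with a fragment of your own statement):

```sql
-- Average elapsed time per execution for matching cached statements.
-- ELAPSED_TIME is in microseconds; NULLIF guards against division by zero.
SELECT sql_id,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_secs,
       sql_text
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* my_query */%'   -- hypothetical marker comment
ORDER  BY avg_elapsed_secs DESC;
```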
If you've got an application running SQL, and you need to know the
specific timing of a specific execution (as opposed to an average),
you can use DBMS_SUPPORT to set trace in the session that your
application is running in, and then use TkProf to process the resulting
trace file. -
Can the datatype size in a table definition affect query execution time?
Hello Oracle Guru,
I have a question. Suppose I create a table with more than 100 columns
and I make every column datatype VARCHAR2(4000).
The actual data in every column is not more than 300 characters. In this case,
if I execute a select query,
does the Oracle cursor internally read up to 4000 characters one by one,
or does it read character by character and stop at the last character (e.g. 300)?
If I reduce the VARCHAR2 size to 300 instead of 4000 in the table definition,
will it affect select query execution time?
Thanks in advance.
When you declare a VARCHAR2 column, you specify the maximum size that can be stored in that column. The database stores the actual number of bytes (plus 2 bytes for length). So if you insert a 300-character string, only 302 bytes will be used (assuming the database character set is a single-byte character set).
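This is easy to check for yourself (a quick illustration; VSIZE reports the actual storage in bytes, independent of the column's declared length, and the byte counts below assume a single-byte character set):

```sql
-- Storage is driven by the data, not by the declared VARCHAR2 length.
-- VSIZE returns the number of bytes Oracle uses for the value.
SELECT VSIZE('abc')                AS short_val,  -- 3 bytes
       VSIZE(RPAD('x', 300, 'x')) AS long_val    -- 300 bytes
FROM   dual;
```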
SY. -
TO REDUCE THE EXECUTION TIME OF REPORT
HI,
CAN ANYONE TELL ME THAT, HOW CAN I REDUCE THE EXECUTION TIME OF THE REPORT. IS THERE ANY IDEA TO IMPROVE THE PERFORMANCE OF THE REPORT.Hi Santosh,
Good check out the following documentation
<b>Performance tuning</b>
For all entries
Nested selects
Select using JOINS
Use the selection criteria
Use the aggregated functions
Select with view
Select with index support
Select Into table
Select with selection list
Key access to multiple lines
Copying internal tables
Modifying a set of lines
Deleting a sequence of lines
Linear search vs. binary
Comparison of internal tables
Modify selected components
Appending two internal tables
Deleting a set of lines
Tools available in SAP to pin-point a performance problem
<b>Optimizing the load of the database</b>
For all entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Some steps that might make FOR ALL ENTRIES more efficient:
Removing duplicates from the driver table
Sorting the driver table
If possible, converting the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
FOR ALL ENTRIES IN i_tab
WHERE mykey >= i_tab-low and
mykey <= i_tab-high.
Nested selects
The plus:
Small amount of data
Mixing processing and reading of data
Easy to code - and understand
The minus:
Large amount of data
When mixed processing isn't needed
Performance killer no. 1
Select using JOINS
The plus
Very large amount of data
Similar to Nested selects - when the accesses are planned by the programmer
In some cases the fastest
Not so memory critical
The minus
Very difficult to program/understand
Mixing processing and reading of data not possible
Use the selection criteria
SELECT * FROM SBOOK.
CHECK: SBOOK-CARRID = 'LH' AND
SBOOK-CONNID = '0400'.
ENDSELECT.
SELECT * FROM SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
ENDSELECT.
Use the aggregated functions
C4A = '000'.
SELECT * FROM T100
WHERE SPRSL = 'D' AND
ARBGB = '00'.
CHECK: T100-MSGNR > C4A.
C4A = T100-MSGNR.
ENDSELECT.
SELECT MAX( MSGNR ) FROM T100 INTO C4A
WHERE SPRSL = 'D' AND
ARBGB = '00'.
Select with view
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T
WHERE DOMNAME = DD01L-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
SELECT * FROM DD01V
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
Select with index support
SELECT * FROM T100
WHERE ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
SELECT * FROM T002.
SELECT * FROM T100
WHERE SPRSL = T002-SPRAS
AND ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
ENDSELECT.
Select Into table
REFRESH X006.
SELECT * FROM T006 INTO X006.
APPEND X006.
ENDSELECT
SELECT * FROM T006 INTO TABLE X006.
Select with selection list
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT
SELECT DOMNAME FROM DD01L
INTO DD01L-DOMNAME
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT
Key access to multiple lines
LOOP AT TAB.
CHECK TAB-K = KVAL.
ENDLOOP.
LOOP AT TAB WHERE K = KVAL.
ENDLOOP.
Copying internal tables
REFRESH TAB_DEST.
LOOP AT TAB_SRC INTO TAB_DEST.
APPEND TAB_DEST.
ENDLOOP.
TAB_DEST[] = TAB_SRC[].
Modifying a set of lines
LOOP AT TAB.
IF TAB-FLAG IS INITIAL.
TAB-FLAG = 'X'.
ENDIF.
MODIFY TAB.
ENDLOOP.
TAB-FLAG = 'X'.
MODIFY TAB TRANSPORTING FLAG
WHERE FLAG IS INITIAL.
Deleting a sequence of lines
DO 101 TIMES.
DELETE TAB_DEST INDEX 450.
ENDDO.
DELETE TAB_DEST FROM 450 TO 550.
Linear search vs. binary
READ TABLE TAB WITH KEY K = 'X'.
READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
Comparison of internal tables
DESCRIBE TABLE: TAB1 LINES L1,
TAB2 LINES L2.
IF L1 <> L2.
TAB_DIFFERENT = 'X'.
ELSE.
TAB_DIFFERENT = SPACE.
LOOP AT TAB1.
READ TABLE TAB2 INDEX SY-TABIX.
IF TAB1 <> TAB2.
TAB_DIFFERENT = 'X'. EXIT.
ENDIF.
ENDLOOP.
ENDIF.
IF TAB_DIFFERENT = SPACE.
ENDIF.
IF TAB1[] = TAB2[].
ENDIF.
Modify selected components
LOOP AT TAB.
TAB-DATE = SY-DATUM.
MODIFY TAB.
ENDLOOP.
WA-DATE = SY-DATUM.
LOOP AT TAB.
MODIFY TAB FROM WA TRANSPORTING DATE.
ENDLOOP.
Appending two internal tables
LOOP AT TAB_SRC.
APPEND TAB_SRC TO TAB_DEST.
ENDLOOP
APPEND LINES OF TAB_SRC TO TAB_DEST.
Deleting a set of lines
LOOP AT TAB_DEST WHERE K = KVAL.
DELETE TAB_DEST.
ENDLOOP
DELETE TAB_DEST WHERE K = KVAL.
Tools available in SAP to pin-point a performance problem
The runtime analysis (SE30)
SQL Trace (ST05)
Tips and Tricks tool
The performance database
Optimizing the load of the database
Using table buffering
Using buffered tables improves performance considerably. Note that in some cases a statement cannot be used with a buffered table; when using these statements the buffer will be bypassed. These statements are:
Select DISTINCT
ORDER BY / GROUP BY / HAVING clause
Any WHERE clause that contains a subquery or an IS NULL expression
JOINs
A SELECT ... FOR UPDATE
If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition to the SELECT clause.
Use the ABAP SORT Clause Instead of ORDER BY
The ORDER BY clause is executed on the database server while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets this might not be feasible and you would want to let the database server sort it.
Avoid the SELECT DISTINCT statement
As with the ORDER BY clause, it could be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
Good Luck and thanks
AK -
How to get query execution time without running...?
Hi ,
I have a requirement as follows:
I have 3 SQL statements. I need to execute only the one whose execution time is the least.
Can anyone help me find the query execution time without running that query and without using explain plan?
Thanks,
Rajesh
Kim Berg Hansen wrote:
But you have ruled out explain plan for some reason, so I cannot help you.
OP might get some answers if the query was executed before - but only since restart. Check the V$SQL dynamic performance view for SQL_TEXT = your query. Then ROUND(ELAPSED_TIME / EXECUTIONS / 1000000) will give you the average elapsed time in seconds.
SY.
Edited by: Solomon Yakobson on Apr 3, 2012 8:44 AM -
Dear SCN,
I am new to the BOBJ environment. I have created a Webi report on top of a BEx query using a BICS connection. The BEx query is built for Vendor Ageing Analysis. The BEx query takes very little time to execute the report (max 1 min), but the Webi report takes around 5 minutes when I click refresh. I have not used any conditions, filters, or restrictions at the Webi level; all are done at the BEx level only.
Please let me know techniques to optimize the query execution time in Webi. Currently we are on BO 4.0.
Regards,
PRK
Hi Praveen,
Go through this document for performance optimization using BICS connection
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&… -
Hi,
I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new one.
The query executes within a second from RapidSQL. The problem I'm facing is that it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions; it executes properly.
The query:
SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
FROM MyTable
WHERE SomeDate= date_entered_by_user AND SomeString IN ("aaa","bbb")
GROUP BY aaa, bbb
I have an existing clustered index on the SomeDate and SomeString fields.
To check, I replaced the where clause with:
WHERE SomeDate = date_entered_by_user AND SomeString = "aaa"
No improvements.
What could be the problem?
Thank you,
Lobo
It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created, it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT/INSERT/UPDATE/DELETE statements and create the plan over and over again.
The stored execution plan will enable the engine to execute the query faster.
Query execution time estimation....
Hi All,
Is it possible to estimate query execution time using explain plan?
Thanks in advance,
Santosh.The cost estimated by the cost based optimizer is actually representing the time it takes to process the statement expressed in units of the single block read-time. Which means if you know the estimated time a single block read request requires you can translate this into an actual time.
Starting with Oracle 9i this information (the time to perform single block/multi block read requests) is actually available if you gather system statistics.
And this is what 10g actually does, as it shows an estimated TIME in the explain plan output based on these assumptions. Note that 10g by default uses system statistics, even if they are not explicitly gathered. In this case Oracle 10g uses the NOWORKLOAD statistics generated on the fly at instance startup.
Of course the time estimates shown by Oracle 10g may not even be close to the actual execution time as it is only an estimate based on a model and input values (statistics) and therefore might be way off due to several reasons, the same applies in principle to the cost shown.
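That estimated TIME column is easy to see (a quick sketch using the standard DBMS_XPLAN interface available from 10g onward; the table and predicate are placeholders):

```sql
-- EXPLAIN PLAN populates PLAN_TABLE without executing the statement;
-- DBMS_XPLAN.DISPLAY renders it, including the TIME column that is
-- derived from system statistics as described above.
EXPLAIN PLAN FOR
  SELECT *
  FROM   employees            -- hypothetical table
  WHERE  department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```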
Regards,
Randolf
Oracle related stuff:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/