Reducing query execution time
How can we reduce query execution time? Which methods should we follow for optimization?
First, read this informative thread:
HOW TO: Post a SQL statement tuning request - template posting
and post the relevant details we need.
Execution plans and/or TRACE/TKPROF output can help you identify performance bottlenecks.
Similar Messages
-
Methods to reduce query execution time
Hi experts,
Can anybody suggest the steps/methods to reduce the time taken for query execution?
Thanks and regards
Pradeep
Hi Pradeep,
I think you have already posted a similar thread:
query and load performance steps
Anyway, also check these notes:
SAP Note 557870: 'FAQ - BW Query Performance'
SAP Note 567746: 'Composite note BW 3.x performance Query and Web'
How to design a good query:
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
Also check this:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Business Intelligence Performance Tuning [original link is broken]
Query Performance Improvement Tools
Regards,
Debjani -
How Can I reduce Query Execution time with use of filter
Dear Experts,
Query execution is faster when there is no product filter; the filtered product list can contain more than 300 items.
I am using the IN operator for that filter.
Maybe if you posted the query (or queries) we could get a better idea.
-
Need alternatives to reduce Query execution time
Hi All,
The following are my DB details:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
I have the following block of code that is taking 5-10 minutes on average; the stored proc containing this block has 3 other such blocks, and each full execution is thus taking close to 30 minutes.
While the stored proc is expected to execute around 15 times an hour, the slowness is limiting it to a maximum of 2-3 executions and placing the entire module in disarray. Can someone please guide me on tuning this to execute faster?
From the STG_SEARCH_RESULT table, data is obtained based on ln_file_no value -- each ln_file_no maps to around 1000-5000 records in the table.
DELETE FROM ods_temp_searchprocessing;
FOR c IN
  (SELECT searchid,
          REGEXP_SUBSTR(partcodes, '[^|]*', 1, no) partcodes,
          requestdatetime
   FROM   stg_search_result,
          (SELECT LEVEL no
           FROM   dual
           CONNECT BY LEVEL <=
             (SELECT ((2 * len) + 1)
              FROM   (SELECT MAX(LENGTH(partcodes) - LENGTH(REPLACE(partcodes, '|'))) len
                      FROM   stg_search_result
                      WHERE  apitransname IS NOT NULL)))
   WHERE  REGEXP_SUBSTR(partcodes, '[^|]*', 1, no) IS NOT NULL
   AND    filenumber = ln_file_no
   AND    is_valid = 'Y')
LOOP
  INSERT INTO ods_tmp_searchprocessing
    (searchid, code, requestdatetime)
  VALUES
    (c.searchid,
     c.partcodes,
     TO_TIMESTAMP(c.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'));
END LOOP;
Later on, I use the ODS_TMP_SEARCHPROCESSING table to feed data into another big transactional table.
Here are the table structures for STG_SEARCH_RESULT and ODS_TMP_SEARCHPROCESSING:
STG_SEARCH_RESULT has an index on FILENUMBER column:
DESC STG_SEARCH_RESULT
Name Null Type
SEARCHID NUMBER(20)
CEID NUMBER(19)
USERID VARCHAR2(15)
NAMESPACE CHAR(2)
CTC VARCHAR2(50)
APC VARCHAR2(50)
SESSIONID VARCHAR2(400)
REQUESTDATETIME VARCHAR2(19)
LANGCODE VARCHAR2(10)
DESTPRODCODE VARCHAR2(10)
DEVICECODE VARCHAR2(10)
USRAGENT VARCHAR2(255)
BWSRNAME VARCHAR2(30)
BWSRVER VARCHAR2(100)
BWSRPLATFORM VARCHAR2(30)
SRCH_PAGE VARCHAR2(10)
CONT_IND VARCHAR2(5)
SAVED_FORM NUMBER(2)
FREETXT_INCL NUMBER(2)
CNCPTEXP_INCL NUMBER(2)
SRCSLIST_INCL VARCHAR2(5)
COMPSLIST_INCL VARCHAR2(5)
SUBS_INCL NUMBER(2)
INDS_INCL NUMBER(2)
REGNS_INCL NUMBER(2)
LNGS_INCL NUMBER(2)
SRCH_DTRNG VARCHAR2(10)
DEDUP_STNG VARCHAR2(10)
SRCHFREETXTIN VARCHAR2(5)
REPUBNWS_EXL NUMBER(2)
RECPRMKTDATA_EXL NUMBER(2)
ORBSPRTCAL_EXL NUMBER(2)
LEADSNTNCDISP VARCHAR2(5)
RSLTSORTORDER VARCHAR2(50)
PERSORGRP VARCHAR2(2)
SHARETYPE VARCHAR2(10)
HDLNDISP_REQ VARCHAR2(10)
DFILT_DATE NUMBER(2)
DFILT_COMPS NUMBER(2)
DFILT_SRCS NUMBER(2)
DFILT_SUBS NUMBER(2)
DFILT_INDS NUMBER(2)
DFILT_NWSCLST NUMBER(2)
DFILT_KWRD NUMBER(2)
DFILT_EXEC NUMBER(2)
SUCCESS_IND VARCHAR2(2)
ERRORCODE VARCHAR2(20)
TOTALHDLNSFND NUMBER(10)
UNQHDLNVWD NUMBER(10)
DUPHDLN NUMBER(10)
ENTRY_CREATEDMONTH VARCHAR2(8)
FILENUMBER NUMBER(20)
LINENUMBER NUMBER(20)
ENTRY_CREATEDDATE DATE
ACCTNUM VARCHAR2(15)
AUTHLKP_INCL NUMBER(2)
DFILT_AUTH NUMBER(2)
USECONSLENS NUMBER(2)
IS_VALID VARCHAR2(2)
AUTOCOMPLETEDTERM NUMBER(2)
RESPONSEDATETIME VARCHAR2(19)
APICONSUMERNAME VARCHAR2(255)
APICONSUMERDETAILS VARCHAR2(500)
APIVERSION VARCHAR2(25)
APICONSUMERVERSION VARCHAR2(250)
APITRANSNAME VARCHAR2(50)
ADDNLORGNDATA VARCHAR2(250)
SRCHMODE VARCHAR2(30)
SRCGENRE VARCHAR2(1000)
PARTCODES VARCHAR2(1000)
SEARCHLANGCODE VARCHAR2(250)
SNIPPETTYPE VARCHAR2(20)
BLACKLISTKEYWRDS VARCHAR2(1000)
DAYSRANGE VARCHAR2(50)
RSLTSOFFSET VARCHAR2(10)
RESPONSEFORMAT VARCHAR2(50)
REQUESTORIP VARCHAR2(50)
MODIFIEDSRCHIND VARCHAR2(1)
DIDYOUMEANUSAGE VARCHAR2(500)
USERINITIATEDIND VARCHAR2(1)
SRCHDTRNG_STARTDATE DATE
SRCHDTRNG_ENDDATE DATE
FILTER_FREETEXTTERMS VARCHAR2(4000)
FILTER_COMPANYCODES VARCHAR2(4000)
FILTER_INDUSTRYCODES VARCHAR2(4000)
FILTER_REGIONCODES VARCHAR2(4000)
FILTER_SUBJECTCODES VARCHAR2(4000)
FILTER_SOURCECODES VARCHAR2(4000)
FILTER_LANGUAGECODES VARCHAR2(4000)
FILTER_AUTHORS VARCHAR2(4000)
FILTER_EXECUTIVES VARCHAR2(4000)
FILTER_ACCESSIONNUMS VARCHAR2(4000)
WORDCNTUSEDIND VARCHAR2(2)
CNT_ANDOPERATOR NUMBER(4)
CNT_OROPERATOR NUMBER(4)
CNT_NOTOPERATOR NUMBER(4)
CNT_SAMEOPERATOR NUMBER(4)
CNT_FIRSTOPERATOR NUMBER(4)
CNT_ATLEASTOPERATOR NUMBER(4)
CNT_PHRASEOPERATOR NUMBER(4)
CNT_WITHINOPERATOR NUMBER(4)
CNT_NEAROPERATOR NUMBER(4)
CNT_WILDCARDOPERATOR NUMBER(4)
TOTALSRCHTRANSACTIONTIME NUMBER(10,5)
SEARCHQUERYSTRING VARCHAR2(4000)
SRCHPAGE_ADDNLDTL VARCHAR2(10)
CNT_ADDFREETXT_SRC NUMBER(4)
CNT_ADDFREETXT_AUTH NUMBER(4)
CNT_ADDFREETXT_COMP NUMBER(4)
CNT_ADDFREETXT_SUBJ NUMBER(4)
CNT_ADDFREETXT_INDS NUMBER(4)
CNT_ADDFREETXT_RGNS NUMBER(4)
CNT_FREETEXTTERMS NUMBER(4)
CNT_COMPANYCODES NUMBER(4)
CNT_INDUSTRYCODES NUMBER(4)
CNT_REGIONCODES NUMBER(4)
CNT_SUBJECTCODES NUMBER(4)
CNT_SOURCECODES NUMBER(4)
CNT_LANGUAGECODES NUMBER(4)
CNT_AUTHORS NUMBER(4)
CNT_EXECUTIVES NUMBER(4)
CNT_ACCESSIONNUMS NUMBER(4)
TERMCOUNT NUMBER(4)
EPVALUE VARCHAR2(3000)
PRIMARYSRCH_IND VARCHAR2(2)
APCU VARCHAR2(20)
DFILT_FLATRGNS_NAVIGATOR NUMBER(2)
CNT_SUBSINCL_INSB NUMBER(5)
CNT_INDSINCL_INSB NUMBER(5)
CNT_REGNSINCL_INSB NUMBER(5)
CNT_LNGSINCL_INSB NUMBER(5)
CNT_AUTHLKPINCL_INSB NUMBER(5)
CNT_SRCSINCL_INSB NUMBER(5)
CNT_COMPSINCL_INSB NUMBER(5)
CNT_COMPLISTINSB NUMBER(5)
CNT_SRCSLISTINSB NUMBER(5)
DFILT_SRCFAMILIES_NAVIGATOR NUMBER(2)
ODS_TMP_SEARCHPROCESSING has no indexes:
DESC ODS_TMP_SEARCHPROCESSING
Name Null Type
SEARCHID NUMBER(20)
CODE VARCHAR2(100)
REQUESTDATETIME DATE
Well, you haven't posted the whole procedure, but a couple of comments.
Rather than the delete at the beginning, a truncate would be faster - or if it is really a temp table, a global temporary table could be used.
This depends on whether you need to keep the old data on a rollback though.
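For the global temporary table option, the one-time DDL might look like this. This is only a sketch: the column types are taken from the DESC output in the post, and it assumes ODS_TMP_SEARCHPROCESSING really is session-scoped scratch data (the post deletes from ods_temp_searchprocessing but inserts into ods_tmp_searchprocessing; this assumes they are the same table).

```sql
-- Sketch: rows vanish at COMMIT, so no DELETE or TRUNCATE is needed
-- at the start of each run, and no undo is generated for the cleanup.
CREATE GLOBAL TEMPORARY TABLE ods_tmp_searchprocessing (
  searchid        NUMBER(20),
  code            VARCHAR2(100),
  requestdatetime DATE
) ON COMMIT DELETE ROWS;
```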
Second comment - do not use cursor loops to insert row-by-row (= slow-by-slow).
Use INSERT ... SELECT instead.
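For this particular loop, a set-based version might look roughly like the following. It is a sketch assuming the same ln_file_no PL/SQL variable and the tables shown in the post; compare row counts against the cursor version before switching.

```sql
-- Sketch: one set-based statement instead of a row-by-row cursor loop.
-- ln_file_no is assumed to be the same PL/SQL variable used in the post.
INSERT INTO ods_tmp_searchprocessing (searchid, code, requestdatetime)
SELECT s.searchid,
       REGEXP_SUBSTR(s.partcodes, '[^|]*', 1, n.no),
       TO_TIMESTAMP(s.requestdatetime, 'yyyymmddhh24:mi:ss.ff4')
FROM   stg_search_result s
       CROSS JOIN (SELECT LEVEL no
                   FROM   dual
                   CONNECT BY LEVEL <=
                     (SELECT (2 * MAX(LENGTH(partcodes)
                                      - LENGTH(REPLACE(partcodes, '|')))) + 1
                      FROM   stg_search_result
                      WHERE  apitransname IS NOT NULL)) n
WHERE  s.filenumber = ln_file_no
AND    s.is_valid = 'Y'
AND    REGEXP_SUBSTR(s.partcodes, '[^|]*', 1, n.no) IS NOT NULL;
```

A single INSERT ... SELECT lets the engine process the whole set in one pass instead of context-switching between PL/SQL and SQL for every row.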
If you need more help, you will need to give more detail, see: Re: 3. How to improve the performance of my query? / My query is running slow. -
Reducing query execution time while handling large amt of data
Can you please give any suggestions for reducing the time required for queries fired on tables containing huge amounts of data (trillions of records)?
Realize that this is like getting a request to get someone a vehicle with no idea what it is to be used for.
Can you at least give us the query, an idea of the data in each table, and what indexes you have available? More would be better, but those are things you'd have access to even if you are not experienced enough with Oracle to get us explain plans and such (yes, to really do this right you'd need that, or need to work with someone who can get it for you). -
How can I reduce BEx Query execution time
Hi,
I have a question regarding query execution time in BEx.
I have a query that takes 45 mins to 1 hour to execute in BEx analyser. This query is run on a daily basis and hence I am keen to reduce the execution time. Are there any programs or function modules that can help in reducing query execution time?
Thanks and Regards!
Hi Sriprakash,
1. Check if your cube is performance tuned: in the manage cube screen from RSA1, performance tab, check that all indexes and statistics are green. Aggregate indexes should be as well.
2. Condense your cubes regularly.
3. Evaluate the creation of an aggregate with all characteristics used in the query (RSDDV).
4. Evaluate the creation of a "change run aggregate": based on a standalone NavAttr (without its basic char in the aggregate), but pay attention to the consequent change run when loading master data.
5. Partition (physically) your cubes systematically when possible (RSDCUBE, menu, partitioning).
6. Consider logical partitioning (by year or comp_code or ...) and make use of MultiProviders in order to keep targets from getting too big.
7. Consider creating secondary indexes when reporting on ODS (RSDODS).
8. Check in transaction ST03N whether the query runtime is due to the master data read, the InfoProvider itself, the OLAP processor, and/or any other cause.
9. Consider improving your master data reads by creating customized indexes (BITMAP if possible, depending on your data) on master data tables and/or attribute SIDs when using NavAttrs.
10. Check that your basis team did a good job and applied the proper DB parameters.
11. Last but not least: fine-tune your data model precisely.
Hope this gives you an idea.
Cheers
Sunil -
Reduce the execution time for the below query
Hi,
Please help me reduce the execution time of the following query, if any tuning is possible.
I have a table A with the columns :
ID, ORG_LINEAGE, INCLUDE_IND (the org lineage is a string of IDs: if ID 5 reports to 4 and 4 to 1, the lineage for 5 will be stored as the string -1-4-5)
Below is the query ..
select ID
from A a
where INCLUDE_IND = '1'
and exists (
    select 1
    from A b
    where b.ID = '5'
    and b.ORG_LINEAGE like '%-'||a.ID||'-%'
)
order by ORG_LINEAGE;
The only constraint on the table A is the primary key on the ID column.
Following is the execution plan:
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=406 Card=379 Bytes=2653)
1 0 SORT (ORDER BY) (Cost=27 Card=379 Bytes=2653)
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'A' (Cost=24 Card=379 Bytes=2653)
4 2 TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=6)
5 4 INDEX (RANGE SCAN) OF 'ORG_LINEAGE' (NON-UNIQUE)
I order it by the org_lineage to get the first person. So is it a result problem?
The order by doesn't give you the first person; it gives you a sorted result set (of which there may be zero, one, or thousands).
If you only want one row from that, then you're spending a lot of time tuning the wrong query.
How do you know which ORG_LINEAGE row you want?
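One alternative worth sketching, based on the lineage format described above ('-1-4-5') and assuming a version that supports REGEXP_COUNT (11g), is to split ID 5's own lineage string once and join the extracted IDs back to A, instead of running the unindexable LIKE for every candidate row:

```sql
-- Sketch: extract the IDs packed into ID 5's ORG_LINEAGE and join them
-- back to A, rather than probing every row with LIKE '%-'||a.ID||'-%'.
SELECT a.id
FROM   a
JOIN  (SELECT REGEXP_SUBSTR(b.org_lineage, '[^-]+', 1, LEVEL) AS lineage_id
       FROM   a b
       WHERE  b.id = '5'
       CONNECT BY LEVEL <= REGEXP_COUNT(b.org_lineage, '[^-]+')) l
  ON   a.id = l.lineage_id
WHERE  a.include_ind = '1'
ORDER  BY a.org_lineage;
```

This reads the single '5' row once and then uses the primary key on ID for the lookups; verify it returns the same rows as the original before relying on it.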
Maybe it would help if you posted some sample data. -
How to reduce the query execution time
hai all,
We have created a query on Purchasing Cube 0PUR_C01 for
Purchase Order (PO) analysis of single-vendor materials, but it is taking a long time to execute (about 45 seconds).
In the above Query we have used the following things:
In Columns:
i) Exceptional aggregation for maximum & minimum PO Net Price using reference characteristic as Calendar Day.
ii) We have multiplied the minimum PO Price value by Actual GR Quantity to calculate the Impact of Lowest PO Net Price.
iii) A "number of vendors" calculated key figure.
In Rows:
i) Only Material
In Filters:
i) Plant with variable Select Option - Optional.
ii) Calendar Year / Month with Select Option - Optional.
iii) Material with Unassigned (#) excluded.
iv) Vendor with Unassigned (#) excluded.
We have used the following for performance:
i) Aggregates using "Propose from query" (only for this query).
ii) Partitioning on Calendar Year / Month (for 1 year, 14 partitions), i.e. 04.2007 to 03.2008.
iii) Collapse.
iv) In RSRT we have set the following properties:
Read Mode = H
Req. Status = 0
Cache Mode = 4
Persistence Mode = 3 (BLOB)
Optimization Mode = 0
Our inputs to this query:
i) We are passing plant range 1201 to 1299.
ii) Calendar Year / Month 04.2007 to 03.2008.
So please suggest how to reduce the execution time.
Thanks,
Kiran Manyam
Hi,
First of all, it's a complete question with all the details. Good work.
As you partitioned the cube based on calmonth and are also giving calmonth in the selection, it will definitely work towards improved query performance.
As you are putting plant values in the selection, is there an aggregate available on the plant characteristic? If not, creating an aggregate on plant will help.
Regards,
Yogesh -
Can datatype size in a table definition affect query execution time?
Hello Oracle Guru,
I have one question. Suppose I create a table with more than 100 columns
and give every column the datatype VARCHAR2(4000).
The actual data in every column is never more than 300 characters. In this case,
if I execute a SELECT query,
does the Oracle cursor internally read up to 4000 characters per column,
or does it read character by character and stop at the last character (e.g. 300)?
If I declare the columns as VARCHAR2(300) instead of VARCHAR2(4000) in the table definition,
will that affect SELECT query execution time?
Thanks in advance.
When you declare a VARCHAR2 column you specify the maximum size that can be stored in that column. The database stores the actual number of bytes (plus 2 bytes for the length). So if you insert a 300 character string, only 302 bytes will be used (assuming the database character set is a single-byte character set).
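You can see this directly with VSIZE, which returns the number of bytes Oracle actually stores for a value. A quick sketch against a throwaway table (the table name is made up for the example):

```sql
-- Hypothetical demo table: a wide VARCHAR2(4000) column holding a short value.
CREATE TABLE vsize_demo (txt VARCHAR2(4000));
INSERT INTO vsize_demo VALUES (RPAD('x', 300, 'x'));

-- VSIZE reports the bytes actually stored, not the declared maximum:
-- in a single-byte character set this shows 300, not 4000.
SELECT VSIZE(txt) FROM vsize_demo;
```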
SY. -
Dear SCN,
I am new to the BOBJ environment. I have created a WebI report on top of a BEx query using a BICS connection. The BEx query is built for Vendor Ageing Analysis. The BEx query takes very little time to execute the report (max 1 min), but the WebI report takes around 5 minutes when I click refresh. I have not applied any conditions, filters, or restrictions at the WebI level; all are done at the BEx level only.
Please let me know techniques to optimize the query execution time in WebI. Currently we are on BO 4.0.
Regards,
PRK
Hi Praveen,
Go through this document for performance optimization using a BICS connection:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&… -
Oracle View that stores the Query execution time
Hi Gurus
I'm using Oracle 10g on Unix. I would like to know which data dictionary view stores the execution time of a query. If it is not stored, then how can I find the query execution time other than with the 'set timing on' command? What is the use of elapsed time, and what is the difference between execution time and elapsed time? How do I calculate the execution time of a query?
Thanks
Ram
If you have a specific query you're going to run in SQL*Plus, just do a 'set timing on' before you execute the query.
If you've got application SQL coming in from all over the place, you can identify specific SQL in V$SQL and look at ELAPSED_TIME/EXECUTIONS to get an average elapsed time.
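As a rough sketch of such a V$SQL lookup (the column names are real; the LIKE filter is just a placeholder for however you identify your statement):

```sql
-- Average elapsed time per execution for matching statements.
-- ELAPSED_TIME in V$SQL is reported in microseconds.
SELECT sql_id,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_sec
FROM   v$sql
WHERE  sql_text LIKE 'SELECT%MyTable%'
ORDER  BY avg_elapsed_sec DESC;
```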
If you've got an application running SQL and you need to know the specific timing of a specific execution (as opposed to an average), you can use DBMS_SUPPORT to set trace in the session that your application is running in, and then use TkProf to process the resulting trace file. -
Hi,
I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
The query executes within a second from RapidSQL. The problem I'm facing is that it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions; it executes properly.
The query:
SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
FROM MyTable
WHERE SomeDate = date_entered_by_user AND SomeString IN ("aaa","bbb")
GROUP BY aaa, bbb
I have an existing clustered index on the SomeDate and SomeString fields.
To check, I replaced the where clause with
WHERE SomeDate = date_entered_by_user AND SomeString = "aaa"
No improvements.
What could be the problem?
Thank you,
Lobo
It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up execution inside the RDBMS is to streamline the internal operations of the interpreter.
When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created, it is stored and reused whenever the stored procedure is run. So whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT/INSERT/UPDATE/DELETE statements and create the plan over and over again.
The stored execution plan will enable the engine to execute the query faster.
-
Query execution time estimation....
Hi All,
Is it possible to estimate query execution time using explain plan?
Thanks in advance,
Santosh.
The cost estimated by the cost-based optimizer actually represents the time it takes to process the statement, expressed in units of single-block read time. This means that if you know the estimated time a single-block read request requires, you can translate the cost into an actual time.
Starting with Oracle 9i this information (the time to perform single block/multi block read requests) is actually available if you gather system statistics.
And this is what 10g actually does, as it shows an estimated TIME in the explain plan output based on these assumptions. Note that 10g by default uses system statistics, even if they are not explicitly gathered. In this case Oracle 10g uses the NOWORKLOAD statistics generated on the fly at instance startup.
Of course the time estimates shown by Oracle 10g may not even be close to the actual execution time as it is only an estimate based on a model and input values (statistics) and therefore might be way off due to several reasons, the same applies in principle to the cost shown.
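As an illustration of where that estimate surfaces: on 10g, the plan output from DBMS_XPLAN includes a Time column derived from those system statistics. The query below is just a placeholder; substitute your own statement.

```sql
-- Explain the statement without executing it, then display the plan.
-- On 10g the output includes a Time column with the optimizer's estimate.
EXPLAIN PLAN FOR
  SELECT * FROM mytable WHERE some_col = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```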
Regards,
Randolf
Oracle related stuff:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Identifying query execution time
Hello,
I would like to know how I can figure out the actual execution time of a query in Oracle.
Regards
Oracle Documentation is your best friend.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2113.htm#i1417057
ELAPSED_TIME --> Elapsed time (in microseconds) used by this cursor for parsing, executing, and fetching
Asif Momen
http://momendba.blogspot.com -
How to know query execution time in sql plus
Hi,
I want to know the query execution time in SQL*Plus, along with statistics.
I say:
set timing on;
set autotrace on;
select * from view where usr_id='abcd';
If the result is 300 rows, it scrolls until all the rows are retrieved and finally gives me an execution time of 40 seconds or 1 minute (this is after all the records have scrolled).
But when I execute it in TOAD it reports 350 milliseconds.
I want to see the execution time in SQL*Plus; how do I do this?
The database server is 11g and the client is 10g.
Regards,
Raj
What is the difference between the statistics gathered in SQL*Plus (something like the below) and the ones I get from PLAN_TABLE in TOAD?
How do I format the execution plan I got in SQL*Plus in a properly readable way?
Statistics in SQL*Plus:
Statistics
0 recursive calls
0 db block gets
164 consistent gets
0 physical reads
0 redo size
29805 bytes sent via SQL*Net to client
838 bytes received via SQL*Net from client
25 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
352 rows processed
Execution plan in SQL*Plus - how do I format this?
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=21 Card=1 Bytes=1003)
1 0 HASH (UNIQUE) (Cost=21 Card=1 Bytes=1003)
2 1 MERGE JOIN (CARTESIAN) (Cost=20 Card=1 Bytes=1003)
3 2 NESTED LOOPS
4 3 NESTED LOOPS (Cost=18 Card=1 Bytes=976)
5 4 NESTED LOOPS (Cost=17 Card=1 Bytes=797)
6 5 NESTED LOOPS (OUTER) (Cost=16 Card=1 Bytes=685)
7 6 NESTED LOOPS (OUTER) (Cost=15 Card=1 Bytes=556)
8 7 NESTED LOOPS (Cost=14 Card=1 Bytes=427)
9 8 NESTED LOOPS (Cost=5 Card=1 Bytes=284)
10 9 TABLE ACCESS (BY INDEX ROWID) OF 'USR_XREF' (TABLE) (Cost=4 Card=1 Bytes=67)
11 10 INDEX (RANGE SCAN) OF 'USR_XREF_PK' (INDEX (UNIQUE)) (Cost=2 Card=1)
12 9 TABLE ACCESS (BY INDEX ROWID) OF 'USR_DIM' (TABLE) (Cost=1 Card=1 Bytes=217)
13 12 INDEX (UNIQUE SCAN) OF 'USR_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
14 8 TABLE ACCESS (BY INDEX ROWID) OF 'HDS_FCT' (TABLE) (Cost=9 Card=1 Bytes=143)
15 14 INDEX (RANGE SCAN) OF 'HDS_FCT_IX2' (INDEX) (Cost=1 Card=338)
16 7 TABLE ACCESS (BY INDEX ROWID) OF 'USR_MEDIA_COMM' (TABLE) (Cost=1 Card=1 Bytes=129)
17 16 INDEX (UNIQUE SCAN) OF 'USR_MEDIA_COMM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
18 6 TABLE ACCESS (BY INDEX ROWID) OF 'USR_MEDIA_COMM' (TABLE) (Cost=1 Card=1 Bytes=129)
19 18 INDEX (UNIQUE SCAN) OF 'USR_MEDIA_COMM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
20 5 TABLE ACCESS (BY INDEX ROWID) OF 'PROD_DIM' (TABLE) (Cost=1 Card=1 Bytes=112)
21 20 INDEX (UNIQUE SCAN) OF 'PROD_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
22 4 INDEX (UNIQUE SCAN) OF 'CUST_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
23 3 TABLE ACCESS (BY INDEX ROWID) OF 'CUST_DIM' (TABLE) (Cost=1 Card=1 Bytes=179)
24 2 BUFFER (SORT) (Cost=19 Card=22 Bytes=594)
25 24 INDEX (FAST FULL SCAN) OF 'PROD_DIM_AK1' (INDEX (UNIQUE)) (Cost=2 Card=22 Bytes=594)
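One way to get a readably formatted plan, instead of the raw autotrace listing, is DBMS_XPLAN. A sketch (the SELECT is a placeholder; substitute the real query, and note that ALLSTATS needs the statement to actually run with statistics gathering enabled):

```sql
-- Run the statement with plan statistics collection enabled, then ask
-- Oracle to format the plan of the last cursor executed in this session.
SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   some_view
WHERE  usr_id = 'abcd';

SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

This prints estimated and actual rows per step side by side, which answers the "how do I format this" question more usefully than the autotrace dump above.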