Select query — want unique time
Please go through the details below.
I have one table:
File Id, Date, Time
00001 10/12/2010 10:10
00001 10/12/2010 10:10
00001 10/12/2010 10:10
00001 10/12/2010 10:11
00002 10/12/2010 10:10
00002 10/12/2010 10:10
I want unique FileId, Date and Time in the select query only.
If I apply GROUP BY on File Id, Date, Time, then I want to increment the time by one minute for duplicates.
New column
00001 10/12/2010 10:10 10:10
00001 10/12/2010 10:10 10:12 <-- We cannot set 10:11 because it already exists in the table.
00001 10/12/2010 10:10 10:13
00001 10/12/2010 10:11 10:11
In short, I would like to select FileId, Date, Time, and (if it is a duplicate entry, time +1 to generate uniqueness).
I have tried to do this myself, but I face this problem.
My query to fetch all the duplicate rows produced:
time+1
00001 10/12/2010 10:10 10:10
00001 10/12/2010 10:10 10:11 <-- But this entry again becomes a duplicate because it already exists in the table.
00001 10/12/2010 10:10 10:12
Thanks in advance.
I want the time as it is; that means 23:11 should not be converted to 11:11:00 PM.
That has nothing to do with the query.
Just adjust your NLS_DATE_FORMAT:
SQL> alter session set nls_date_format='dd/mm/yyyy hh:mi:ss PM';
Session altered.
SQL> with t as(
       select '00001' id, to_date('20101012 23:10','yyyymmdd hh24:mi') val from dual union all
       select '00001', to_date('20101012 23:10','yyyymmdd hh24:mi') from dual union all
       select '00001', to_date('20101012 23:59','yyyymmdd hh24:mi') from dual union all
       select '00001', to_date('20101012 23:59','yyyymmdd hh24:mi') from dual union all
       select '00002', to_date('20101012 10:10','yyyymmdd hh24:mi') from dual union all
       select '00002', to_date('20101012 10:10','yyyymmdd hh24:mi') from dual)
     select id, val
       from t
      model
        partition by (id)
        dimension by (row_number() over (partition by id order by val) as rn)
        measures (val)
        rules (val[rn > 1] order by rn
                 = greatest(val[cv()-1] + interval '1' minute,
                            val[cv()]));
ID VAL
00001 12/10/2010 11:10:00 PM
00001 12/10/2010 11:11:00 PM
00001 12/10/2010 11:59:00 PM
00001 13/10/2010 12:00:00 AM
00002 12/10/2010 10:10:00 AM
00002 12/10/2010 10:11:00 AM
6 rows selected.
SQL> alter session set nls_date_format='dd/mm/yyyy hh24:mi';
Session altered.
SQL> with t as(
       select '00001' id, to_date('20101012 23:10','yyyymmdd hh24:mi') val from dual union all
       select '00001', to_date('20101012 23:10','yyyymmdd hh24:mi') from dual union all
       select '00001', to_date('20101012 23:59','yyyymmdd hh24:mi') from dual union all
       select '00001', to_date('20101012 23:59','yyyymmdd hh24:mi') from dual union all
       select '00002', to_date('20101012 10:10','yyyymmdd hh24:mi') from dual union all
       select '00002', to_date('20101012 10:10','yyyymmdd hh24:mi') from dual)
     select id, val
       from t
      model
        partition by (id)
        dimension by (row_number() over (partition by id order by val) as rn)
        measures (val)
        rules (val[rn > 1] order by rn
                 = greatest(val[cv()-1] + interval '1' minute,
                            val[cv()]));
ID VAL
00001 12/10/2010 23:10
00001 12/10/2010 23:11
00001 12/10/2010 23:59
00001 13/10/2010 00:00
00002 12/10/2010 10:10
00002 12/10/2010 10:11
6 rows selected.
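Outside SQL, the effect of that MODEL rule can be paraphrased as: walk each file id's times in order and bump every value to at least one minute past its predecessor, i.e. greatest(prev + interval '1' minute, cur). A minimal illustrative sketch in Python (not the document's SQL, names chosen for the example):

```python
from datetime import datetime, timedelta

def make_unique(times):
    """Return the datetimes in order, each bumped to at least one minute
    past the previous one — mirroring greatest(prev + 1 min, cur)."""
    out = []
    for t in sorted(times):
        if out:
            t = max(out[-1] + timedelta(minutes=1), t)
        out.append(t)
    return out

# The '00001' rows from the example above
rows = [datetime(2010, 10, 12, 23, 10), datetime(2010, 10, 12, 23, 10),
        datetime(2010, 10, 12, 23, 59), datetime(2010, 10, 12, 23, 59)]
print(make_unique(rows))
```

Note how the second 23:59 rolls over to 00:00 of the next day, exactly as in the SQL*Plus output above.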
Similar Messages
-
SELECT query takes too much time! Why?
Please find my SELECT query below:
select w~mandt
       w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
       w~kwmeng w~vrkme w~matwa w~charg w~pstyv
       w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
       w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
       w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
       w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
       w~bedae w~cuobj w~mtvfp
       x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
       x~edatu
       x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
       x~ezeit
into table t_vbap
from vbap as w
inner join vbep as x on x~vbeln = w~vbeln and
                        x~posnr = w~posnr and
                        x~mandt = w~mandt
where
( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
( ( ( erdat > pre_dat and erdat < p_syndt ) or
( erdat = p_syndt and erzet <= p_syntm ) ) ) and
w~matnr in s_matnr and
w~pstyv in s_itmcat and
w~lfrel in s_lfrel and
w~abgru = ' ' and
w~kwmeng > 0 and
w~mtvfp in w_mtvfp and
x~ettyp in w_ettyp and
x~bdart in s_req_tp and
x~plart in s_pln_tp and
x~etart in s_etart and
x~abart in s_abart and
( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
The problem: it takes too much time while executing this statement.
Could anybody change this statement and help me out to reduce the DB access time?
Thx
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one internal table
Selection Criteria
1. Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using a CHECK statement.
2. Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
2. Select all the records in a single shot using the INTO TABLE clause of the select statement, rather than using APPEND statements.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
4. For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use Select Single if all primary key fields are supplied in the Where condition.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves the performance considerably.
Bypassing the buffer increases the network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
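As a rough illustration of that splitting behaviour (a Python sketch, not actual kernel code; the blocking-factor value is just an example, and the duplicate removal mirrors what the database interface does):

```python
def split_for_all_entries(driver_keys, blocking_factor):
    """Mimic how FOR ALL ENTRIES is translated: the deduplicated driver
    keys are cut into chunks, and one SQL statement with an OR/IN list
    is generated per chunk."""
    keys = sorted(set(driver_keys))  # duplicates removed first
    return [keys[i:i + blocking_factor]
            for i in range(0, len(keys), blocking_factor)]

# 12 distinct keys with an assumed blocking factor of 5 -> 3 statements
chunks = split_for_all_entries(range(12), 5)
print(len(chunks))
```

This is why a large, unsorted driver table full of duplicates is expensive: every duplicate inflates the generated WHERE clauses unless it is removed first, which is exactly what the checklist below recommends.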
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points that must be considered for FOR ALL ENTRIES:
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements Select Over more than one Internal table
1. It's better to use a view instead of nested Select statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from the view DD01V:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
2. To read data from several logically connected tables, use a join instead of nested Select statements. Joins are preferred only if all the primary keys are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
Internal Tables
1. Table operations should be done using explicit work areas rather than via header lines.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than using
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
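That O( n ) versus O( log2( n ) ) difference can be sketched outside ABAP, for instance with Python's standard bisect module (illustrative only; the table here is just a sorted list of even keys):

```python
import bisect

table = list(range(0, 1_000_000, 2))   # sorted "internal table" of even keys

def linear_read(itab, key):
    for i, k in enumerate(itab):       # O(n): scans until the key is found
        if k == key:
            return i
    return -1

def binary_read(itab, key):
    i = bisect.bisect_left(itab, key)  # O(log n): halves the range each step
    return i if i < len(itab) and itab[i] == key else -1

print(linear_read(table, 123456) == binary_read(table, 123456))
```

Both reads return the same index; the binary variant simply gets there in roughly 20 comparisons instead of tens of thousands, which is why sorting the table first pays off.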
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than using
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
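COLLECT's hash-based accumulation can be sketched in Python with a plain dict (illustrative only; the key/value field layout is made up for the example):

```python
def collect(rows):
    """Accumulate VAL1/VAL2 per key K — constant-time per row, like COLLECT."""
    acc = {}
    for k, v1, v2 in rows:
        old1, old2 = acc.get(k, (0, 0))
        acc[k] = (old1 + v1, old2 + v2)   # hash lookup, no search needed
    return acc

itab1 = [('A', 1, 10), ('B', 2, 20), ('A', 3, 30)]
print(collect(itab1))   # {'A': (4, 40), 'B': (2, 20)}
```

Each row costs one hash lookup regardless of how many keys have been collected so far, whereas the READ ... BINARY SEARCH variant pays O( log2( n ) ) per row.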
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copy internal tables using ITAB2[] = ITAB1[] instead of LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program run faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
Select query taking too much time to fetch data from pool table a005
Dear all,
I am using two pool tables, a005 and a006, in my program, and a select query to fetch data from these tables. An example is mentioned below.
select * from a005 into table t_a005 for all entries in it_itab
where vkorg in s_vkorg
and matnr in s_matnr
and aplp in s_aplp
and kmunh = it_itab-kmunh.
Here I can't create an index either, as the tables are pool tables. If there is any solution, please help me with the same.
Thanks
It would be helpful to know what other fields are in the internal table you are using for the FOR ALL ENTRIES.
In general, you should code the fields in your select in the same order as they appear in the database. If you do not supply the top key field, then the entire table is read; if it's large, that is going to take a lot of time. The more key fields from the beginning of the key that you can supply, the faster the retrieval.
Regards,
Brent -
BSAD table is taking more time in a select query.
Hi,
The below SELECT query is taking more time; there is no secondary index on the table.
Can anybody suggest how to improve it?
SELECT bukrs
kunnr
augdt
augbl
gjahr
belnr
budat
bldat
waers
xblnr
BLART
monat
shkzg
gsber
DMBTR
WRBTR
prctr
FROM BSAD INTO TABLE gt_bsad
WHERE bukrs = p_bukrs
AND kunnr IN so_kunnr
AND budat IN so_budat
AND xblnr IN so_xblnr
AND ( blart EQ 'DA' OR
blart EQ 'DZ' OR
blart EQ 'ZP' OR "D03K904574
blart EQ 'KZ' OR "D03K904574
blart EQ 'DP' )
AND PRCTR IN R_PC.
Thanks in advance
Regards
chetan
Hi Chetan,
I will suggest you two things:
1. Try to add a secondary (non-unique) index on table BSAD with fields mandt, bukrs, kunnr, budat, xblnr, blart, prctr,
but before adding this index, test its selectivity by going to transaction DB05.
2. In the select query you have used an OR condition for blart. Instead, try to create a ranges table for blart, append the values 'DA', 'DZ', 'ZP', 'KZ' and 'DP', and use it in the select query. This will improve the performance for sure.
Hope this helps you.
Regards,
Nikhil -
Select query takes a long time....
Hi Experts,
I am using a select query in which the inspection lot is in one table and the order no. is in another table. This select query is taking a very long time; what is the problem in this query? Please guide us.
select b~prueflos b~mblnr b~cpudt a~aufnr a~matnr a~lgort a~bwart
a~menge a~ummat a~sgtxt a~xauto
into corresponding fields of table itab
*into table itab
from mseg as a inner join qamb as b
on a~mblnr = b~mblnr
and a~zeile = b~zeile
where b~prueflos in insp
and b~cpudt in date1
and b~typ = '3'
and a~bwart = '321'
and a~aufnr in aufnr1.
Yusuf
Hi,
Instead of using INTO CORRESPONDING FIELDS OF TABLE itab, use INTO TABLE itab, because with CORRESPONDING FIELDS the system has to match all the appropriate fields before it can place your data. Instead, declare an appropriate internal table and use INTO TABLE itab.
And one more thing: don't use joins here, because joins can decrease your performance. Instead, use FOR ALL ENTRIES and mention all the key fields in the where condition.
ok
reward points for helpful answers -
Select query not working as expected
Hi Gurus,
Here is a select query which takes more time when I increase the search criteria. The table I_MKPF is not initial.
select mblnr mjahr zeile bwart matnr werks charg lifnr
shkzg bwtar menge meins ebeln aufnr bukrs prctr
from mseg into corresponding fields of table z_rec
for all entries in i_mkpf
where
mblnr eq i_mkpf-mblnr and
mjahr eq i_mkpf-mjahr and
bwart in s_bwart and
werks in s_werks and
bukrs in s_bukrs.
In the above query MBLNR and MJAHR are primary key fields in the same order as in the database.
The query fetches 2 records and takes around 20 minutes.
I did some more testing and found one peculiar issue.
The query takes a long time only when S_WERKS and/or S_BUKRS is entered on the selection screen.
If these two fields are left blank it works super quick.
I am baffled as to why this is happening. Should it not have been the other way around..?
Could you please take a look and tell me what the issue might be?
Thanks in Advance,
Imran
Hi Imran,
please check the SQL performance in transaction ST05 > "Explain one SQL request" for this slightly modified statement:
select mblnr, mjahr, zeile, bwart, matnr, werks, charg, lifnr,
shkzg, bwtar, menge, meins, ebeln, aufnr, bukrs, prctr
from mseg
where
mblnr = '1' and
mjahr = '2' and
bwart in ('1', '2') and
werks in ('1', '2') and
bukrs in ('1', '2')
or
mblnr = '1' and
mjahr = '2' and
bwart in ('1', '2') and
werks in ('1', '2') and
bukrs in ('1', '2')
or
mblnr = '1' and
mjahr = '2' and
bwart in ('1', '2') and
werks in ('1', '2') and
bukrs in ('1', '2')
or
mblnr = '1' and
mjahr = '2' and
bwart in ('1', '2') and
werks in ('1', '2') and
bukrs in ('1', '2')
Please answer back with the results. Best regards,
Alvaro -
I'm using SQL Server 2008 R2 (10.50.4033) and I'm troubleshooting an issue where a select query against a specific view consistently takes more than 30 seconds. The issue just started happening this week and there are no mass changes in data.
The problem only occurs if the query is issued from an IIS application, but not from SSMS. One thing I noticed is that sys.dm_exec_cached_plans returns two Parse Tree rows for the view - one created when the select query is issued for the first time from the IIS application, and another created when the same select query is issued for the first time from SSMS. The usecounts of the IIS Parse Tree row for the view increases whenever the select query is issued; the usecounts of the SSMS Parse Tree row does not increase when the select query is issued again.
There seems to be a correlation between the slowness of the query and the increasing usecounts of the Parse Tree row for the view.
I don't know why there are two Parse Tree rows for the view. There are also two Compiled Plan rows for the select query.
What does the Parse Tree row mean, especially the usecounts column?
>> The issue just started happening this week and there are no mass changes in data.
There might be mass changes in the execution plan for several reasons without mass changes in data.
If you have the old version and a way to check the old execution plan, then comparing it to the new one should be your starting point. In most cases you don't have this option and need to monitor from scratch.
>> The problem only occurs if the query is issued from an IIS application, but not from SSMS.
This means we know exactly what the difference is, and you can compare both execution plans. Once you do, you will find that they are not the same. This is a very common issue, and it is a result of different SET options when connecting from different applications. SSMS is an external app like any app that you develop in Visual Studio, but SSMS does not use the .NET default options.
Please check this link, to find the full explanation and solutions:
http://www.sommarskog.se/query-plan-mysteries.html
Take a look at sys.dm_exec_sessions for your ASP.Net application and for your SSMS session.
If you need more specific help, then we need more information and less stories :-)
We need to see the DDL+DML+Query and both execution plans
>> What does the Parse Tree row mean
I am not sure what you mean but the parse tree represents the logical steps necessary to execute the query that has been requested. you can check this tutorial about the execution plan: https://www.simple-talk.com/sql/performance/execution-plan-basics/ or
this one: http://www.developer.com/db/understanding-a-sql-server-query-execution-plan.html
>> regarding the usecount column or any other column check this link:
https://msdn.microsoft.com/en-us/library/ms187404.aspx?f=255&MSPPError=-2147217396.
Ronen Ariely
[Personal Site] [Blog] [Facebook] -
Oracle SQL select query takes longer than expected.
Hi,
I am facing a problem with an SQL select query statement: a select query from the database is taking a long time.
The query is as follows.
select /*+rule */ f1.id,f1.fdn,p1.attr_name,p1.attr_value from fdnmappingtable f1,parametertable p1 where p1.id = f1.id and ((f1.object_type ='ne_sub_type.780' )) and ( (f1.id in(select id from fdnmappingtable where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')))order by f1.id asc
This query is taking more than 4 seconds to get the results in a system where the DB is running for more than 1 month.
The same query takes very few milliseconds (50-100 ms) in a system where the DB is freshly installed, and the data in the tables is the same in both systems.
Kindly advise what is going wrong.
Regards,
Purushotham
SQL> @/alcatel/omc1/data/query.sql
2 ;
9 rows selected.
Execution Plan
Plan hash value: 3745571015
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT ORDER BY | |
| 2 | NESTED LOOPS | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS FULL | PARAMETERTABLE |
|* 5 | TABLE ACCESS BY INDEX ROWID| FDNMAPPINGTABLE |
|* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
|* 7 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
|* 8 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
Predicate Information (identified by operation id):
5 - filter("F1"."OBJECT_TYPE"='ne_sub_type.780')
6 - access("P1"."ID"="F1"."ID")
7 - filter("FDN" LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#
8 - access("F1"."ID"="ID")
Note
- rule based optimizer used (consider using cbo)
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
0 bytes sent via SQL*Net to client
0 bytes received via SQL*Net from client
0 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9 rows processed
SQL> -
Select query taking more time..
Hi friends..
The below inner join statement is taking more time; can anybody suggest how to improve the performance? I tried FOR ALL ENTRIES too, but that was also taking more time than the inner join statement.
SELECT a~vbeln from vbap as a inner join vakpa as b
on a~vbeln = b~vbeln
into corresponding fields of table IT_VAKPA
where a~WERKS IN S_IWERKS
and a~pstyv NE 'ZRS'
and b~vkorg = IVKORG
and b~audat IN IAUDAT
and b~vtweg IN IVTWEG.
Regards
Chetan
Hi Chetan,
VAKPA is an index table. From the select query it can be seen that you are not fetching any data from VAKPA; you have only added some selection parameters from it in the where clause of the select query.
My suggestion is that instead of joining VAKPA, you use VBAK along with VBAP. All the fields that you are using as selection conditions from VAKPA are there in VBAK.
I am sure the performance of the query will improve.
If, due to business logic, you still need to use VAKPA, try to create a secondary non-unique index on fields VKORG, AUDAT and VTWEG on table VAKPA.
However, I recommend you go for the first option; if that does not work, then go for the second option.
Hopefully this will help you.
Regards,
Nikhil -
Want to avoid a loop with FOR ALL ENTRIES in a select query!
Hi Gurus!
The following is my code. I want to avoid the loop to improve the performance of the program.
data: lt_cuhd type HASHED TABLE OF /sapsll/cuhd WITH UNIQUE key guid_cuhd,
ls_cuhd type /sapsll/cuhd.
data: lt_comments type STANDARD TABLE OF zss_comments,
ls_comments type zss_comments.
data: lv_objkey type string.
select * from /sapsll/cuhd into table lt_cuhd.
loop at lt_cuhd into ls_cuhd.
CONCATENATE ls_cuhd-corder '%' into lv_objkey. " Example 'Mum%'
select * from zss_comments into table lt_comments
where objkey like lv_objkey
AND guid_cuhd = ls_cuhd-guid_cuhd
AND event_id <> ''.
endloop.
I want the new code to be as follows, using FOR ALL ENTRIES with no loop required:
select * from zss_comments into table lt_comments
where objkey like lv_objkey
AND guid_cuhd = ls_cuhd-guid_cuhd
AND event_id <> ''.
Why don't you add the object key to lt_cuhd as well? Once you have fetched the data into lt_cuhd, loop over it and add the '%'. When looping, use field symbols so that you don't have to use MODIFY. Then use FOR ALL ENTRIES with lt_cuhd.
I don't think you can find a better way to add the % mark apart from looping, but this way only one select query will be done for zss_comments.
Thanks
Nafran
When I run the same query the second time it's faster; I want to reset this behavior
I am running a query in Oracle 11g: select A from B where C = ':D'. B has millions of records.
The first time I run it, it takes about 30 seconds; the second time, it takes about 1 second.
Obviously it's caching something and I want it to stop that; each time I run the query I want it to take 30s, just like it was running the first time.
The reason I want to reset this is because for testing purposes I want to measure this query at the very first time.
Please help.
Thanks & Best Regards,
Dark
user9359353 wrote:
I am running a query in Oracle 11g: select A from B where C = ':D'. B has millions of records.
The first time I run it, it takes about 30 seconds; the second time, it takes about 1 second.
Obviously it's caching something and I want it to stop that; each time I run the query I want it to take 30s, just like it was running the first time.
The reason I want to reset this is because for testing purposes I want to measure this query at the very first time.
Please help.
Thanks & Best Regards,
Dark
No - you do NOT want to do that. Not if you want to get results that really represent how that query will work in reality.
See these two AskTom blogs where he discusses the reasons for NOT doing this in detail.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7413988573867
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:311990400346061304 -
Time taken while firing a select query
I am unable to access the table "X" in the ABC schema.
It is taking too much time while firing just a select query.
How do I resolve it?
"SELECT digital_signing_cert
FROM X"
This could be due to a huge table size; crosscheck this from the user_segments view.
select bytes/1024/1024 "MB" from user_segments where segment_name='X';
Also check the database alert log for any errors. -
TIME-OUT error in BSAK select query(Progress Indicator is also used)
Hi,
In my report program one select query is there on BSAK table, which is as follows --
SELECT BUKRS
BELNR
GJAHR
SHKZG
BSCHL
UMSKZ
LIFNR
EBELN
EBELP
WRBTR
DMBTR
XZAHL
REBZG
AUGBL
BLART
AUFNR
AUGDT
BUZEI FROM BSAK
INTO TABLE IT_BSAK
FOR ALL ENTRIES IN IT_BKPF1
WHERE BUKRS = IT_BKPF1-BUKRS
AND AUGDT = IT_BKPF1-BUDAT
AND AUGBL = IT_BKPF1-BELNR
AND BSCHL IN ('31' , '29', '26', '39', '25').
I used a progress indicator before running this query and after this query. But it's still giving me a TIME-OUT error in this select query only.
I ran the same query for 10 records in the IT_BKPF1 table and it runs perfectly. But when I run it for 1000 records it gives a dump.
And in the actual business my records always number more than 100.
I also checked the indexing: there is a secondary index on the BUKRS, AUGDT and AUGBL fields. Even then it's giving the error.
So how can I solve this dump? What could be the reason?
Thanks in advance...!!
Regards,
Poonam.
Hi Poonam Patil,
Try to provide BELNR and GJAHR in the where condition:
BKPF-DBBLG ==> BSAK-BELNR
Also check:
BKPF-BLDAT ==> BSAK-AUGDT
Check out the relations above.
If data is present in these fields of the table and the values match, you can pass them; since they are part of the primary key of BSAK, this will improve the performance.
Hope it solves your problem.
Thanks & Regards
ilesh 24x7
ilesh Nandaniya -
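In addition to tightening the where condition, a common workaround for a TIME-OUT with a large driver table is to split it into packages so each FOR ALL ENTRIES round trip stays small (or to run the report in background, where the dialog time limit does not apply). A hedged sketch, reusing the field names from the original query; the package size of 500 and the internal table names are assumptions:

```abap
DATA: lt_bkpf_chunk LIKE it_bkpf1,
      lv_from       TYPE i VALUE 1,
      lv_lines      TYPE i.

lv_lines = lines( it_bkpf1 ).
WHILE lv_from <= lv_lines.
  CLEAR lt_bkpf_chunk.
  " APPEND LINES OF stops at the end of the table, so the last
  " (possibly short) package is handled automatically.
  APPEND LINES OF it_bkpf1 FROM lv_from TO lv_from + 499 TO lt_bkpf_chunk.

  " FOR ALL ENTRIES on an EMPTY table would select every row,
  " but the loop condition guarantees lt_bkpf_chunk is never empty here.
  SELECT bukrs belnr gjahr shkzg bschl umskz lifnr ebeln ebelp
         wrbtr dmbtr xzahl rebzg augbl blart aufnr augdt buzei
    FROM bsak
    APPENDING TABLE it_bsak
    FOR ALL ENTRIES IN lt_bkpf_chunk
    WHERE bukrs = lt_bkpf_chunk-bukrs
      AND augdt = lt_bkpf_chunk-budat
      AND augbl = lt_bkpf_chunk-belnr
      AND bschl IN ('31', '29', '26', '39', '25').

  lv_from = lv_from + 500.
ENDWHILE.

" FOR ALL ENTRIES de-duplicates within one SELECT, but not across
" packages, so remove duplicates once at the end.
SORT it_bsak.
DELETE ADJACENT DUPLICATES FROM it_bsak.
```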
Select Query taking long time to run second time
Hi All,
I have Oracle 11gR1 in windows server 2008 R2 .
I have some tables with 10 million records. When I run the select query on those tables the first time, it returns the result in 15 seconds, but when I run the same script a second time from the same session, it takes 15 minutes to complete.
Why is this happening? What could be the solution?
Thanks & Regards,
Vikash Jain (Junior DBA)
Hi Mohamed,
I just saw that the same query got a different execution plan on each run.
here are the details :
                 First time       Second time
SQL_ID           g84m3qqjv2p3q    g84m3qqjv2p3q
Plan hash value  2733045235       1310485984
So please tell me how I can force the database to use the first execution plan.
I got this script for forcing the DB to use the same execution plan:
accept sql_id -
prompt 'Enter value for sql_id: ' -
default 'X0X0X0X0'
accept plan_hash_value -
prompt 'Enter value for plan_hash_value: ' -
default 'X0X0X0X0'
accept fixed -
prompt 'Enter value for fixed (NO): ' -
default 'NO'
accept enabled -
prompt 'Enter value for enabled (YES): ' -
default 'YES'
accept plan_name -
prompt 'Enter value for plan_name (ID_sqlid_planhashvalue): ' -
default 'X0X0X0X0'
set feedback off
set sqlblanklines on
set serveroutput on
declare
l_plan_name varchar2(40);
l_old_plan_name varchar2(40);
l_sql_handle varchar2(40);
ret binary_integer;
l_sql_id varchar2(13);
l_plan_hash_value number;
l_fixed varchar2(3);
l_enabled varchar2(3);
major_release varchar2(3);
minor_release varchar2(3);
begin
select regexp_replace(version,'\..*'), regexp_substr(version,'[0-9]+',1,2) into major_release, minor_release from v$instance;
minor_release := 2;
l_sql_id := '&&sql_id';
l_plan_hash_value := to_number('&&plan_hash_value');
l_fixed := '&&fixed';
l_enabled := '&&enabled';
ret := dbms_spm.load_plans_from_cursor_cache(
sql_id=>l_sql_id,
plan_hash_value=>l_plan_hash_value,
fixed=>l_fixed,
enabled=>l_enabled);
if minor_release = '1' then
-- 11gR1 has a bug that prevents renaming Baselines
dbms_output.put_line(' ');
dbms_output.put_line('Baseline created.');
dbms_output.put_line(' ');
else
-- This statement looks for baselines created in the last 4 seconds
select sql_handle, plan_name,
decode('&&plan_name','X0X0X0X0','SQLID_'||'&&sql_id'||'_'||'&&plan_hash_value','&&plan_name')
into l_sql_handle, l_old_plan_name, l_plan_name
from dba_sql_plan_baselines spb
where created > sysdate-(1/24/60/15);
ret := dbms_spm.alter_sql_plan_baseline(
sql_handle=>l_sql_handle,
plan_name=>l_old_plan_name,
attribute_name=>'PLAN_NAME',
attribute_value=>l_plan_name);
dbms_output.put_line(' ');
dbms_output.put_line('Baseline '||upper(l_plan_name)||' created.');
dbms_output.put_line(' ');
end if;
end;
/
undef sql_id
undef plan_hash_value
undef plan_name
undef fixed
set feedback on
Output:
Enter value for sql_id: g84m3qqjv2p3q
Enter value for plan_hash_value: 2733045235
Enter value for fixed (NO):
Enter value for enabled (YES):
Enter value for plan_name (ID_sqlid_planhashvalue): g84m3qqjv2p3q
old 16: l_sql_id := '&&sql_id';
new 16: l_sql_id := 'g84m3qqjv2p3q';
old 17: l_plan_hash_value := to_number('&&plan_hash_value');
new 17: l_plan_hash_value := to_number('2733045235');
old 18: l_fixed := '&&fixed';
new 18: l_fixed := 'NO';
old 19: l_enabled := '&&enabled';
new 19: l_enabled := 'YES';
old 40: decode('&&plan_name','X0X0X0X0','SQLID_'||'&&sql_id'||'_'||'&&plan_hash_value','&&plan_name')
new 40: decode('g84m3qqjv2p3q','X0X0X0X0','SQLID_'||'g84m3qqjv2p3q'||'_'||'2733045235','g84m3qqjv2p3q')
declare
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 39
Kindly help me resolve this issue.
Thanks & Regards,
Vikash Jain(Junior DBA) -
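A note on the ORA-01403 above: the failing SELECT on dba_sql_plan_baselines only looks back about 4 seconds (sysdate - 1/24/60/15), so it finds no row whenever dbms_spm.load_plans_from_cursor_cache created nothing -- typically because the cursor for that sql_id / plan_hash_value had already aged out of the shared pool. A hedged diagnostic sketch (the sql_id is taken from the thread above; run these before re-running the script):

```sql
-- 1) Is the desired plan still in the cursor cache?
--    If this returns no rows, load_plans_from_cursor_cache has
--    nothing to load and will return 0.
SELECT sql_id, plan_hash_value, child_number
FROM   v$sql
WHERE  sql_id = 'g84m3qqjv2p3q';

-- 2) Did any baseline actually get created?
SELECT sql_handle, plan_name, enabled, accepted, created
FROM   dba_sql_plan_baselines
ORDER  BY created DESC;
```

If the cursor is gone, re-run the query once so it is parsed back into the shared pool, then run the baseline script again while the cursor is still cached.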
Hi All.
When I execute a select query on a view it takes about 00:00:45:12 to pull the data, but when I execute the same query on another system (a different database with the same table structure) it takes about 00:00:02:05.
1) I tried dropping and recreating the index, and then running exec dbms_stats.gather_table_stats, but still no luck.
Please help me understand the reason for the difference in response time.
Thanks
sankar
Did you run EXPLAIN PLAN?