Insert select statement is taking ages
Sybase version: Adaptive Server Enterprise/12.5.4/EBF 15432 ESD#8/P/Sun_svr4/OS 5.8/ase1254/2105/64-bit/FBO/Sat Mar 22 14:38:37 2008
Hi guys,
I have a question about the performance of a statement that is very slow and I'd like to have your input.
I have the SQL statement below that is taking ages to execute and I can't find out how to improve it:
insert SST_TMP_M_TYPE select M_TYPE from MKT_OP_DBF M join TRN_HDR_DBF T on M.M_ORIGIN_NB=T.M_NB where T.M_LTI_NB=@Nson_lti
@Nson_lti has the same datatype as T.M_LTI_NB.
M.M_ORIGIN_NB and T.M_NB have the same datatype.
TRN_HDR_DBF has 1424951 rows and indexes on M_LTI_NB and M_NB
table MKT_OP_DBF has 870305 rows
table MKT_OP_DBF has an index on M_ORIGIN_NB column
Statistics for index: "MKT_OP_ND7" (nonclustered)
Index column list: "M_ORIGIN_NB"
Leaf count: 3087
Empty leaf page count: 0
Data page CR count: 410256.0000000000000000
Index page CR count: 566.0000000000000000
Data row CR count: 467979.0000000000000000
First extent leaf pages: 0
Leaf row size: 12.1161512343373872
Index height: 2
The representation of M_ORIGIN_NB is:
Statistics for column: "M_ORIGIN_NB"
Last update of column statistics: Mar 9 2015 10:48:57:420AM
Range cell density: 0.0000034460903826
Total density: 0.0053334921767125
Range selectivity: default used (0.33)
In between selectivity: default used (0.25)
Histogram for column: "M_ORIGIN_NB"
Column datatype: numeric(10,0)
Requested step count: 20
Actual step count: 20
Step Weight Value
1 0.00000000 < 0
2 0.07300889 = 0
3 0.05263098 <= 5025190
4 0.05263098 <= 9202496
5 0.05263098 <= 12664456
6 0.05263098 <= 13129478
7 0.05263098 <= 13698564
8 0.05263098 <= 14735554
9 0.05263098 <= 15168461
10 0.05263098 <= 15562067
11 0.05263098 <= 16452862
12 0.05263098 <= 16909265
13 0.05263212 <= 17251573
14 0.05263098 <= 18009609
15 0.05263098 <= 18207523
16 0.05263098 <= 18404113
17 0.05263098 <= 18588398
18 0.05263098 <= 18793585
19 0.05263098 <= 18998992
20 0.03226340 <= 19574408
If I look at the showplan, I can see the indexes on TRN_HDR_DBF are used but not the one on MKT_OP_DBF:
QUERY PLAN FOR STATEMENT 16 (at line 35).
STEP 1
The type of query is INSERT.
The update mode is direct.
FROM TABLE
MKT_OP_DBF
M
Nested iteration.
Table Scan.
Forward scan.
Positioning at start of table.
Using I/O Size 32 Kbytes for data pages.
With LRU Buffer Replacement Strategy for data pages.
FROM TABLE
TRN_HDR_DBF
T
Nested iteration.
Index : TRN_HDR_NDX_NB
Forward scan.
Positioning by key.
Keys are:
M_NB ASC
Using I/O Size 4 Kbytes for index leaf pages.
With LRU Buffer Replacement Strategy for index leaf pages.
Using I/O Size 4 Kbytes for data pages.
With LRU Buffer Replacement Strategy for data pages.
TO TABLE
SST_TMP_M_TYPE
Using I/O Size 4 Kbytes for data pages.
I was expecting the query to also use the index on MKT_OP_DBF.
Thanks for your advice
Simon
The total density number for the MKT_OP_DBF.M_ORIGIN_NB column doesn't look very good:
Range cell density: 0.0000034460903826
Total density: 0.0053334921767125
Notice the total density value is three orders of magnitude larger than the range cell density, which can indicate a largish number of duplicates. (NOTE: This wide difference between range cell and total density can be referred to as 'skew' - more on this later.)
Do some M_ORIGIN_NB values have a large number of duplicates? What does the following query return:
=====================
select top 30 M_ORIGIN_NB, count(*)
from MKT_OP_DBF
group by M_ORIGIN_NB
order by 2 desc, 1
=====================
The total density can be used to estimate the number of rows expected for a join (eg, TRN_HDR_DBF --> MKT_OP_DBF). The largish total density number, when thrown into the optimizer's calculations, may be causing the optimizer to think that the volume of *possible* joins will be more expensive than a join in the opposite direction (MKT_OP_DBF --> TRN_HDR_DBF) which in turn means (as Jeff's pointed out) that you end up table scanning MKT_OP_DBF (as the outer table) because of no SARGs.
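If you want to confirm that the TRN_HDR_DBF --> MKT_OP_DBF join order really is cheaper before touching the stats, one option (not from the original replies, but standard ASE syntax) is to force the join order for a test run with `set forceplan on`, which makes the optimizer join the tables in FROM-clause order. A sketch only; the query is rewritten so TRN_HDR_DBF comes first, and M_TYPE is assumed to come from MKT_OP_DBF as in the original:

```sql
-- Test sketch: force the FROM-clause join order (TRN_HDR_DBF as outer table)
set forceplan on
insert SST_TMP_M_TYPE
select M.M_TYPE
from TRN_HDR_DBF T
join MKT_OP_DBF M on M.M_ORIGIN_NB = T.M_NB
where T.M_LTI_NB = @Nson_lti
set forceplan off
```

If the forced plan runs fast, that confirms the join-order diagnosis and makes the stats route below worth pursuing; don't leave forceplan on in production code.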
From your description it sounds like you've got the necessary indexes to support a TRN_HDR_DBF --> MKT_OP_DBF join order. (Though it wouldn't hurt to see the complete output from sp_helpindex for both tables just to make sure we're on the same sheet of music.)
Without more details (eg, complete stats for both tables, sp_help output for both tables) it's hard to say much more - if you decide to post these, I'd recommend posting them as a *.txt attachment.
I'm assuming you *know* that a join from TRN_HDR_DBF --> MKT_OP_DBF should be much quicker than what you're currently seeing. If this is the case, I'd probably want to start with:
=====================
exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
go
exec sp_recompile MKT_OP_DBF
go
-- run your query again
=====================
By removing the skew from the total density (ie, set total density = range cell density = 0.00000344...) you're telling the optimizer that it can expect a much smaller number of joins for the join order of TRN_HDR_DBF --> MKT_OP_DBF ... and that may be enough for the optimizer to use TRN_HDR_DBF to drive the query.
NOTE: If sp_modifystats/REMOVE_SKEW_FROM_DENSITY provides the desired join order, keep in mind that you'll need to re-issue this command after each update stats command that modifies the stats on the M_ORIGIN_NB column. For example, modify your update stats maintenance job to issue sp_modifystats/REMOVE_SKEW_FROM_DENSITY for those special cases where you know it helps query performance.
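A minimal sketch of that maintenance pattern, assuming a plain `update statistics` job (adjust to whatever your existing job actually runs):

```sql
update statistics MKT_OP_DBF
go
-- Re-apply the skew removal, since update statistics recomputes total density
exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
go
-- Force cached plans for the table to be recompiled against the new stats
exec sp_recompile MKT_OP_DBF
go
```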
Similar Messages
-
Smart scan not working with Insert Select statements
We have observed that smart scan is not working with insert select statements, but works when the select statements are executed alone.
Can you please help us explain this behavior?

There is a specific Exadata forum - you would do better to post the question there: Exadata
I can't give you a definitive answer, but it's possible that this is simply a known limitation similar to the way that "Create table as select" won't run the select statement the same way as the basic select if it involves a distributed query.
Regards
Jonathan Lewis -
Insert select statement or insert in cursor
hi all,
I need a performance comparison of two approaches: inserting row by row in a cursor loop versus an insert ... select statement.
for example:
1. insert into TableA (ColumA,ColumnB) select A, B from TableB;
2. cursor my_cur is select A, B from TableB
for my_rec in my_cur loop
insert into TableA (ColumA,ColumnB) values (my_rec.A, my_rec.B);
end loop;
also "bulk collect into" can be used.
Which one has a better performance?
Thanks for your help,
kadriye

What's stopping you from making 100,000 rows of test data and trying it for yourself?
Edit: I was bored enough to do it myself.
Starting insert as select 22-JUL-08 11.43.19.544000000 +01:00
finished insert as select. 22-JUL-08 11.43.19.825000000 +01:00
starting cursor loop 22-JUL-08 11.43.21.497000000 +01:00
finished cursor loop 22-JUL-08 11.43.35.185000000 +01:00
The two second gap between the two is for the delete.
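For completeness, the "bulk collect into" variant mentioned in the question typically lands between the two timings above: faster than the row-by-row loop, still slower than a plain INSERT ... SELECT. A hedged sketch (table and column names taken from the example; the LIMIT of 10000 is an arbitrary assumption):

```sql
DECLARE
  -- Collection types matching the source columns
  TYPE t_a IS TABLE OF TableB.A%TYPE;
  TYPE t_b IS TABLE OF TableB.B%TYPE;
  l_a t_a;
  l_b t_b;
  CURSOR my_cur IS SELECT A, B FROM TableB;
BEGIN
  OPEN my_cur;
  LOOP
    FETCH my_cur BULK COLLECT INTO l_a, l_b LIMIT 10000;  -- batch fetch
    FORALL i IN 1 .. l_a.COUNT                            -- batch insert
      INSERT INTO TableA (ColumA, ColumnB) VALUES (l_a(i), l_b(i));
    EXIT WHEN my_cur%NOTFOUND;  -- check after processing the last partial batch
  END LOOP;
  CLOSE my_cur;
END;
/
```

The single INSERT ... SELECT is still usually fastest because it lets the SQL engine do everything in one statement with no context switching.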
Message was edited by:
Dave Hemming -
Select statement is taking a lot of time for the first execution?
Hi Experts,
I am facing the following issue. I am using one select statement to retrieve all the contracts from the table CACS_CTRTBU, restricted with FOR ALL ENTRIES.
if p_lt_zcacs[] is not initial.
SELECT
appl ctrtbu_id version gpart
busi_begin busi_end tech_begin tech_end
flg_cancel_obj flg_cancel_vers int_title
FROM cacs_ctrtbu INTO TABLE lt_cacs FOR ALL ENTRIES IN p_lt_zcacs
WHERE
appl EQ gv_appl
AND ctrtbu_id EQ p_lt_zcacs-ctrtbu_id
AND ( flg_cancel_vers EQ '' OR version EQ '000000' )
AND flg_cancel_obj EQ ''
AND busi_begin LE p_busbegin
AND busi_end GT p_busbegin.
endif.
The WHERE condition is in order with the available Index. The index has APPL,CTRTBU_ID,FLG_CANCEL_VERS and FLG_CANCEL_OBJ.
The technical settings of table CACS_CTRTBU says that the "Buffering is not allowed"
Now the problem is, for the first execution of this select statement, with 1.5 lakh (150,000) entries in the P_LT_ZCACS table, the select statement takes 3 minutes.
If I execute this select statement again in another run, with exactly the same parameter values and number of entries in P_LT_ZCACS (i.e. 1.5 lakh entries), it executes in 3-4 seconds.
What can be the issue in this case? Why does the first execution take longer? Or is there any way to modify the select statement to get better performance?
Thanks in advance
Sreejith A P

Hi,
>
sree jith wrote:
> What can be the issue in this case? Why first execution takes longer time?..
> Sreejith A P
Sounds like caching or buffering in some layer down the I/O stack. Your first execution
seems to do the "physical I/O", whereas your following executions can use the caches/buffers
that are filled by your first execution.
>
sree jith wrote:
> Or is there any way to modify the Select statemnt to get better performance.
> Sreejith A P
Whether modifying your SELECT statement or your indexes could help depends on your access details:
does your internal table P_LT_ZCACS contain duplicates?
what do your indexes look like?
what does your execution plan look like?
what are your execution figures in ST05 - Statement Summary?
(nr. of executions, records in total, total time, time per execution, records per execution, time per record, ...)
Kind regards,
Hermann -
Secondary Index Select Statement Problem
Hi friends.
I have an issue with a select statement using a secondary index:
SELECT SINGLE * FROM VEKP WHERE VEGR4 EQ STAGE_DOCK
AND VEGR5 NE SPACE
AND WERKS EQ PLANT
%_HINTS ORACLE
'INDEX("&TABLE&" "VEKP~Z3" "VEKP^Z3" "VEKP_____Z3")'.
The above statement is taking a long time to process.
When I check for that secondary index in the VEKP table, I can't see any DB index named vekp~z3, vekp^z3, or vekp____z3.
Also, sy-subrc is 4 after the select statement (even though rows matching the given where-condition values exist in VEKP).
My question is: why is my select statement taking a long time, and why is sy-subrc 4?
What happens if a secondary index which is not available in that DB table is given in a select statement?

Hi,
> One more question: is it possible to give more than one index name in a select statement?
yes you can:
read the documentation:
http://download.oracle.com/docs/cd/A97630_01/server.920/a96533/hintsref.htm#5156
index_hint:
This hint can optionally specify one or more indexes:
- If this hint specifies a single available index, then the optimizer performs
a scan on this index. The optimizer does not consider a full table scan or
a scan on another index on the table.
- If this hint specifies a list of available indexes, then the optimizer
considers the cost of a scan on each index in the list and then performs
the index scan with the lowest cost. The optimizer can also choose to
scan multiple indexes from this list and merge the results, if such an
access path has the lowest cost. The optimizer does not consider a full
table scan or a scan on an index not listed in the hint.
- If this hint specifies no indexes, then the optimizer considers the
cost of a scan on each available index on the table and then performs
the index scan with the lowest cost. The optimizer can also choose to
scan multiple indexes and merge the results, if such an access path
has the lowest cost. The optimizer does not consider a full table scan.
Kind regards,
Hermann -
Number of rows inserted is different in bulk insert using select statement
I am facing a problem in bulk insert using SELECT statement.
My sql statement is like below.
strQuery :='INSERT INTO TAB3
(SELECT t1.c1,t2.c2
FROM TAB1 t1, TAB2 t2
WHERE t1.c1 = t2.c1
AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
EXECUTE IMMEDIATE strQuery ;
These SQL statements are inside a procedure. And this procedure is called from C#.
The number of rows returned by the "SELECT" query is 70.
On the very first call of this procedure, the number of rows inserted using strQuery is *70*.
But on the next call (in the same transaction) of the procedure, the number of rows inserted is only *50*.
And if we keep calling this procedure, it will sometimes insert 70 rows, sometimes 50, etc. It is showing some inconsistency.
In my initial analysis I found that the default optimizer mode is "ALL_ROWS". When I changed the optimizer mode to "rule", the issue does not occur.
Has anybody faced this kind of issue?
Can anyone tell me what the reason for this issue would be, or suggest any other workaround for it?
I am using Oracle 10g R2 version.
Edited by: user13339527 on Jun 29, 2010 3:55 AM
Edited by: user13339527 on Jun 29, 2010 3:56 AM

You very likely have concurrent transactions on the database:
>
By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.
>
If you want to make sure that the same query always retrieves the same rows in a given transaction you need to use transaction isolation level serializable instead of read committed which is the default in Oracle.
Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
You can try to run your test with:
set transaction isolation level serializable;

If the problem is not solved, you need to search for possible Oracle bugs on My Oracle Support with keywords
like:
wrong results 10.2

Edited by: P. Forstmann on 29 Jun 2010 13:46 -
Hi,
I am trying to insert values using a select statement, but this is not working:
INSERT INTO contribution_temp_upgrade
(PRO_ID,
OBJECT_NAME,
DELIVERY_DATE,
MODULE_NAME,
INDUSTRY_CATERGORIZATION,
ADVANTAGES,
REUSE_DETAILS)
VALUES
SELECT
:P1_PROJECTS,
wwv_flow.g_f08(vRow),
wwv_flow.g_f09(vRow),
wwv_flow.g_f10(vRow),
wwv_flow.g_f11(vRow),
wwv_flow.g_f12(vRow),
wwv_flow.g_f13(vRow)
FROM DUAL;
Please let me know what I am missing.
Thanks
Sudhir

Try this:
INSERT INTO contribution_temp_upgrade
(PRO_ID,
OBJECT_NAME,
DELIVERY_DATE,
MODULE_NAME,
INDUSTRY_CATERGORIZATION,
ADVANTAGES,
REUSE_DETAILS)
SELECT
:P1_PROJECTS,
wwv_flow.g_f08(vRow),
wwv_flow.g_f09(vRow),
wwv_flow.g_f10(vRow),
wwv_flow.g_f11(vRow),
wwv_flow.g_f12(vRow),
wwv_flow.g_f13(vRow)
FROM DUAL;

Note: when you are supplying values via a SELECT statement, you should not specify the keyword "VALUES".
I assume you have already assigned a value to your bind variable :P1_PROJECTS and that the rest of the functions will return some value.
Regards,
Prazy -
How to insert variable value using select statement - Oracle function
Hi,
I have a function which inserts record on basis of some condition
INSERT INTO Case
Case_ID,
Case_Status,
Closure_Code,
Closure_Date
SELECT newCaseID,
caseStatus,
Closure_Code,
Closure_Date,
FROM Case
WHERE Case_ID = caseID
Now I want the new case status value in place of the caseStatus value in the select statement. I have a variable m_caseStatus and I want to use the value of this variable in the above select statement.
How can I do this?
thanks

Hi,
Do not select Case_Status in the inner select, so NULL will be inserted; then, after inserting, update the case status with m_caseStatus.
Regards. -
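A sketch of that suggestion, using the names from the question (newCaseID, caseID and m_caseStatus are assumed to be PL/SQL variables inside the function):

```sql
-- Insert with NULL in place of Case_Status...
INSERT INTO Case (Case_ID, Case_Status, Closure_Code, Closure_Date)
SELECT newCaseID, NULL, Closure_Code, Closure_Date
FROM   Case
WHERE  Case_ID = caseID;

-- ...then set the status on the new row from the variable.
UPDATE Case
SET    Case_Status = m_caseStatus
WHERE  Case_ID = newCaseID;
```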
Create table as select (CTAS) statement is taking a very long time.
Hi All,
One of my procedure run a create table as select statement every month.
Usually it finishes in 20 minutes for 6,172,063 records, and in 1 hour for 13,699,067.
But this time it is taking forever, even for just 38,076 records.
When I checked, all it is doing is using CPU. No I/O.
I did a count(*) using the same query and it brought back results fine.
BUT the CTAS keeps going on.
I'm using Oracle 10.2.0.4.
The main table temp_ip has 38,076 rows,
table nhs_opcs_hier has 26,769 rows,
and table nhs_icd10_hier has 49,551 rows.
Query is as follows:
create table analytic_hes.temp_ip_hier as
select b.*, (select nvl(max(hierarchy), 0)
from ref_hd.nhs_opcs_hier a
where fiscal_year = b.hd_spell_fiscal_year
and a.code in
(primary_PROCEDURE, secondary_procedure_1, secondary_procedure_2,
secondary_procedure_3, secondary_procedure_4, secondary_procedure_5,
secondary_procedure_6, secondary_procedure_7, secondary_procedure_8,
secondary_procedure_9, secondary_procedure_10,
secondary_procedure_11, secondary_procedure_12)) as hd_procedure_hierarchy,
(select nvl(max(hierarchy), 0) from ref_hd.nhs_icd10_hier a
where fiscal_year = b.hd_spell_fiscal_year
and a.code in
(primary_diagnosis, secondary_diagnosis_1,
secondary_diagnosis_2, secondary_diagnosis_3,
secondary_diagnosis_4, secondary_diagnosis_5,
secondary_diagnosis_6, secondary_diagnosis_7,
secondary_diagnosis_8, secondary_diagnosis_9,
secondary_diagnosis_10, secondary_diagnosis_11,
secondary_diagnosis_12, secondary_diagnosis_13,
secondary_diagnosis_14)) as hd_diagnosis_hierarchy
from analytic_hes.temp_ip b
Any help would be greatly appreciated

Hello
This is a bit of a wild card, I think, because it's going to require 14 full scans of the temp_ip table to unpivot the diagnosis and procedure codes, so it's likely this will run slower than the original. However, as this is a temporary table, I'm guessing you might have some control over its structure, or at least have the ability to scrap it and try something else. If you are able to alter the table structure, you could make the query much simpler and most likely much quicker.

I think you need to have a list of procedure codes for the fiscal year and a list of diagnosis codes for the fiscal year. I'm doing that through the big list of UNION ALL statements, but you may have a more efficient way to do it based on the core tables you're populating temp_ip from. Anyway, here it is (as far as I can tell this will do the same job):
WITH codes AS
( SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
primary_PROCEDURE procedure_code,
primary_diagnosis diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_1 procedure_code,
secondary_diagnosis_1 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_2 procedure_code ,
secondary_diagnosis_2 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_3 procedure_code,
secondary_diagnosis_3 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_4 procedure_code,
secondary_diagnosis_4 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_5 procedure_code,
secondary_diagnosis_5 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_6 procedure_code,
secondary_diagnosis_6 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_7 procedure_code,
secondary_diagnosis_7 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_8 procedure_code,
secondary_diagnosis_8 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_9 procedure_code,
secondary_diagnosis_9 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_10 procedure_code,
secondary_diagnosis_10 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_11 procedure_code,
secondary_diagnosis_11 diagnosis_code
FROM
temp_ip
UNION ALL
SELECT
bd.primary_key_column_s,
hd_spell_fiscal_year,
secondary_procedure_12 procedure_code,
secondary_diagnosis_12 diagnosis_code
FROM
temp_ip
), hd_procedure_hierarchy AS
( SELECT
NVL (MAX (a.hierarchy), 0) hd_procedure_hierarchy,
a.fiscal_year
FROM
ref_hd.nhs_opcs_hier a,
codes pc
WHERE
a.fiscal_year = pc.hd_spell_fiscal_year
AND
a.code = pc.procedure_code
GROUP BY
a.fiscal_year
),hd_diagnosis_hierarchy AS
( SELECT
NVL (MAX (a.hierarchy), 0) hd_diagnosis_hierarchy,
a.fiscal_year
FROM
ref_hd.nhs_icd10_hier a,
codes pc
WHERE
a.fiscal_year = pc.hd_spell_fiscal_year
AND
a.code = pc.diagnosis_code
GROUP BY
a.fiscal_year
)
SELECT b.*, a.hd_procedure_hierarchy, c.hd_diagnosis_hierarchy
FROM analytic_hes.temp_ip b
LEFT OUTER JOIN hd_procedure_hierarchy a
ON (a.fiscal_year = b.hd_spell_fiscal_year)
LEFT OUTER JOIN hd_diagnosis_hierarchy c
ON (c.fiscal_year = b.hd_spell_fiscal_year)

HTH
David -
Select statement takes more time immediately after an insert statement?
Hello,
I found below scenario
1. I have a table TABLE1 which has an index on the COL1 field. It has around 40 columns and 100,000 rows.
2. Whenever I insert 100,000 rows in bulk, changing the indexed key column, and then execute the SELECT statement in the same session, it takes around 3 minutes to complete.
3. However, if I open a new session and execute the same select statement, it returns in 2-3 seconds.
I didn't get anything in XPLAN.. :(
I feel the buffer cache is the cause of the time taken. Please let me know your opinion.
Thanks in Advance
Sach

Are you running the query in the other session after running it from the first?
Aman.... -
Same select statement taking more time
Hello all,
I have two select statements. Only the name of the table from which records are fetched is different.
1) select belnr posnr etenr into corresponding fields of table it_cdtemp2
from j_3avasso for all entries in it_cdtemp1
where belnr = it_cdtemp1-vbeln and posnr = it_cdtemp1-posnr .
it_cdtemp1 has 100 entries and j_3avasso has 20,000 entries
2) select belnr posnr etenr into corresponding fields of table it_cdtemp2
from j_3avap for all entries in it_cdtemp1
where belnr = it_cdtemp1-vbeln and posnr = it_cdtemp1-posnr .
it_cdtemp1 has 100 entries and j_3avap has 2,000 entries
Statement 1 executes in less than a minute, whereas statement 2 takes around 15 to 20 minutes.
Could anyone suggest why, and if so, how to minimize the run time?
Regards
Bala

Hi,
You can sort the internal table by VBELN and POSNR before using FOR ALL ENTRIES.
This will save a lot of processing time.
You can also try combining both selects into one join statement over both tables with the FOR ALL ENTRIES addition.
Regards,
Subhashini
Edited by: Subhashini K on Oct 8, 2009 2:58 PM -
Select statement taking much time.......
Hi,
IF NOT i601[] is initial.
select vbelv
posnv
vbeln
posnn
vbtyp_v
matnr
from vbfa into table ivbfa
FOR ALL ENTRIES IN i601
where vbeln = i601-mblnr and
posnn = i601-zeile2 and
vbtyp_v = 'J'.
select vbeln
matnr
werks
lgort
vgbel
vgpos
mwsbp
from vbrp into table ivbrp
FOR ALL ENTRIES IN ivbfa
where vgbel = ivbfa-vbelv and
vgpos = ivbfa-posnv and
vgtyp = 'J' and
werks IN werks.
CLEAR i601.
FREE i601.
ENDIF.
At the above highlighted select statement it is getting stuck, and I was not able to figure out the reason. There are no loops or anything, but still it does not move past the second select statement; it takes quite a long time. Can anyone here throw some light on this? By the way, none of the fields in the where clause of the 2nd select are primary keys.
Thanks,
K.Kiran.

Hi,
In the second table you are trying to extract the records without passing the primary key values...
Anyhow, you have the values vbeln and posnr in internal table i601, so pass those values to VBRP.
IF NOT i601[] is initial.
select vbelv
posnv
vbeln
posnn
vbtyp_v
matnr
from vbfa into table ivbfa
FOR ALL ENTRIES IN i601
where vbeln = i601-mblnr and
posnn = i601-zeile2 and
vbtyp_v = 'J'.
IF NOT ivbfa[] IS INITIAL.
select vbeln
matnr
werks
lgort
vgbel
vgpos
mwsbp
from vbrp into table ivbrp
FOR ALL ENTRIES IN ivbfa
where vbeln = ivbfa-vbeln AND
posnr = ivbfa-posnv AND
vgbel = ivbfa-vbelv and
vgpos = ivbfa-posnv and
vgtyp = 'J' and
werks IN werks.
ENDIF.
CLEAR i601.
FREE i601.
ENDIF.
Now check your program.
Pls. reward if useful..... -
SQL insert with select statement having strange results
So I have the below sql (edited a bit). Now here's the problem.
I can run the select statement just fine and I get 48 rows back. When I run it with the insert statement, a total of 9062 rows are inserted. What gives?
<SQL>
INSERT INTO mars_aes_data
(rpt_id, shpdt, blno, stt, shpr_nad, branch_tableS, csgn_nad,
csgnnm1, foreign_code, pnt_des, des, eccn_no, entity_no,
odtc_cert_ind, dep_date, equipment_no, haz_flag, schd_no,
schd_desc, rec_value, iso_ulti_dest, odtc_exempt, itn,
liscence_no, liscence_flag, liscence_code, mblno, mot,
cntry_load, pnt_load, origin_state, airline_prefix, qty1, qty2,
ref_val, related, routed_flag, scac, odtc_indicator, seal_no,
line_no, port_export, port_unlading, shipnum, shprnm1, veh_title,
total_value, odtc_cat_code, unit1, unit2)
SELECT 49, schemaP.tableS.shpdt, schemaP.tableS.blno,
schemaP.tableS.stt, schemaP.tableS.shpr_nad,
schemaP.tableM.branch_tableS, schemaP.tableS.csgn_nad,
schemaP.tableS.csgnnm1, schemaP.tableD.foreign_code,
schemaP.tableS.pnt_des, schemaP.tableS.des,
schemaP.tableD.eccn_no, schemaP.tableN.entity_no,
schemaP.tableD.odtc_cert_ind, schemaP.tableM.dep_date,
schemaP.tableM.equipment_no, schemaP.tableM.haz_flag,
schemaP.tableD.schd_no, schemaP.tableD.schd_desc,
schemaP.tableD.rec_value,
schemaP.tableM.iso_ulti_dest,
schemaP.tableD.odtc_exempt, schemaP.tableM.itn,
schemaP.tableD.liscence_no,
schemaP.tableM.liscence_flag,
schemaP.tableD.liscence_code, schemaP.tableS.mblno,
schemaP.tableM.mot, schemaP.tableS.cntry_load,
schemaP.tableS.pnt_load, schemaP.tableM.origin_state,
schemaP.tableM.airline_prefix, schemaP.tableD.qty1,
schemaP.tableD.qty2,
schemaC.func_getRefs@link (schemaP.tableS.ptt, 'ZYX'),
schemaP.tableM.related, schemaP.tableM.routed_flag,
schemaP.tableM.scac, schemaP.tableD.odtc_indicator,
schemaP.tableM.seal_no, schemaP.tableD.line_no,
schemaP.tableM.port_export,
schemaP.tableM.port_unlading, schemaP.tableS.shipnum,
schemaP.tableS.shprnm1, schemaP.tableV.veh_title,
schemaP.tableM.total_value,
schemaP.tableD.odtc_cat_code, schemaP.tableD.unit1,
schemaP.tableD.unit2
FROM schemaP.tableD@link,
schemaP.tableM@link,
schemaP.tableN@link,
schemaP.tableS@link,
schemaP.tableV@link
WHERE tableM.answer IN ('123', '456')
AND SUBSTR (tableS.area, 1, 1) IN ('A', 'S')
AND entity_no IN
('A',
'B',
'C',
'D',
'E')
AND TO_DATE (SUBSTR (tableM.time_stamp, 1, 8), 'YYYYMMDD')
BETWEEN '01-Mar-2009'
AND '31-Mar-2009'
AND tableN.shipment= tableD.shipment(+)
AND tableN.shipment= tableS.shipnum
AND tableN.shipment= tableM.shipment(+)
AND tableN.shipment= tableV.shipment(+)
<SQL>
Edited by: user11263048 on Jun 12, 2009 7:23 AM
Edited by: user11263048 on Jun 12, 2009 7:27 AM

Can you change this:
BETWEEN '01-Mar-2009'
AND '31-Mar-2009'

To this:
BETWEEN TO_DATE('01-Mar-2009', 'DD-MON-YYYY')
AND TO_DATE('31-Mar-2009','DD-MON-YYYY')

That may make no difference, but you should never rely on implicit conversions like that; they're always likely to cause you nasty surprises.
If you're still getting the discrepancy, instead of an INSERT-SELECT, can you try a CREATE TABLE AS SELECT... just to see if you get the same result.
Behaviour of insert ... select statement
Hi,
If we use insert ... select statement like
insert into TableA
select *
from TableB;
and if TableB have 250000 rows then what will be the action for it..
will all 250,000 rows be fetched into the database buffer, will index scans be performed on it, and will all rows then be inserted into TableA,
or
will it do it in parallel?
We are loading large data volumes and facing performance problems from slow index scans.
Just curious :)
Rushang

It's not a secret: Oracle will perform the select, using indexes if it decides to (this depends on source table size, stats, optimizer mode, etc.). Rows may be pulled back into memory or written to temp space (e.g., if you were returning many, many rows which needed to be grouped). The rows will then be inserted.
So, if you have an index on tableb.col1, then Oracle may use the index, or it may do a FTS. Either way, it will only select the needed rows to be inserted.
The insert does not prevent the select from working as it would normally.
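On the "will it do it in parallel" part of the question: a plain INSERT ... SELECT runs serially by default. A hedged sketch of how large loads are often parallelized in Oracle (assumes an edition and configuration where parallel DML is available, and that TableA can accept direct-path inserts):

```sql
-- Parallel DML is disabled per session by default; enable it first
ALTER SESSION ENABLE PARALLEL DML;

-- APPEND requests a direct-path insert; the PARALLEL hints parallelize
-- both the insert side and the select side.
INSERT /*+ APPEND PARALLEL(a) */ INTO TableA a
SELECT /*+ PARALLEL(b) */ *
FROM TableB b;

-- A direct-path insert must be committed before the session can read the table
COMMIT;
```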
Only select statements work (update/insert hang)
Hi, I am running CF MX version 7,0,2,142559 Standard Edition
and ColdFusion is hanging every time I attempt an insert or update
statement against Oracle 8i and 9i using the jdbc thin driver and an
odbc socket driver.
Select statements work fine. I have tried everything I could
think of and I get the same results. All rights are given to the
datasource and the user. I can do insert and update statement via
another application (Toad) with the same Oracle user.
Any suggestions??? I don't see any hot fixes for this but
that doesn't mean one doesn't exist.
Also, many times it causes the system CPU utilization to stick
at 100% until I restart ColdFusion.
Thanks for any help.

Hi,
I had similar results on Oracle 10G while using cfmx 7.02. I
actually updated the macromedia_drivers.jar from the coldfusion
support site.
http://www.adobe.com/cfusion/knowledgebase/index.cfm?id=42dcb10a
An update to the datadirect JDBC drivers. Try that. If not,
make sure you have the latest JDBC drivers from Oracle, since
previous versions would make the updates/inserts hang.