Insert select
I have an INSERT statement which won't run:
insert into <table> (value1, value2)
select val1, val2 **
union
select ***
union
select ***
Is there something wrong with this syntax? Should I enclose the SELECT statements in parentheses? The INSERT does not seem to work.
I don't get an error. The query runs for a while and then ends. I know it doesn't work because there is nothing in the table after the INSERT.
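For reference, a UNION of SELECTs can feed an INSERT directly, with no extra parentheses around the individual branches, in most dialects. A minimal sqlite3 sketch of the shape being attempted, using hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (value1 TEXT, value2 TEXT)")

# A UNION of SELECTs feeds the INSERT directly; no parentheses are
# required around the individual branches.
conn.execute("""
    INSERT INTO target (value1, value2)
    SELECT 'a', 'b'
    UNION
    SELECT 'c', 'd'
    UNION
    SELECT 'a', 'b'   -- duplicate branch: UNION removes it
""")

rows = conn.execute("SELECT * FROM target ORDER BY value1").fetchall()
print(rows)  # [('a', 'b'), ('c', 'd')]
```

So the UNION chain itself is not the problem; the trouble usually lies elsewhere in the statement, as the actual code below shows.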
Here is the actual code
insert into final_ziplimit(plan_id, code, zip, radius, provider_count)
select network_hierarchy_id Plan_Id, category_id Code, zip, min(radius), min(Total)
from
select network_hierarchy_id, category_id, zip, radius, sum(prov_num) Total
from
with tree as
(select category_id, specialty_id, subspecialty_id, subsubspecialty_id
from category join specialty using (category_id)
join subspecialty using (specialty_id)
join subsubspecialty using (subspecialty_id)
nhzip as
(select network_hierarchy_id, zip, z.network_id, subsubspecialty_id, radius, prov_num
from network_hierarchy_detail nh, ziprange_loc_spec_join z
where nh.network_id = z.network_id
and z.zip between nh.beginning_zip_code and nh.ending_zip_code
select category_id, specialty_id, subspecialty_id, subsubspecialty_id, network_hierarchy_id, zip, network_id, radius, prov_num
from tree join nhzip using(subsubspecialty_id)
--order by network_hierarchy_id desc
group by network_hierarchy_id, category_id, zip, radius
having sum(prov_num) >= 300
group by network_hierarchy_id, category_id, zip
union
select network_hierarchy_id Plan_Id, specialty_id Code, zip, min(radius), min(Total)
from
select network_hierarchy_id , specialty_id , zip, radius, sum(prov_num) Total
from
with tree as
(select category_id, specialty_id, subspecialty_id, subsubspecialty_id
from category join specialty using (category_id)
join subspecialty using (specialty_id)
join subsubspecialty using (subspecialty_id)
nhzip as
(select network_hierarchy_id, zip, z.network_id, subsubspecialty_id, radius, prov_num
from network_hierarchy_detail nh, ziprange_loc_spec_join z
where nh.network_id = z.network_id
and z.zip between nh.beginning_zip_code and nh.ending_zip_code
select category_id, specialty_id, subspecialty_id, subsubspecialty_id, network_hierarchy_id, zip, network_id, radius, prov_num
from tree join nhzip using(subsubspecialty_id)
--order by network_hierarchy_id desc
group by network_hierarchy_id, specialty_id, zip, radius
having sum(prov_num) >= 300
group by network_hierarchy_id, specialty_id, zip
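As written, the statement above cannot parse in Oracle: each inner SELECT used as a data source must be a parenthesised inline view, the WITH clause must sit at the head of the subquery before the SELECT that uses it, and the tree/nhzip CTEs appear to be missing their closing parentheses. A minimal sqlite3 sketch of the corrected shape, on a hypothetical cut-down schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE detail (plan_id INT, zip TEXT, radius INT, prov_num INT);
    CREATE TABLE final_ziplimit (plan_id INT, zip TEXT, radius INT, provider_count INT);
    INSERT INTO detail VALUES (1,'10001',5,200),(1,'10001',5,150),(1,'10001',10,400);
""")

# The aggregate feeding the outer query is a parenthesised, aliased
# inline view -- the piece the original statement is missing.
conn.execute("""
    INSERT INTO final_ziplimit (plan_id, zip, radius, provider_count)
    SELECT plan_id, zip, MIN(radius), MIN(total)
    FROM (
        SELECT plan_id, zip, radius, SUM(prov_num) AS total
        FROM detail
        GROUP BY plan_id, zip, radius
        HAVING SUM(prov_num) >= 300
    ) AS per_radius
    GROUP BY plan_id, zip
""")

result = conn.execute("SELECT * FROM final_ziplimit").fetchall()
print(result)  # [(1, '10001', 5, 350)]
```

In Oracle, the WITH clauses for tree and nhzip would go at the head of each UNIONed subquery, before its first SELECT, with each CTE closed by its own parenthesis and separated by commas.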
Similar Messages
-
Poor performance and high number of gets on seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert / select which I'm struggling to explain: why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25
So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
Hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The Tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
regards
Ivan -
How to insert select columns from one internal table to another
Hi,
How do I copy selected columns from one internal table to another based on a condition, as we do from a standard table to an internal table?
regards,
Sriram
Hi,
If your question is about copying data from one internal table to another:
we can use
APPEND LINES OF it_1 TO it_2.
or, if they have different columns, then:
LOOP AT it_1 INTO wa_it1.
  MOVE wa_it1-data TO wa_it2-d1.
  APPEND wa_it2 TO it_2.
  CLEAR wa_it2.
ENDLOOP.
Thanks -
Smart scan not working with Insert Select statements
We have observed that smart scan is not working with insert select statements, but works when the select statements are executed alone.
Can you please help us to explain this behavior?
There is a specific Exadata forum - you would do better to post the question there: Exadata
I can't give you a definitive answer, but it's possible that this is simply a known limitation similar to the way that "Create table as select" won't run the select statement the same way as the basic select if it involves a distributed query.
Regards
Jonathan Lewis -
How to Insert-Select and Updade base table
Hi All,
How is the best way to do this?
I have a table A that has 1million rows.
I need to Insert-Select into table B from a Group by on table A,
and Update all rows in table A that were used into Insert with a flag.
(select a hundred from A and insert 10 thousands into B)
What to do?
1-Update table A with flag before Insert-Select?
2-Insert-Select group by into table B, before Update?
3-Another way?
Thanks in advance,
Edson
Either way. But you may find that updating the source flag first and then using that flag as part of your where clause when extracting rows to put in the destination table is a bit faster.
In any case, you will commit only once after all of the work is done. -
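That flag-first ordering can be sketched with sqlite3 on a hypothetical toy schema: flag the source rows, reuse the flag as the extraction predicate, and commit once at the end:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (grp TEXT, amount INT, moved_flag INT DEFAULT 0);
    CREATE TABLE b (grp TEXT, total INT);
    INSERT INTO a (grp, amount) VALUES ('x', 1), ('x', 2), ('y', 5);
""")

# 1) Flag the source rows first ...
conn.execute("UPDATE a SET moved_flag = 1 WHERE grp IN ('x', 'y')")

# 2) ... then the same flag is the extraction predicate, so the
#    UPDATE and the INSERT cannot drift out of sync.
conn.execute("""
    INSERT INTO b (grp, total)
    SELECT grp, SUM(amount)
    FROM a
    WHERE moved_flag = 1
    GROUP BY grp
""")

conn.commit()  # one commit after all of the work, as advised
rows_b = conn.execute("SELECT * FROM b ORDER BY grp").fetchall()
print(rows_b)  # [('x', 3), ('y', 5)]
```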
Error when included Order in Insert Select
Dear Oracle gurus,
I am working with Oracle 10g, both Forms and database. In one of my forms I try to insert values into a table from another table.
the code is
insert into sstab(paramnam) select CP.Parameter_Object.Parameter_Name
from Customer_Parameters cp
order by cp_display_order;
Earlier there was no ORDER BY, but to enhance it further I added the ORDER BY clause to the existing Insert-Select. This threw an error while compiling the module itself:
Encountered the symbol ORDER when expecting one of the ,;...
Kindly guide me in this regard
with warm regards
ssr
Hi,
All DDL operations will automatically commit all the pending transaction.
But in the case of FORMS_DDL, if you are running an insert or update statement, the pending transactions won't be committed. For that you have to pass a DDL statement through FORMS_DDL.
If I am wrong, then please correct me.
Regards,
Manu.
Edited by: Manu. on Jan 7, 2010 5:57 PM -
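For what it's worth, the reported compile error is typical of older PL/SQL parsers (including the Forms engine) rejecting ORDER BY inside DML; a commonly suggested workaround is to push the ORDER BY into an inline view. A sqlite3 sketch of that shape with hypothetical names (note that stored row order in a heap table carries no guarantee anyway):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sstab (paramnam TEXT);
    CREATE TABLE customer_parameters (parameter_name TEXT, cp_display_order INT);
    INSERT INTO customer_parameters VALUES ('b', 2), ('a', 1), ('c', 3);
""")

# The ORDER BY is tucked inside a parenthesised inline view, which
# keeps strict DML parsers happy.
conn.execute("""
    INSERT INTO sstab (paramnam)
    SELECT parameter_name
    FROM (
        SELECT parameter_name
        FROM customer_parameters
        ORDER BY cp_display_order
    )
""")

rows = conn.execute("SELECT paramnam FROM sstab").fetchall()
print(rows)
```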
Insert/select one million rows at a time from source to target table
Hi,
Oracle 10.2.0.4.0
I am trying to insert around 10 million rows into table target from source as follows:
INSERT /*+ APPEND NOLOGGING */ INTO target
SELECT *
FROM source f
WHERE
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
There is a unique index on the target table on col1, col2
I was having issues with undo and now I am getting the follwing error with temp space
ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
I believe it would be easier if I did a bulk insert of one million rows at a time and committed.
I appreciate any advice on this, please.
Thanks,
Ashok
902986 wrote:
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
I don't know if it has any bearing on the case, but is that WHERE clause on purpose or a typo? Should it be:
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.COL1 and f.col2 = m.col2);
Anyway - how much of your data already exists in target compared to source?
Do you have 10 million in source and very few in target, so most of source will be inserted into target?
Or do you have 9 million already in target, so most of source will be filtered away and only few records inserted?
And what is the explain plan for your statement?
INSERT /*+ APPEND NOLOGGING */ INTO target
SELECT *
FROM source f
WHERE
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
As your error has to do with TEMP, your statement might possibly try to do a lot of work in temp to materialize the resultset or parts of it, to maybe use in a hash join before inserting.
So perhaps you can work towards an explain plan that allows the database to do the inserts "along the way" rather than calculate the whole thing in temp first.
That probably will go much slower (for example using nested loops for each row to check the exists), but that's a tradeoff - if you can't have sufficient TEMP then you may have to optimize for less usage of that resource at the expense of another resource ;-)
Alternatively ask your DBA to allocate more room in TEMP tablespace. Or have the DBA check if there are other sessions using a lot of TEMP in which case maybe you just have to make sure your session is the only one using lots of TEMP at the time you execute. -
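With the f.col1 = m.COL1 correction applied, the anti-join insert behaves as sketched below in sqlite3 on toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source (col1 INT, col2 INT);
    CREATE TABLE target (col1 INT, col2 INT);
    INSERT INTO source VALUES (1, 1), (2, 2), (3, 3);
    INSERT INTO target VALUES (2, 2);   -- already present in target
""")

# Anti-join: only source rows with no matching (col1, col2) pair in
# target are inserted -- each column compares to its own counterpart.
conn.execute("""
    INSERT INTO target
    SELECT * FROM source f
    WHERE NOT EXISTS (
        SELECT 1 FROM target m
        WHERE f.col1 = m.col1 AND f.col2 = m.col2
    )
""")

merged = conn.execute("SELECT * FROM target ORDER BY col1").fetchall()
print(merged)  # [(1, 1), (2, 2), (3, 3)]
```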
Need to insert selected records from PROD into TEST
I am trying to insert all records (for a specific region only) from production to test.
I'm a little confused about how the pl/sql would look for this.
Note that as I insert into the table in test, I also have a key field that is auto incremented by 1 each time.
The problem that I am having is I need to link two tables in PROD together to determine the region:
So in test, I want to do something like:
INSERT INTO ACCOUNT_PRICE
(select * from ACCOUNT_PRICE@PROD, MARKETER_ACCOUNT@PROD
where substr(MARKETER_ACCT_NO,1,1) = '3'
and MARKETER_ACCOUNT_NO = PRICE_ACCOUNT_NO);
However, i'm not sure if this is correct or if I should be using a BULK insert.
Note that I cannot just load the whole table as I need to restrict it to only one region of data.
Any help would be appreciated.
Sean
Direct load (BULK) is irrelevant to what you are asking. I would strongly suggest that you read the docs about this feature before considering it for any purpose.
As to your question what you are asking is unclear and, for reasons known only to you, you did not include a version number.
So given that I get to invent your version number I have decided you have 10g and therefore you have DataPump so my recommendation is to use
DataPump to extract and export the records you wish to move.
http://www.morganslibrary.org/reference/dbms_datapump.html -
How to improve on insert-select query performance
Hi,
I would like to get some opinions on how to improve this query inside my stored procedure.
This insert statement has run for more than 4 hours inserting around 62k records.
I have identified that the bottleneck is the function called within the select statement.
Could anyone help to fine-tune it?
INSERT INTO STG_PRICE_OUT
( ONPN,
EFFECTIVE_DT,
PRICE_CATENAME,
QUEUE_ID )
SELECT P.ONPN, P.EFFECTIVE_DT,
gps_get_catename(P.PART_STATUS ,P.PROGRAM_CD ,P.MARKET_CD),
'1'
FROM PRICE P,
GPS_INV_ITEMS GII
WHERE P.ONPN = GII.ONPN
FUNCTION Gps_Get_Catename (
p_status VARCHAR2,
p_pgm VARCHAR2,
p_market VARCHAR2
) RETURN VARCHAR2
IS
catename VARCHAR2(30);
BEGIN
SELECT PRICE_CATENAME
INTO catename
FROM PRICE_CATEGORY PC
WHERE NVL(PC.PART_STATUS,' ')= NVL(p_status,' ')
AND NVL(PC.PROGRAM_CD,' ') = NVL(p_pgm,' ')
AND NVL(PC.MARKET_CD,' ') = NVL(p_market,' ');
RETURN catename;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
RETURN NULL;
WHEN OTHERS
THEN
DBMS_OUTPUT.PUT_LINE('gps_get_catename: Exception caught!! (' || SQLCODE || ') : ' || SQLERRM);
RETURN catename;
END;
STG_PRICE_OUT has around 1 mil records
GPS_INV_ITEMS has around 140K records
PRICE has around 60k records
INDEX:
STG_PRICE_OUT - INDEX 1(ONPN), INDEX2(ONPN,QUEUE_ID)
GPS_INV_ITEMS - INDEX 3(ONPN)
PRICE - INDEX 4(ONPN)
PRICE_CATEGORY - INDEX 5(PART_STATUS ,PROGRAM_CD ,MARKET_CD)
Thanks and regards,
WH
Only use PL/SQL when you can't do it all in SQL...
INSERT INTO STG_PRICE_OUT
( ONPN,
EFFECTIVE_DT,
PRICE_CATENAME,
QUEUE_ID )
SELECT P.ONPN, P.EFFECTIVE_DT,
PC.PRICE_CATENAME,
'1'
FROM PRICE_CATEGORY PC, PRICE P,
GPS_INV_ITEMS GII
WHERE P.ONPN = GII.ONPN
AND PC.PART_STATUS(+) = P.PART_STATUS
AND PC.PROGRAM_CD(+) = P.PROGRAM_CD
AND PC.MARKET_CD(+) = P.MARKET_CD
/
Cheers, APC
P.S. You may need to tweak the outer joins - I'm not quite sure what your business rule is. -
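The effect of the rewrite can be demonstrated with sqlite3 on a cut-down, hypothetical version of the schema: the per-row function lookup becomes a single LEFT OUTER JOIN (the ANSI spelling of Oracle's (+) syntax), and rows with no matching category still come through with a NULL, mirroring the function's NO_DATA_FOUND branch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE price (onpn TEXT, part_status TEXT);
    CREATE TABLE price_category (part_status TEXT, price_catename TEXT);
    CREATE TABLE stg_price_out (onpn TEXT, price_catename TEXT);
    INSERT INTO price VALUES ('p1', 'A'), ('p2', 'Z');
    INSERT INTO price_category VALUES ('A', 'RETAIL');
""")

# LEFT JOIN keeps price rows with no category (the NO_DATA_FOUND ->
# NULL behaviour of the function), in one set-based statement.
conn.execute("""
    INSERT INTO stg_price_out (onpn, price_catename)
    SELECT p.onpn, pc.price_catename
    FROM price p
    LEFT JOIN price_category pc ON pc.part_status = p.part_status
""")

out = conn.execute("SELECT * FROM stg_price_out ORDER BY onpn").fetchall()
print(out)  # [('p1', 'RETAIL'), ('p2', None)]
```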
Insert, select and create table as give different results
Hi all
I have a strange situation with this three cases:
1) select statement: SELECT (...)
2) insert statement with that same select as above: INSERT INTO SELECT (...)
3) create table statement again with that same select: CREATE TABLE AS SELECT (...)
Each of these cases produces a different number of rows (the first 24, the second 108 and the third 58). What's more, the data for the second and third cases makes no sense relative to what they should return. The first case returns good results.
One interesting thing is that the select uses "UNION ALL" between 2 queries. When a simple UNION is used, everything works fine and all three cases return 24 rows. Also, if each query is run separately, they work fine.
Anyone encountered something like this? (before I create an SR :)
Database is 10.2.0.2 on AIX 5.3. It's a data warehouse.
Edited by: dsmoljanovic on Dec 10, 2008 3:57 PM
I understand UNION vs UNION ALL. But that doesn't change the fact that the same SELECT should return the same set of rows whether in an INSERT, a CREATE TABLE AS or a simple SELECT.
DB version is 10.2.0.2.
Here is the SQL:
INSERT INTO TMP_TRADING_PROM_BM_OSTALO
select
5 AS VRSTA_PLANIRANJA_KLJUC, u1.UNOS_KLJUC, t1.TRADING_TIP_KLJUC, i1.IZVOR_KLJUC, m1.ITEMNAME AS MJESEC,
l1.PLAN AS IZNOS, l1.TEKUA AS TEKUCA, l1.PROLA AS PROSLA, l1.PLANTEKUA_ AS IZNOS_TEKUCA, l1.PLANPROLA_ AS IZNOS_PROSLA, l1.TEKUAPROLA_ AS TEKUCA_PROSLA, l1.DATUM_UCITAVANJA
from
HR_SP_PLAN.L_12_ET_PROMETI_I_BRUTO_MARZA l1,
(
select
m1.ITEMIID, m1.ITEMNAME
from
HR_SP_PLAN.L_12_IT_4_MJESECI m1
where
UPPER (m1.ITEMNAME) NOT LIKE '%KVARTAL%' AND UPPER (m1.ITEMNAME) NOT LIKE '%GODINA%' AND UPPER (m1.ITEMNAME) NOT LIKE '%PROC%' and
m1.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy')
union all
select -99, null from dual
) m1,
HR_SP_PLAN.L_12_IT_5_SEKTORI l2,
HR_SP_PLAN.L_12_IT_2_TIPOVI_OSTALO l3, HR_SP_PLAN.D_UNOS u1, HR_SP_PLAN.D_TRADING_TIP t1, HR_SP_PLAN.L_12_IT_1_PROMET_I_BM_OSTALO p1, HR_SP_PLAN.D_IZVOR i1
where
l1.ELIST = l2.ITEMIID and
l2.ITEMNAME = u1.UNOS_NAZIV and u1.USER_KLJUC = 12 and l2.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
l1.DIMENSION_1_PROMET = p1.ITEMIID and
p1.ITEMNAME = i1.IZVOR_NAZIV and i1.USER_KLJUC = 12 and p1.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
nvl(l1.DIMENSION_4_MJESEC , -99) = m1.ITEMIID and
l1.DIMENSION_2_TIPOVI = l3.ITEMIID and
l3.ITEMNAME = t1.TRADING_TIP_NAZIV and l3.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
l1.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
'PROC' = 'PLAN'
union all
select
4 AS VRSTA_PLANIRANJA_KLJUC, u1.UNOS_KLJUC, t1.TRADING_TIP_KLJUC, i1.IZVOR_KLJUC, m1.ITEMNAME AS MJESEC,
l1.PROCJENA AS IZNOS, l1.TEKUA AS TEKUCA, l1.PROLA AS PROSLA, l1.PROCJENATEKUA_ AS IZNOS_TEKUCA, l1.PROCJENAPROLA_ AS IZNOS_PROSLA, l1.TEKUAPROLA_ AS TEKUCA_PROSLA, l1.DATUM_UCITAVANJA
from
HR_SP_PLAN.L_13_ET_PROMETI_I_BRUTO_MARZA l1,
(
select
m1.ITEMIID, m1.ITEMNAME
from
HR_SP_PLAN.L_13_IT_4_MJESECI m1
where
UPPER (m1.ITEMNAME) NOT LIKE '%KVARTAL%' AND UPPER (m1.ITEMNAME) NOT LIKE '%GODINA%' AND UPPER (m1.ITEMNAME) NOT LIKE '%PROC%' and
nvl(ceil(to_number(m1.ITEMNAME)/3), mod(4, 5)) = mod(4, 5) and m1.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy')
union all
select -99, null from dual
) m1,
HR_SP_PLAN.L_13_IT_5_SEKTORI l2, HR_SP_PLAN.L_13_IT_2_TIPOVI_OSTALO l3,
HR_SP_PLAN.D_UNOS u1, HR_SP_PLAN.D_TRADING_TIP t1,
HR_SP_PLAN.L_13_IT_1_PROMET_I_BM_OSTALO p1, HR_SP_PLAN.D_IZVOR i1
where
l1.ELIST = l2.ITEMIID and
l2.ITEMNAME = u1.UNOS_NAZIV and u1.USER_KLJUC = 13 and l2.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
l1.DIMENSION_1_PROMET = p1.ITEMIID and
p1.ITEMNAME = i1.IZVOR_NAZIV and i1.USER_KLJUC = 13 and p1.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
nvl(l1.DIMENSION_4_MJESEC , -99) = m1.ITEMIID and
l1.DIMENSION_2_TIPOVI = l3.ITEMIID and
l3.ITEMNAME = t1.TRADING_TIP_NAZIV and l3.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
l1.DATUM_UCITAVANJA = to_date('24.11.2008','dd.mm.yyyy') and
'PROC' = 'PROC'; -
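As a baseline, UNION deduplicates across branches while UNION ALL keeps every row; neither should make the same SELECT return different row counts between a plain SELECT, an INSERT ... SELECT and a CREATE TABLE ... AS SELECT. A quick sqlite3 illustration of the deduplication difference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# UNION deduplicates the combined result set ...
union = conn.execute(
    "SELECT 1 AS n UNION SELECT 1 UNION SELECT 2 ORDER BY n"
).fetchall()

# ... UNION ALL keeps every row from every branch.
union_all = conn.execute(
    "SELECT 1 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 ORDER BY n"
).fetchall()

print(union)      # [(1,), (2,)]
print(union_all)  # [(1,), (1,), (2,)]
```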
Insert select statement is taking ages
Sybase version: Adaptive Server Enterprise/12.5.4/EBF 15432 ESD#8/P/Sun_svr4/OS 5.8/ase1254/2105/64-bit/FBO/Sat Mar 22 14:38:37 2008
Hi guyz,
I have a question about the performance of a statement that is very slow and I'd like to have you input.
I have the SQL statement below that is taking ages to execute and I can't find out how to imporve it
insert SST_TMP_M_TYPE select M_TYPE from MKT_OP_DBF M join TRN_HDR_DBF T on M.M_ORIGIN_NB=T.M_NB where T.M_LTI_NB=@Nson_lti
@Nson_lti is the same datatype as T.M_LTI_NB
M.M_ORIGIN_NB=T.M_NB have the same datatype
TRN_HDR_DBF has 1424951 rows and indexes on M_LTI_NB and M_NB
table MKT_OP_DBF has 870305 rows
table MKT_OP_DBF has an index on M_ORIGIN_NB column
Statistics for index: "MKT_OP_ND7" (nonclustered)
Index column list: "M_ORIGIN_NB"
Leaf count: 3087
Empty leaf page count: 0
Data page CR count: 410256.0000000000000000
Index page CR count: 566.0000000000000000
Data row CR count: 467979.0000000000000000
First extent leaf pages: 0
Leaf row size: 12.1161512343373872
Index height: 2
The representaion of M_ORIGIN_NB is
Statistics for column: "M_ORIGIN_NB"
Last update of column statistics: Mar 9 2015 10:48:57:420AM
Range cell density: 0.0000034460903826
Total density: 0.0053334921767125
Range selectivity: default used (0.33)
In between selectivity: default used (0.25)
Histogram for column: "M_ORIGIN_NB"
Column datatype: numeric(10,0)
Requested step count: 20
Actual step count: 20
Step Weight Value
1 0.00000000 < 0
2 0.07300889 = 0
3 0.05263098 <= 5025190
4 0.05263098 <= 9202496
5 0.05263098 <= 12664456
6 0.05263098 <= 13129478
7 0.05263098 <= 13698564
8 0.05263098 <= 14735554
9 0.05263098 <= 15168461
10 0.05263098 <= 15562067
11 0.05263098 <= 16452862
12 0.05263098 <= 16909265
13 0.05263212 <= 17251573
14 0.05263098 <= 18009609
15 0.05263098 <= 18207523
16 0.05263098 <= 18404113
17 0.05263098 <= 18588398
18 0.05263098 <= 18793585
19 0.05263098 <= 18998992
20 0.03226340 <= 19574408
If I look at the showplan, I can see the indexes on TRN_HDR_DBF are used but not the one on MKT_OP_DBF
QUERY PLAN FOR STATEMENT 16 (at line 35).
STEP 1
The type of query is INSERT.
The update mode is direct.
FROM TABLE
MKT_OP_DBF
M
Nested iteration.
Table Scan.
Forward scan.
Positioning at start of table.
Using I/O Size 32 Kbytes for data pages.
With LRU Buffer Replacement Strategy for data pages.
FROM TABLE
TRN_HDR_DBF
T
Nested iteration.
Index : TRN_HDR_NDX_NB
Forward scan.
Positioning by key.
Keys are:
M_NB ASC
Using I/O Size 4 Kbytes for index leaf pages.
With LRU Buffer Replacement Strategy for index leaf pages.
Using I/O Size 4 Kbytes for data pages.
With LRU Buffer Replacement Strategy for data pages.
TO TABLE
SST_TMP_M_TYPE
Using I/O Size 4 Kbytes for data pages.
I was expecting the query to use the index also on MKT_OP_DBF
Thanks for your advices
Simon
The total density number for the MKT_OP_DBF.M_ORIGIN_NB column doesn't look very good:
Range cell density: 0.0000034460903826
Total density: 0.0053334921767125
Notice the total density value is 3 magnitudes larger than the range density ... which can indicate a largish number of duplicates. (NOTE: This wide difference between range cell and total density can be referred to as 'skew' - more on this later.)
Do some M_ORIGIN_NB values have a large number of duplicates? What does the following query return:
=====================
select top 30 M_ORIGIN_NB, count(*)
from MKT_OP_DBF
group by M_ORIGIN_NB
order by 2 desc, 1
=====================
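That duplicate-count diagnostic translates directly to most dialects; a sqlite3 sketch on made-up data (LIMIT standing in for TOP):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mkt_op_dbf (m_origin_nb INT)")
conn.executemany("INSERT INTO mkt_op_dbf VALUES (?)",
                 [(0,)] * 5 + [(101,)] * 3 + [(202,)])

# Most-duplicated key values first -- large counts here are the
# skew that inflates total density above range-cell density.
rows = conn.execute("""
    SELECT m_origin_nb, COUNT(*) AS cnt
    FROM mkt_op_dbf
    GROUP BY m_origin_nb
    ORDER BY cnt DESC, m_origin_nb
    LIMIT 30
""").fetchall()
print(rows)  # [(0, 5), (101, 3), (202, 1)]
```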
The total density can be used to estimate the number of rows expected for a join (eg, TRN_HDR_DBF --> MKT_OP_DBF). The largish total density number, when thrown into the optimizer's calculations, may be causing the optimizer to think that the volume of *possible* joins will be more expensive than a join in the opposite direction (MKT_OP_DBF --> TRN_HDR_DBF) which in turn means (as Jeff's pointed out) that you end up table scanning MKT_OP_DBF (as the outer table) because of no SARGs.
From your description it sounds like you've got the necessary indexes to support a TRN_HDR_DBF --> MKT_OP_DBF join order. (Though it wouldn't hurt to see the complete output from sp_helpindex for both tables just to make sure we're on the same sheet of music.)
Without more details (eg, complete stats for both tables, sp_help for both tables - if you decide to post these I'd recommend posting them as a *.txt attachment).
I'm assuming you *know* that a join from TRN_HDR_DBF --> MKT_OP_DBF should be much quicker than what you're currently seeing. If this is the case, I'd probably want to start with:
=====================
exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
go
exec sp_recompile MKT_OP_DBF
go
-- run your query again
=====================
By removing the skew from the total density (ie, set total density = range cell density = 0.00000344...) you're telling the optimizer that it can expect a much smaller number of joins for the join order of TRN_HDR_DBF --> MKT_OP_DBF ... and that may be enough for the optimizer to use TRN_HDR_DBF to drive the query.
NOTE: If sp_modifystats/REMOVE_SKEW_FROM_DENSITY provides the desired join order, keep in mind that you'll need to re-issue this command after each update stats command that modifies the stats on the M_ORIGIN_NB column. For example, modify your update stats maintenance job to issue sp_modifystats/REMOVE_SKEW_FROM_DENSITY for those special cases where you know it helps query performance. -
Insert select statement or insert in cursor
hi all,
I need a performance comparison between two approaches: inserting one row at a time from a cursor loop, versus a single insert-select statement.
for example:
1. insert into TableA (ColumA,ColumnB) select A, B from TableB;
2. cursor my_cur is select A, B from TableB
for my_rec in my_cur loop
insert into TableA (ColumA,ColumnB) values (my_rec.A, my_rec.B);
end loop;
also "bulk collect into" can be used.
Which one has a better performance?
Thanks for your help,
kadriye
What's stopping you from making 100,000 rows of test data and trying it for yourself?
Edit: I was bored enough to do it myself.
Starting insert as select 22-JUL-08 11.43.19.544000000 +01:00
finished insert as select. 22-JUL-08 11.43.19.825000000 +01:00
starting cursor loop 22-JUL-08 11.43.21.497000000 +01:00
finished cursor loop 22-JUL-08 11.43.35.185000000 +01:00
The two second gap between the two is for the delete.
Message was edited by:
Dave Hemming -
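Dave's experiment is easy to reproduce with sqlite3 on an assumed toy schema: both forms load identical rows, but the set-based statement crosses into the SQL engine once rather than once per row:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tableb (a INT, b INT);
    CREATE TABLE tablea_set (columna INT, columnb INT);
    CREATE TABLE tablea_loop (columna INT, columnb INT);
""")
conn.executemany("INSERT INTO tableb VALUES (?, ?)",
                 [(i, i * 2) for i in range(100_000)])

# Set-based: one statement, one crossing into the SQL engine.
t0 = time.perf_counter()
conn.execute("INSERT INTO tablea_set (columna, columnb) SELECT a, b FROM tableb")
set_based = time.perf_counter() - t0

# Cursor loop: one INSERT per fetched row.
t0 = time.perf_counter()
for a, b in conn.execute("SELECT a, b FROM tableb"):
    conn.execute("INSERT INTO tablea_loop (columna, columnb) VALUES (?, ?)", (a, b))
row_by_row = time.perf_counter() - t0

print(f"set-based: {set_based:.3f}s  row-by-row: {row_by_row:.3f}s")
```

The exact timings depend on the engine, but the row-by-row loop pays the per-statement overhead once for every row, which is what makes the cursor version so much slower in Dave's measurements.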
Help in query required – Insert, Select in same table
Hi All
I need your help on writing the queries effectively.
Oracle Version: 10.2.0.3.0
OS: UNIX
I have a table METRICS_TBL as mentioned below.
CYCLE_DATE METRICS VALUE
08/17/2008 COST-TV 100
08/17/2008 COST-NEWSPAPER 50
08/17/2008 COST-POSTALMAIL 25
08/17/2008 PROD-TV 10
08/17/2008 PROD-NEWSPAPER 25
08/17/2008 PROD-POSTALMAIL 5
Based on the above data, I need to append (Insert into METRICS_TBL select from METRICS_TBL) the same table with the records as mentioned below.
08/17/2008 COSTPERPROD-TV 10
08/17/2008 COSTPERPROD-NEWSPAPER 2
08/17/2008 COSTPERPROD-POSTALMAIL 5
Basically, I need to calculate Cost per Product for each category. Depending upon the available metrics, metrics also should be changed like COSTPERPROD and values should be Cost/prod under each category.
Can somebody help me with the query.
Thanks
SQL> WITH t AS
2 (
3 SELECT TO_DATE('8/17/2008', 'MM/DD/YYYY') AS CYCLE_DATE, 'COST-TV' AS METRICS, 100 AS VALUE
4 FROM DUAL
5 UNION ALL
6 SELECT TO_DATE('08/17/2008', 'MM/DD/YYYY'), 'COST-NEWSPAPER', 50
7 FROM DUAL
8 UNION ALL
9 SELECT TO_DATE('08/17/2008', 'MM/DD/YYYY'), 'COST-POSTALMAIL', 25
10 FROM DUAL
11 UNION ALL
12 SELECT TO_DATE('08/17/2008', 'MM/DD/YYYY'), 'PROD-TV', 10
13 FROM DUAL
14 UNION ALL
15 SELECT TO_DATE('08/17/2008', 'MM/DD/YYYY'), 'PROD-NEWSPAPER', 25
16 FROM DUAL
17 UNION ALL
18 SELECT TO_DATE('08/17/2008', 'MM/DD/YYYY'), 'PROD-POSTALMAIL', 5
19 FROM DUAL)
20 SELECT COST.CYCLE_DATE, 'COSTPERPROD-' || SUBSTR(COST.metrics, 6) AS Metrics,
21 COST.VALUE / prod.VALUE AS COSTPERPROD
22 FROM t COST, t prod
23 WHERE COST.CYCLE_DATE = PROD.CYCLE_DATE
24 AND COST.metrics LIKE 'COST-%'
25 AND prod.metrics LIKE 'PROD-%'
26 AND SUBSTR(COST.metrics, 6) = SUBSTR(prod.metrics, 6)
27 /
CYCLE_DA METRICS COSTPERPROD
17.08.08 COSTPERPROD-NEWSPAPER 2
17.08.08 COSTPERPROD-POSTALMAIL 5
17.08.08 COSTPERPROD-TV 10 -
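The accepted self-join can be checked end to end with sqlite3 (SUBSTR(metrics, 6) skips the five-character 'COST-'/'PROD-' prefix):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics_tbl (cycle_date TEXT, metrics TEXT, value REAL)")
conn.executemany("INSERT INTO metrics_tbl VALUES ('08/17/2008', ?, ?)", [
    ('COST-TV', 100), ('COST-NEWSPAPER', 50), ('COST-POSTALMAIL', 25),
    ('PROD-TV', 10), ('PROD-NEWSPAPER', 25), ('PROD-POSTALMAIL', 5),
])

# Self-join COST rows to PROD rows on the suffix after the prefix,
# then divide to get cost per product.
rows = conn.execute("""
    SELECT cost.cycle_date,
           'COSTPERPROD-' || SUBSTR(cost.metrics, 6) AS metrics,
           cost.value / prod.value AS costperprod
    FROM metrics_tbl cost
    JOIN metrics_tbl prod
      ON cost.cycle_date = prod.cycle_date
     AND cost.metrics LIKE 'COST-%'
     AND prod.metrics LIKE 'PROD-%'
     AND SUBSTR(cost.metrics, 6) = SUBSTR(prod.metrics, 6)
    ORDER BY 2
""").fetchall()
print(rows)
```

The result matches the forum output: cost per product of 2 for NEWSPAPER, 5 for POSTALMAIL and 10 for TV.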
Inserting Selection Criteria in Query Output
Hello guys
How can I include the selection criteria of a query in its output? The final outcome should be:
1. The user runs the query.
2. The Selection Criteria screen pops up. The user enters the values, then executes the query.
3. The query is displayed:
a. It has the Free Characteristics in the top left-hand corner.
b. The query results below it.
c. And on the top, in the middle of the query, over the results area, the selection criteria should be displayed like:
<b> Plant : 0353
Free Material : Empty Demarcation
Characteristics Fiscal year / Period : 001/2007
Query Output</b>
Can this be done in BEx? Do i need a create a workbook? how do i do this?
Thanks.Hi Prasad,
Create a workbook so that you will be able to place results at defined locations.
To display all selection criteria in the workbook, choose Layout > Display Text Elements. Delete the text elements that you do not wish to display.
Jaya -
Reason for Deadlock Insert Select on same table
I have obtained the deadlock graph. However, I still don't understand why a deadlock occurs. Can someone
explain it to me? Thanks in advance.
deadlock-list
deadlock victim=process3a59438
process-list
process id=process3a58c58 taskpriority=0 logused=1093420 waitresource=PAGE: 60:1:1113 waittime=203 ownerId=245203560 transactionname=implicit_transaction lasttranstarted=2014-05-06T09:46:41.930 XDES=0xbe3bd8370 lockMode=IX
schedulerid=9 kpid=11368 status=suspended spid=223 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2014-05-06T09:46:55.933 lastbatchcompleted=2014-05-06T09:46:55.933 clientapp=jTDS hostname=CINAM1103 hostpid=123 loginname=clienta isolationlevel=read
committed (2) xactid=245203560 currentdb=60 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128058
executionStack
frame procname=adhoc line=1 stmtstart=320 sqlhandle=0x0200000013d63b16b7180b66ed9196aa2502a611a28bac73
insert into [TableA] (version_id, tuple_signature, start_time_member_id, end_time_member_id, member_list, type_cd, delta, ordinal, dollar_value, delta_id) values ( @P0 , @P1 , @P2 , @P3 , @P4 , @P5
, @P6 , @P7 , @P8 , @P9 )
inputbuf
(@P0 nvarchar(4000),@P1 nvarchar(4000),@P2 nvarchar(4000),@P3 nvarchar(4000),@P4 nvarchar(4000),@P5 int,@P6 float,@P7 int,@P8 nvarchar(4000),@P9 nvarchar(4000))insert into [TableA] (version_id, tuple_signature, start_time_member_id,
end_time_member_id, member_list, type_cd, delta, ordinal, dollar_value, delta_id) values ( @P0 , @P1 , @P2 , @P3 , @P4 , @P5 , @P6 , @P7 , @P8 , @P9 )
process id=process3a59438 taskpriority=0 logused=0 waitresource=PAGE: 60:1:11867 waittime=703 ownerId=245205763 transactionname=SELECT lasttranstarted=2014-05-06T09:46:55.407 XDES=0x45b5132b0 lockMode=S schedulerid=9
kpid=10300 status=suspended spid=243 sbid=0 ecid=2 priority=0 transcount=0 lastbatchstarted=2014-05-06T09:46:55.407 lastbatchcompleted=2014-05-06T09:46:54.783 clientapp=jTDS hostname=CINAM1103 hostpid=123 isolationlevel=read committed (2) xactid=245205763
currentdb=60 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056
executionStack
frame procname=adhoc line=1 stmtstart=40 sqlhandle=0x020000002811a70d11559b907ff33e99750ba56c92d1db68
select deltadefin0_.delta_id as delta1_38_, deltadefin0_.version_id as plan2_38_, deltadefin0_.tuple_signature as tuple3_38_, deltadefin0_.start_time_member_id as start4_38_, deltadefin0_.end_time_member_id as end5_38_,
deltadefin0_.member_list as member6_38_, deltadefin0_.type_cd as type7_38_, deltadefin0_.delta as delta38_, deltadefin0_.ordinal as ordinal38_, deltadefin0_.dollar_value as dollar10_38_ from [TableA] deltadefin0_ where deltadefin0_.version_id= @P0
inputbuf
resource-list
pagelock fileid=1 pageid=1113 dbid=60 objectname=aa_core_clienta_totalga_q1_i01_p.dbo.TableA id=lock8ad856080 mode=S associatedObjectId=72057594040680448
owner-list
owner id=process3a59438 mode=S
waiter-list
waiter id=process3a58c58 mode=IX requestType=wait
pagelock fileid=1 pageid=11867 dbid=60 objectname=aa_core_clienta_totalga_q1_i01_p.dbo.TableA id=lock8b3af0a80 mode=IX associatedObjectId=72057594040680448
owner-list
owner id=process3a58c58 mode=IX
waiter-list
waiter id=process3a59438 mode=S requestType=wait
Process 3a58c58 was running an insert and held an intent-exclusive (IX) lock on page 11867, but it also needed one on page 1113. Unfortunately, process 3a59438 held a shared (S) lock on that same page and wanted a shared lock on page 11867. Each held the resource the other wanted, which is a classic deadlock. Since SQL Server chooses deadlock victims based on transaction log usage, the select (process 3a59438) was the victim and was killed. Shared locks and intent-exclusive locks are not compatible.
It's interesting to note that the insert process had 2 open transactions. If this is a problem, I'd have the software company that makes it look into their processes and make the transactions shorter, or opt for some type of optimistic concurrency instead of pessimistic.
The trancount often shows two when there's only one transaction explicitly opened on the connection (grouping the two inserts); the second is simply the execution of the current statement.
But for the rest, there are two questions any developer has to ask. First, why is a select under the default read-committed isolation taking locks at all? And second, why doesn't it take all the locks it needs atomically? Apparently it took one, was interrupted, then tried to take the second, and so got caught in the deadlock.
And then third: what can they *do* about it? Breaking up the transaction in the first process does not seem relevant.
Josh
ps - the answer, "well, try to make everything go faster with the right index for the select, etc.", is something, but is it enough?
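Two mitigations are commonly tried for this S-vs-IX page collision; both are sketches under stated assumptions, not a definitive fix. The database name below comes from the deadlock graph; the index name and column choice are hypothetical, inferred from the posted SELECT.

```sql
-- Option 1: row versioning. Under READ_COMMITTED_SNAPSHOT, readers see a
-- versioned snapshot instead of taking shared page locks, so the SELECT
-- no longer blocks against the INSERT's IX locks. Assumes the application
-- tolerates snapshot semantics and tempdb can absorb the version store.
ALTER DATABASE [aa_core_clienta_totalga_q1_i01_p]
    SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Option 2: a narrower access path. An index on version_id lets the SELECT
-- seek a handful of index pages instead of scanning the data pages the
-- INSERT is touching, shrinking the window for lock collisions.
CREATE NONCLUSTERED INDEX IX_TableA_version_id
    ON [TableA] (version_id);
```

Neither option removes the need to keep the insert's transaction short, but either one makes this particular deadlock far less likely; Josh's closing question, whether the index alone is enough, is exactly the right follow-up to test under load.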