Optimize SQL query
Hi All,
Please help me optimize the SQL query below:
delete from BATCH_REQUEST_RESPONSE where (SOState='ResponseReceived' or SOState='Header' or SOState='InvalidSO') and ACKREC='Y' and SourceAgentId=? and BATCH_REQUEST_RESPONSE.TransactionId NOT EXISTS (select transactionid from SORECORD)
Note: the result of (select transactionid from SORECORD) could be up to 100K rows.
Regards,
Dheeraj
Maybe - I'm not sure what you're after
delete from batch_request_response brr
where sostate in ('ResponseReceived','Header','InvalidSO')
and ackrec = 'Y'
and sourceagentid = :agent_id
and not exists (select null
from sorecord
where transactionid = brr.transactionid)
delete from batch_request_response brr
where sostate in ('ResponseReceived','Header','InvalidSO')
and ackrec = 'Y'
and sourceagentid = :agent_id
and 0 = (select count(*)
from sorecord
) /* if sorecord must be empty */
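With up to 100K rows in sorecord, the correlated NOT EXISTS version can benefit from supporting indexes; a sketch (the index names are made up):

```sql
-- lets each NOT EXISTS probe do an index lookup instead of scanning sorecord
create index sorecord_txn_ix
  on sorecord (transactionid);

-- helps locate the candidate rows to delete
create index brr_agent_ack_state_ix
  on batch_request_response (sourceagentid, ackrec, sostate);
```

Note that a NOT IN rewrite would behave differently if sorecord.transactionid can be NULL (the delete would then remove no rows at all), which is one more reason to prefer NOT EXISTS here.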
Regards
Etbin
Similar Messages
-
What is the best way to optimize a SQL query: call a function or do a join?
Hi, I want to know what is the best way to optimize a SQL query: call a function inside the SELECT statement, or do a simple join?
Hi,
If you're even considering a join, then it will probably be faster. As Justin said, it depends on lots of factors.
A user-defined function is only necessary when you can't figure out how to do something in pure SQL, using joins and built-in functions.
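A hypothetical illustration of the trade-off (the tables, columns, and function here are invented for the example, not from the original post):

```sql
-- approach 1: a scalar function called once per row of orders;
-- each call incurs a SQL-to-PL/SQL context switch
select o.order_id, get_customer_name(o.customer_id) as customer_name
from   orders o;

-- approach 2: the same lookup expressed as a join; the optimizer
-- can choose an efficient access path and there is no per-row call
select o.order_id, c.customer_name
from   orders o
       join customers c on c.customer_id = o.customer_id;
```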
You might choose to have a user-defined function even though you could get the same result with a join. That is, you realize that the function is slow, but you believe that the convenience of using a function is more important than better performance in that particular case. -
SQL query optimization... Any idea?
Hi all.
Our DB (Oracle 9i RAC) is handling the prepaid service of a GSM mobile operator.
We're having a query running every night to list the used vouchers from a table containing the vouchers data.
Here is the query:
select ticketno||' '||serialid||' '||used||' '||vouchervalue||' '||usedby from smv_avoucher where state='2' and trunc(used)=trunc(sysdate-1);
As you can see we scan the entire table for used vouchers (state='2') for the previous day (sysdate-1). The 'used' column contains the date the voucher was used.
Can this query be optimized? How can you improve this very simple query, or make it nicer for the DB?
The reason we are trying to optimize this query is that it takes a long time to execute and generates "snapshot too old" error messages. We could use a large rollback segment for it, but first we would like to find out if the query can be optimized or not.
Thank you for your insights.
Thank you for your answers.
What is the execution plan of this query? Can you post it?
Operation Object
SELECT STATEMENT ()
TABLE ACCESS (FULL) SMV_AVOUCHER
How many records does this table contain?
About 25 million records.
Do you have any indexes on smv_avoucher?
Yes, we do have several indexes, but not on 'used'.
Also, you have to make sure you have current statistics collected.
Sorry, would you mind clarifying this? I am not sure I fully understand.
It seems to me that this query does a full table scan, since you use trunc(used) (unless you have a function-based index on trunc(used)).
It does indeed.
If you have an index on used, it won't be used, since you apply a function to the column; Oracle will do a full table scan instead.
I get it. Thanks for this information.
I assume the data in this table is frequently changed. These circumstances may lead to "snapshot too old".
The table is updated very frequently (subscribers use vouchers to recharge their prepaid phones, and new vouchers are provisioned into the DB all the time). This is a 5-million-subscriber network.
My initial suggestion would be to get rid of trunc(used)=trunc(sysdate-1) and replace it with a range predicate such as used >= trunc(sysdate-1) and used < trunc(sysdate). Of course, you have to have an index on used.
I will create this index and try this again.
About the state column: how many distinct values does it have?
5 distinct values only.
Might be a good candidate for an IOT.
Unfortunately, we are not at liberty to make such changes. Thanks for the suggestion though.
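A minimal sketch of the fix discussed above, assuming used is a DATE column (the index name is made up):

```sql
-- index the date column so a range predicate can use it
create index smv_avoucher_used_ix on smv_avoucher (used);

-- a half-open range on used covers exactly yesterday and leaves the
-- column bare (no trunc), so the new index is usable
select ticketno || ' ' || serialid || ' ' || used || ' ' ||
       vouchervalue || ' ' || usedby
from   smv_avoucher
where  state = '2'
and    used >= trunc(sysdate - 1)
and    used <  trunc(sysdate);
```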
Regards -
SQL query optimization by changing the WHERE clause
Hi all,
I need to know whether SQL query performance can be improved by changing the WHERE clause. Consider a query:
select * from student where country ='India' and age = 20
Say, country = 'India' filters 100000 records and age = 20 filters 2000 records if given one by one. Now can anyone tell me whether the performance of the query changes if the query is rewritten like this:
select * from student where age = 20 and country ='India'
as the first predicate will match 2000 rows, so the next filter would apply to only 2000 rows, while in the former query the first predicate would match 100000 rows and the second filter would therefore apply to 100000 rows?
Kindly explain.
Thanks in advance.
Abhideep
In general, the order of the WHERE conditions should not matter. However, there are a few exceptions where it might sometimes play a role; this is sometimes called the order of filter conditions. Among other things it depends on whether the RBO or CBO is used, the Oracle version, indexes on the columns, statistics on the table and the columns, CPU statistics in place, etc.
If you want to make this query fast and you know that the age column has much better selectivity, then you can simply put an index on the age column. An index on the country column is probably not useful at all, since there are too few distinct values in this column. If you are already on 11g, I would suggest a composite index on both columns with the more selective one in first position.
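A sketch of the suggested composite index (the index name is made up):

```sql
-- more selective column (age) in the leading position
create index student_age_country_ix on student (age, country);
```

With this index the optimizer can seek directly on age = 20 and filter country = 'India' within the matching index entries, without touching non-matching table rows.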
Edited by: Sven W. on Nov 17, 2008 2:23 PM
Edited by: Sven W. on Nov 17, 2008 2:24 PM -
SQL Query - store the result for optimization?
Good day experts,
I am looking for advice on a report. I used a lot of analytic functions to get the core data that I need for my report, and it takes around 50 minutes for the SQL to complete. Now with this data I need to create 3 different reports, and I can't use the same SQL since there is a lot of aggregation (for example, group by product in one case and by client in the second). For each of those different group-bys I need a different report.
So how to create 3 reports from 1 SQL query without running the query 3 times?
First thing that comes to mind is to store the result set into a dummy table and then query the table since the core data I get is around 300 rows and then do different group bys.
Best regards,
Igor
So how to create 3 reports from 1 SQL query without running the query 3 times?
You already know the obvious answer - store the data 'somewhere'.
The appropriate 'somewhere' depends on your actual business requirements and you did not provide ALL of them.
MV - if the query is always the same you could use an MV and do a complete refresh when you want new data. The data is permanent and can be queried by other sessions but the query that accesses the data will be frozen into the MV definition.
GTT (global temp table) - if a NEW data load AND the three reports will ALWAYS be executed by a single session and then the data is NOT needed anymore then a GTT can work. The query that loads the GTT can be different for each run but the data will only be available for a single session and ONLY for the life of that session. So if anything goes wrong and the session terminates the data is gone.
First thing that comes to mind is to store the result set into a dummy table and then query the table since the core data I get is around 300 rows and then do different group bys.
That is commonly referred to as a 'REPORT-READY table'. Those are useful when the data needs to be permanent and available to multiple sessions/users. Typically there is a batch process (e.g. package procedure) that periodically refreshes/updates the data during an outage window. Or the table can have a column (e.g. AS_OF) that lets it contain multiple sets of data and the update process leaves existing data alone and creates a new set of data.
If your core data is around 300 rows you may want to consider a report-ready table, and even use it to contain multiple sets of data. Then the reports can be written to query the data using an AS_OF value that rolls up and returns the proper data. You don't need an outage window, since older data is always available (but can be deleted when you no longer need it).
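A minimal sketch of such a report-ready table with an AS_OF column (all names here are illustrative, not from the original post):

```sql
-- report-ready table holding multiple sets of data, keyed by AS_OF
create table report_ready
( as_of       date   not null
, product_id  number
, client_id   number
, total_amt   number
);

-- the refresh job adds a new set without touching existing data
insert into report_ready (as_of, product_id, client_id, total_amt)
select sysdate, product_id, client_id, sum(amt)
from   core_data
group  by product_id, client_id;

-- each report rolls up the most recent set
select product_id, sum(total_amt)
from   report_ready
where  as_of = (select max(as_of) from report_ready)
group  by product_id;

-- older sets can be purged when no longer needed
delete from report_ready where as_of < sysdate - 7;
```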
If you only need one set of data you could use a partitioned work table (with only one partition) to gather the new set of data and then an EXCHANGE PARTITION to 'swap' in the new data. That 'exchange' only takes a fraction of a second and avoids an outage window. Once the swap is done any user query will get the new data. -
How I can optimize this SQL query
I need your help; I want to know how I can optimize this query:
SELECT "F42119"."SDLITM" as "Code1",
"F42119"."SDAITM" as "Code2",
"F42119"."SDDSC1" as "Product",
"F42119"."SDMCU" as "Bodega",
Sum("F42119"."SDSOQS" / 10000) as "Number",
Sum("F42119"."SDUPRC" / 10000) as "preciou",
Sum("F42119"."SDAEXP" / 100) as "Value",
Sum("F42119"."SDUNCS" / 10000) as "CostoU",
Sum("F42119"."SDECST" / 100) as "Cost",
"F4101"."IMSRP1" as "Division",
"F4101"."IMSRP2" as "classification",
"F4101"."IMSRP8" as "Brand",
"F4101"."IMSRP9" as "Aroma",
"F4101"."IMSRP0" as "Presentation",
"F42119"."SDDOC" as "Type",
"F42119"."SDDCT" as "Document",
"F42119"."SDUOM" as "Unit",
"F42119"."SDCRCD" as "currency",
"F0101"."ABAN8" as "ABAN8",
"F0101"."ABALPH" as "Customer",
"F0006"."MCRP22" as "Establishment"
from "PRODDTA"."F0101" "F0101",
"PRODDTA"."F42119" "F42119",
"PRODDTA"."F4101" "F4101",
"PRODDTA"."F0006" "F0006"
where "F42119"."SDAN8" = "F0101"."ABAN8"
and "F0006"."MCMCU" = "F42119"."SDMCU"
and "F4101"."IMITM" = "F42119"."SDITM"
and "F42119"."SDDCT" in ('RI', 'RM', 'RN')
and CAST(EXTRACT(MONTH FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in :Month
and CAST(EXTRACT(YEAR FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in :Year
and trim("F0006"."MCRP22") = :Establishment
and trim("F4101"."IMSRP1") = :Division
Group By "F42119"."SDLITM",
"F42119"."SDAITM",
"F42119"."SDDSC1",
"F4101"."IMSRP1",
"F42119"."SDDOC",
"F42119"."SDDCT",
"F42119"."SDUOM",
"F42119"."SDCRCD",
"F0101"."ABAN8",
"F0101"."ABALPH",
"F4101"."IMSRP2",
"F4101"."IMSRP8",
"F4101"."IMSRP9",
"F4101"."IMSRP0",
"F42119"."SDMCU",
"F0006"."MCRP22"
I appreciate the help you can give me.
It seems to me that part of fixing it could be how you join the tables.
Instead of the humongous where clause, put the applicable conditions on the join.
You have
from "PRODDTA"."F0101" "F0101",
"PRODDTA"."F42119" "F42119",
"PRODDTA"."F4101" "F4101",
"PRODDTA"."F0006" "F0006"
where "F42119"."SDAN8" = "F0101"."ABAN8"
and "F0006"."MCMCU" = "F42119"."SDMCU"
and "F4101"."IMITM" = "F42119"."SDITM"
and "F42119"."SDDCT" in ('RI', 'RM', 'RN')
and CAST(EXTRACT(MONTH FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in :Month
and CAST(EXTRACT(YEAR FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in :Year
and trim("F0006"."MCRP22") = :Establishment
and trim("F4101"."IMSRP1") = :Division
INSTEAD try something like:
from "PRODDTA"."F42119" "F42119"
join "PRODDTA"."F0101" "F0101" on "F42119"."SDAN8" = "F0101"."ABAN8"
join "PRODDTA"."F4101" "F4101" on "F4101"."IMITM" = "F42119"."SDITM"
join "PRODDTA"."F0006" "F0006" on "F0006"."MCMCU" = "F42119"."SDMCU"
Not sure exactly how you need things joined, but the above is the basic idea: remove the criteria that join the tables from the WHERE clause and put them in the JOIN clauses. That might clean things up and make the query easier to follow. -
URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query
Hi Everyone,
When I checked the EXPLAIN PLAN for the SQL query below, I saw that full table scans are occurring on both tables, TABLE_A and TABLE_B.
UPDATE TABLE_A a
SET a.current_commit_date =
(SELECT MAX (b.loading_date)
FROM TABLE_B b
WHERE a.sales_order_id = b.sales_order_id
AND a.sales_order_line_id = b.sales_order_line_id
AND b.confirmed_qty > 0
AND b.data_flag IS NULL
OR b.schedule_line_delivery_date >= '23 NOV 2008')
Though TABLE_A is a small table with nearly 1 lakh (100,000) records, TABLE_B is a huge table with nearly 2.5 crore (25 million) records.
I created an index on TABLE_B containing all the fields used in the WHERE clause, but the explain plan still shows a FULL TABLE SCAN.
When I run the query, it takes a very long time to execute (more than 1 day) and each time I have to kill the session.
Please, please help me optimize this.
Thanks,
Sudhindra
Check the instructions again; you're leaving out information we need in order to help you, like optimizer information.
- Post your exact database version, that is: the result of select * from v$version;
- Don't use TOAD's execution plan, but use
SQL> explain plan for <your_query>;
SQL> select * from table(dbms_xplan.display);
(You can execute that in TOAD as well.)
Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
It's also explained in the instruction.
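While gathering that information, one thing stands out in the posted UPDATE: the subquery mixes AND and OR without parentheses, so as written the OR applies against the entire preceding conjunction, and the date is compared as a string. A hedged sketch of what was probably intended (assuming the OR should bind only the last two conditions):

```sql
UPDATE table_a a
SET a.current_commit_date =
  (SELECT MAX(b.loading_date)
   FROM table_b b
   WHERE a.sales_order_id = b.sales_order_id
     AND a.sales_order_line_id = b.sales_order_line_id
     AND b.confirmed_qty > 0
     -- parentheses make the OR apply only to these two conditions
     AND (b.data_flag IS NULL
          OR b.schedule_line_delivery_date >= to_date('23 NOV 2008', 'dd mon yyyy')))
```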
When was the last time statistics were gathered for table_a and table_b?
You can find out by issuing the following query:
select table_name
, last_analyzed
, num_rows
from user_tables
where table_name in ('TABLE_A', 'TABLE_B');
Can you also post the results of these counts:
select count(*)
from table_b
where confirmed_qty > 0;
select count(*)
from table_b
where data_flag is null;
select count(*)
from table_b
where schedule_line_delivery_date >= /* assuming you're using a date, and not a string*/ to_date('23 NOV 2008', 'dd mon yyyy'); -
SQL query with Bind variable with slower execution plan
I have a 'normal' sql select-insert statement (not using bind variable) and it yields the following execution plan:-
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
(Cost=2 Card=135 Bytes=6480)
Statistics
0 recursive calls
18 db block gets
15558 consistent gets
47 physical reads
9896 redo size
423 bytes sent via SQL*Net to client
1095 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
I have the same query but run using a bind variable (I tested it with both Oracle Forms and SQL*Plus); it takes considerably longer with a different execution plan:-
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
Statistics
0 recursive calls
12 db block gets
3003199 consistent gets
54 physical reads
9448 redo size
423 bytes sent via SQL*Net to client
1258 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with a bind variable? I have DBA access to the database.
Regards
Ivan
Many thanks for your reply.
I have already gathered statistics for both TABLEA and TABLEB, as well as for all the indexes associated with both tables (using dbms_stats; I am on a 9i db), but not for the indexed columns.
For the tables I use:
begin
dbms_stats.gather_table_stats(ownname => 'IVAN', tabname => 'TABLEA', partname => NULL);
end;
For the indexes I use:
begin
dbms_stats.gather_index_stats(ownname => 'IVAN', indname => 'TABLEB_IDX_003', partname => NULL);
end;
Could you show me a sample of how to collect statistics for indexed columns?
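(For reference, a hedged sketch: column statistics, including histograms on the indexed columns, can be gathered through the method_opt parameter of dbms_stats.gather_table_stats, which is available on 9i.)

```sql
begin
  dbms_stats.gather_table_stats(
    ownname    => 'IVAN',
    tabname    => 'TABLEA',
    -- collect column stats/histograms for all indexed columns
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    cascade    => true  -- also gathers the associated index statistics
  );
end;
/
```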
regards
Ivan -
SQL query with parallel hint running very slow
I have a SQL query which joins three huge tables. (given below)
insert /*+ append */ into final_table (oid, rmeth, id, expdt, crddt, coupon, bitfields, processed_count)
select /*+ full(t2) parallel(t2,31) full(t3) parallel(t3,31)*/
seq_final_table.nextval, '200', t2.id, t3.end_date, '1/jul/2009',123,t2.bitfield, 0
from table1 t1, table2 t2, table3 t3 where
t1.id=t2.id and
t2.pid=t3.pid and
t2.vid=t3.vid and
t3.end_date is not null and
(trunc(t1.expiry_date) != trunc(t3.end_date) or trim(t1.expiry_date) is null);
Below are some statistics of the three tables.
Table_Name RowCount Size(MB)
table1 36469938 532
table2 242172205 39184
table3 231756758 29814
The above query ran for 30+ hours and returned with no rows inserted into final_table. I didn't get any error message either.
But when I ran the query with table1 containing just 10000 records, the query completed successfully within 20 minutes.
Can anyone please optimize the above query?
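One detail worth a second look before tuning: two predicates apply functions to the filtered columns, and trim() on a DATE forces an implicit conversion to a string. Assuming expiry_date and end_date are DATE columns, the last filter could likely be simplified to:

```sql
-- trim(t1.expiry_date) is null becomes a plain null test;
-- assumes t1.expiry_date and t3.end_date are DATE columns
and (t1.expiry_date is null
     or trunc(t1.expiry_date) != trunc(t3.end_date))
```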
Edited by: jaysara on Aug 18, 2009 11:51 PM
As a side note: you probably don't want to insert a string into a date field, do you?
Under the assumption that crddt is of datatype date:
crddt = '1/jul/2009' needs to be changed into
crddt = to_date('01/07/2009', 'dd/mm/yyyy')
This is datatype-correct and NLS-independent. -
Need help in Report From SQL Query
Hi All,
I am facing a problem with a report. I need your help.
I am creating a Report From SQL Query (Portal) with some arguments passed at runtime. I am able to view the output if the query returns few rows (around 1000 rows). But for some inputs it needs to generate >15000 records; at this point the page times out (I think!) and shows an error page. I am able to execute the query from the SQL*Plus console or using the TOAD editor; there the query does not take more than 2 minutes to show the result.
When executing from Portal I observed that once I give the appropriate input and hit the submit button, a new Oracle process is created for the query on UNIX (I am using the "top" command to check processes). The browser shows an error page after 5 minutes (I am assuming a session timeout!), but on the backend the process keeps executing for more than 30 minutes.
I also tried to increase the page timeout in httpd.conf, but to no avail.
The data returned as a result of the query is more than 10 MB. Is caching this much data possible in the browser page? Is the returned data creating any problem here?
Please help me find the appropriate reason for the failure.
Do you get any errors or warnings, or is it just the slow speed which is the issue?
There could be a variety of reasons for the delayed processing of this report, including parameter settings for that page, cache settings, network configuration, etc.
- Explore the best optimization for your query.
- Evaluate Portal for the best performance configuration; you may follow this note (Doc ID: *438794.1* ) for ideas.
- For the particular page carrying that report, use caching wisely: the browser cache is neither decent for large files nor practical. Instead, explore the page cache settings that Portal provides.
- Also look in the various log files (application.log and the Apache logs) for any warnings pointing to some kind of processing halt.
- Last but not least: if you happen to bring up a Portal report with more than 10000 rows for display, think about the usage of the report and evaluate whether that report is actually good/useful for anything.
HTH
AMN -
OutOfMemory error while executing sql query
Hello!
My program gets multiple data from the database every ten minutes and stores it in memory for hundreds of users requesting data via the web interface simultaneously.
I don't have access to change database structures, write stored procedures, etc.; I can only read from the db.
There is a table in the database with many millions of rows, and sometimes when I try to execute a SELECT on this table it takes minutes to get the result back.
To avoid waiting for the database server for a long time, I set the query timeout to 30 seconds.
If the server throws back the execution with a Query Timed Out exception, I want to 'forget' this data, and a 0 value is acceptable because a fast run is more important. So I put in the boolean broken variable to check whether there is any problem with the db server.
The size of the used memory is about 150 MB if things go well, but I set the max heap to 512 MB, just in case.
I'm logging all threads' stack traces and the free/used/allocated memory sizes every 5 seconds (threadwatching.log, 2nd appendix).
Sometimes, though not in every case (I don't know what this depends on), when I get to the next phase of refreshing the cached data (you can see it below), the process reaches the first checkpoint (marked in the code below), starts to execute the SQL query, and never reaches the second checkpoint, while used memory grows by 50-60 MB every 5 seconds (as I can see in threadwatching.log) until it reaches the max memory and throws OutOfMemoryError: Java heap space.
I'm using DbConnectionBroker for connection pooling, SQLCommandBean for handling Statements, PreparedStatements, etc., and the jTDS JDBC driver.
SQLCommandBean closes statements, result sets, etc., so these objects don't stay open.
I can't figure out what causes the memory leak; if someone has an idea, please help me.
1. Part of the cached data refreshing (DataFactory.createPCVPPMforSiemens()):
PCVElement element = new PCVElement(m, ProcessControlView.PPM);
String s = DateTime.getDate(interval.getStartDate());
boolean broken=false;
int value = 0;
for (int j = 0; j < 48; j++) {
try {
if (!broken) {
d1 = DateTime.getDate(new Date(start + ((j + 1) * 600000)));
sqlBean = new SQLCommandBean();
conn = broker.getConnection();
sqlBean.setConnection(conn);
sqlBean.setQueryTimeOut(30);
System.out.println(DateTime.getDate(new Date())+" "+m.getName()+" "+j);// first checkpoint
value = SiemensWorks.getPCVPPM(sqlBean, statId, s, d1);
System.out.println(DateTime.getDate(new Date())+" "+m.getName()+" "+j);// second checkpoint
} else value=0;
} catch (Exception ex) {
System.out.println("ERROR: DataFactory.createPCVPPMforSiemens 1 :" + ex.getMessage());
ex.printStackTrace();
value = 0;
broken=true;
} finally {
try {
broker.freeConnection(conn);
} catch (Exception ex) {}
element.getAvgValues()[j] = value;
}
}
2. SiemensWorks.getPCVPPM()
public static int getPCVPPM(SQLCommandBean sqlBean,int statID,String start,String end)
throws SQLException, UnsupportedTypeException, NoSuchColumnException {
sqlBean.setSqlValue(SiemensSQL.PCV_PPM);
Vector values=new Vector();
values.add(new StringValue(statID+""));
values.add(new StringValue(start));
values.add(new StringValue(end));
sqlBean.setValues(values);
Vector rows=sqlBean.executeQuery();
if (rows==null || rows.size()==0) return 0;
Row row=(Row)rows.firstElement();
try {
float ret=Float.parseFloat(row.getString(1));
if (ret<=0) ret=0;
return Math.round(ret);
} catch (Exception ex) {
return 0;
}
}
3. Part of Threadwatching.log
2006-10-13 16:46:56 Name: SMT Refreshing Threads
2006-10-13 16:46:56 Thread count: 4
2006-10-13 16:46:56 Active count: 4
2006-10-13 16:46:56 Active group count: 0
2006-10-13 16:46:56 Daemon: false
2006-10-13 16:46:56 Priority: 5
2006-10-13 16:46:57 Free memory: 192,228,944 bytes
2006-10-13 16:46:57 Max memory: 332,988,416 bytes
2006-10-13 16:46:57 Memory in use: 140,759,472 bytes
2006-10-13 16:46:57 ---------------------------------
2006-10-13 16:46:57 0. Name: CachedLayerTimer
2006-10-13 16:46:57 0. Id: 19
2006-10-13 16:46:57 0. Priority: 5
2006-10-13 16:46:57 0. Parent: SMT Refreshing Threads
2006-10-13 16:46:57 0. State: RUNNABLE
2006-10-13 16:46:57 0. Alive: true
2006-10-13 16:46:57 java.io.FileOutputStream.close0(Native Method)
2006-10-13 16:46:57 java.io.FileOutputStream.close(Unknown Source)
2006-10-13 16:46:57 sun.nio.cs.StreamEncoder$CharsetSE.implClose(Unknown Source)
2006-10-13 16:46:57 sun.nio.cs.StreamEncoder.close(Unknown Source)
2006-10-13 16:46:57 java.io.OutputStreamWriter.close(Unknown Source)
2006-10-13 16:46:57 xcompany.smtmonitor.chart.ChartCreator.createChart(ChartCreator.java:663)
2006-10-13 16:46:57 xcompany.smtmonitor.chart.ChartCreator.create(ChartCreator.java:441)
2006-10-13 16:46:57 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:463)
2006-10-13 16:46:57 java.util.TimerThread.mainLoop(Unknown Source)
2006-10-13 16:46:57 java.util.TimerThread.run(Unknown Source)
The software runs well until it reaches the DataFactory.createPCVPPMforSiemens function in my code ->
2006-10-13 16:47:01 Name: SMT Refreshing Threads
2006-10-13 16:47:01 Thread count: 4
2006-10-13 16:47:01 Active count: 4
2006-10-13 16:47:01 Active group count: 0
2006-10-13 16:47:01 Daemon: false
2006-10-13 16:47:01 Priority: 5
2006-10-13 16:47:02 Free memory: 189,253,304 bytes
2006-10-13 16:47:02 Max memory: 332,988,416 bytes
2006-10-13 16:47:02 Memory in use: 143,735,112 bytes
2006-10-13 16:47:02 ---------------------------------
2006-10-13 16:47:02 0. Name: CachedLayerTimer
2006-10-13 16:47:02 0. Id: 19
2006-10-13 16:47:02 0. Priority: 5
2006-10-13 16:47:02 0. Parent: SMT Refreshing Threads
2006-10-13 16:47:02 0. State: RUNNABLE
2006-10-13 16:47:02 0. Alive: true
2006-10-13 16:47:02 java.util.LinkedList$ListItr.previous(Unknown Source)
2006-10-13 16:47:02 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:174)
2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
2006-10-13 16:47:02 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
2006-10-13 16:47:02 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
2006-10-13 16:47:02 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
2006-10-13 16:47:02 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
2006-10-13 16:47:02 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
2006-10-13 16:47:02 java.util.TimerThread.mainLoop(Unknown Source)
2006-10-13 16:47:02 java.util.TimerThread.run(Unknown Source)
2006-10-13 16:47:06 Name: SMT Refreshing Threads
2006-10-13 16:47:06 Thread count: 4
2006-10-13 16:47:06 Active count: 4
2006-10-13 16:47:06 Active group count: 0
2006-10-13 16:47:06 Daemon: false
2006-10-13 16:47:06 Priority: 5
2006-10-13 16:47:08 Free memory: 127,428,192 bytes
2006-10-13 16:47:08 Max memory: 332,988,416 bytes
2006-10-13 16:47:08 Memory in use: 205,560,224 bytes
2006-10-13 16:47:08 ---------------------------------
2006-10-13 16:47:08 0. Name: CachedLayerTimer
2006-10-13 16:47:08 0. Id: 19
2006-10-13 16:47:08 0. Priority: 5
2006-10-13 16:47:08 0. Parent: SMT Refreshing Threads
2006-10-13 16:47:08 0. State: RUNNABLE
2006-10-13 16:47:08 0. Alive: true
2006-10-13 16:47:08 java.util.LinkedList$ListItr.previous(Unknown Source)
2006-10-13 16:47:08 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:174)
2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
2006-10-13 16:47:08 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
2006-10-13 16:47:08 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
2006-10-13 16:47:08 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
2006-10-13 16:47:08 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
2006-10-13 16:47:08 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
2006-10-13 16:47:08 java.util.TimerThread.mainLoop(Unknown Source)
2006-10-13 16:47:08 java.util.TimerThread.run(Unknown Source)
2006-10-13 16:47:12 Name: SMT Refreshing Threads
2006-10-13 16:47:12 Thread count: 4
2006-10-13 16:47:12 Active count: 4
2006-10-13 16:47:12 Active group count: 0
2006-10-13 16:47:12 Daemon: false
2006-10-13 16:47:12 Priority: 5
2006-10-13 16:47:15 Free memory: 66,760,208 bytes
2006-10-13 16:47:15 Max memory: 332,988,416 bytes
2006-10-13 16:47:15 Memory in use: 266,228,208 bytes
2006-10-13 16:47:15 ---------------------------------
2006-10-13 16:47:15 0. Name: CachedLayerTimer
2006-10-13 16:47:15 0. Id: 19
2006-10-13 16:47:15 0. Priority: 5
2006-10-13 16:47:15 0. Parent: SMT Refreshing Threads
2006-10-13 16:47:15 0. State: RUNNABLE
2006-10-13 16:47:15 0. Alive: true
2006-10-13 16:47:15 java.util.LinkedList.addBefore(Unknown Source)
2006-10-13 16:47:15 java.util.LinkedList.access$300(Unknown Source)
2006-10-13 16:47:15 java.util.LinkedList$ListItr.add(Unknown Source)
2006-10-13 16:47:15 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:175)
2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
2006-10-13 16:47:15 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
2006-10-13 16:47:15 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
2006-10-13 16:47:15 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
2006-10-13 16:47:15 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
2006-10-13 16:47:15 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
2006-10-13 16:47:15 java.util.TimerThread.mainLoop(Unknown Source)
2006-10-13 16:47:15 java.util.TimerThread.run(Unknown Source)
2006-10-13 16:47:17 Name: SMT Refreshing Threads
2006-10-13 16:47:17 Thread count: 4
2006-10-13 16:47:17 Active count: 4
2006-10-13 16:47:17 Active group count: 0
2006-10-13 16:47:17 Daemon: false
2006-10-13 16:47:17 Priority: 5
2006-10-13 16:47:20 Free memory: 23,232,496 bytes
2006-10-13 16:47:20 Max memory: 332,988,416 bytes
2006-10-13 16:47:20 Memory in use: 309,755,920 bytes
2006-10-13 16:47:20 ---------------------------------
2006-10-13 16:47:20 0. Name: CachedLayerTimer
2006-10-13 16:47:20 0. Id: 19
2006-10-13 16:47:20 0. Priority: 5
2006-10-13 16:47:20 0. Parent: SMT Refreshing Threads
2006-10-13 16:47:20 0. State: RUNNABLE
2006-10-13 16:47:20 0. Alive: true
2006-10-13 16:47:20 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:171)
2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
2006-10-13 16:47:20 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
2006-10-13 16:47:20 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
2006-10-13 16:47:20 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
2006-10-13 16:47:20 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
2006-10-13 16:47:20 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
2006-10-13 16:47:20 java.util.TimerThread.mainLoop(Unknown Source)
2006-10-13 16:47:20 java.util.TimerThread.run(Unknown Source)
2006-10-13 16:47:23 Name: SMT Refreshing Threads
2006-10-13 16:47:23 Thread count: 4
2006-10-13 16:47:23 Active count: 4
2006-10-13 16:47:23 Active group count: 0
2006-10-13 16:47:23 Daemon: false
2006-10-13 16:47:23 Priority: 5
2006-10-13 16:47:26 Free memory: 4,907,336 bytes
2006-10-13 16:47:26 Max memory: 332,988,416 bytes
2006-10-13 16:47:26 Memory in use: 328,083,768 bytes
2006-10-13 16:47:26 ---------------------------------
2006-10-13 16:47:26 0. Name: CachedLayerTimer
2006-10-13 16:47:26 0. Id: 19
2006-10-13 16:47:26 0. Priority: 5
2006-10-13 16:47:26 0. Parent: SMT Refreshing Threads
2006-10-13 16:47:26 0. State: RUNNABLE
2006-10-13 16:47:26 0. Alive: true
2006-10-13 16:47:26 java.util.LinkedList.addBefore(Unknown Source)
2006-10-13 16:47:26 java.util.LinkedList.access$300(Unknown Source)
2006-10-13 16:47:26 java.util.LinkedList$ListItr.add(Unknown Source)
2006-10-13 16:47:26 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:175)
2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
2006-10-13 16:47:26 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
2006-10-13 16:47:26 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
2006-10-13 16:47:26 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
2006-10-13 16:47:26 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
2006-10-13 16:47:26 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
2006-10-13 16:47:26 java.util.TimerThread.mainLoop(Unknown Source)
2006-10-13 16:47:26 java.util.TimerThread.run(Unknown Source)
2006-10-13 16:47:35 Name: SMT Refreshing Threads
2006-10-13 16:47:37 Thread count: 4
2006-10-13 16:47:38 Active count: 4
2006-10-13 16:47:38 Active group count: 0
2006-10-13 16:47:38 Daemon: false
2006-10-13 16:47:38 Priority: 5
2006-10-13 16:47:42 Free memory: 35,316,120 bytes
2006-10-13 16:47:42 Max memory: 332,988,416 bytes
2006-10-13 16:47:42 Memory in use: 297,672,296 bytes
2006-10-13 16:47:42 ---------------------------------
2006-10-13 16:47:42 0. Name: CachedLayerTimer
2006-10-13 16:47:42 0. Id: 19
2006-10-13 16:47:42 0. Priority: 5
2006-10-13 16:47:42 0. Parent: SMT Refreshing Threads
2006-10-13 16:47:42 0. State: TIMED_WAITING
2006-10-13 16:47:42 0. Alive: true
2006-10-13 16:47:42 java.lang.Object.wait(Native Method)
2006-10-13 16:47:42 java.util.TimerThread.mainLoop(Unknown Source)
2006-10-13 16:47:42 java.util.TimerThread.run(Unknown Source)
4. Tomcat default logging file:
2006-10-13 16:47:36 ERROR CachedLayerRefreshenerTask: external error: Java heap space
5. DbConnectionBroker (connection pooling) logging file:
Handing out connection 1 --> 10/13/2006 04:47:01 PM
Handing out connection 0 --> 10/13/2006 04:47:01 PM
Handing out connection 1 --> 10/13/2006 04:47:01 PM
Handing out connection 0 --> 10/13/2006 04:47:02 PM
Warning. Connection 0 in use for 3141 ms
Warning. Connection 0 in use for 24891 ms
----> Error: Could not free connection!!!
I would appreciate any help.
What does your query bring back from this table?
This is the query:
SELECT case sum(c.picked) when 0 then 0 else
((sum(c.picked)-(sum(c.picked)-(sum(c.vacuum)+sum(c.ident))))
*cast((1000000/cast(sum(c.picked) as float)) as bigint)) end as PPM
FROM sip_comp c
LEFT JOIN sip_pcb pc ON pc.id=c.pcbid
LEFT JOIN sip_period p on p.id=pc.periodid
WHERE p.stationid=? AND pc.time BETWEEN ? AND ?
Has anybody who knows SQL tried EXPLAIN PLAN to optimize this query? You're joining on a table with a million rows and you're wondering why the performance is poor?
What is the index situation with these tables?
> When I execute it from the query manager, it takes from 1 to 60 seconds depending on server availability.
So how will that be any different for JDBC and Java?
> You're right. That's why I am here.
What I mean by that is we can't read minds, either. You need to get some hard data to tell you where the bottleneck is. Asking at a forum won't help.
But tell me, if the Java process enters this query execution and doesn't return until the OOM is thrown, how can the problem be in caching?
I was guessing about caching, because I didn't know what the query was.
You expect a lot.
> No.
Then how do you ever expect to solve this?
I tried YourKit Profiler at home, where I'm developing the software, but this OOM is never thrown here, even though I have the same database size.
Then you aren't replicating the problem. You have to run it on the system that has the problem if you're going to solve it.
YourKit isn't an industry leader. How well do you know how to use it?
It just happened at the company where the system runs, and I cannot run the profiler there because it dramatically slows the PC where my Tomcat runs.
You have to run something to figure out what the problem is. What about Log4J, some trace logging statements and a batch job to harvest the log?
Bottom line: you've got to be a scientist and get some real data. We can theorize all we want here, but that won't get you to a solution.
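For the query itself, one concrete observation, offered as a sketch under my reading of the posted schema rather than a tested fix: the WHERE clause filters on p.stationid and pc.time, which rejects all NULL-extended rows, so the LEFT JOINs are effectively inner joins; and the arithmetic simplifies, since picked - (picked - (vacuum + ident)) = vacuum + ident:

```sql
-- Equivalent form of the PPM query (table/column names taken from the post).
-- The WHERE predicates on p and pc already discard NULL-extended rows,
-- so plain inner joins express the same result and give the optimizer
-- more join-order freedom.
SELECT CASE SUM(c.picked) WHEN 0 THEN 0
       ELSE (SUM(c.vacuum) + SUM(c.ident))
            * CAST(1000000 / CAST(SUM(c.picked) AS float) AS bigint)
       END AS PPM
FROM sip_comp c
JOIN sip_pcb pc ON pc.id = c.pcbid
JOIN sip_period p ON p.id = pc.periodid
WHERE p.stationid = ? AND pc.time BETWEEN ? AND ?

-- Hypothetical supporting index for the range predicate:
CREATE INDEX ix_sip_pcb_period_time ON sip_pcb (periodid, time)
```

Whether this helps depends on the existing indexes, which is exactly why the EXPLAIN PLAN question above matters.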
-
Sql query slowness due to rank and columns with null values:
I have the following table in database with around 10 millions records:
Declaration:
create table PropertyOwners (
[Key] int not null primary key,
PropertyKey int not null,
BoughtDate DateTime,
OwnerKey int null,
GroupKey int null
)
go
[Key] is the primary key, and the combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
With the following index:
CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners]
(
[PropertyKey] ASC,
[BoughtDate] DESC,
[OwnerKey] DESC,
[GroupKey] DESC
)
go
Description of the case:
For a single BoughtDate one property can belong to multiple owners or to a single group. Each record has either OwnerKey or GroupKey but not both, so one of them is null in every record. I am trying to retrieve the data from the table using the
following query for the OwnerKey. If there are rows for both owners and a group for the same property at the same time, the rows having OwnerKey will be preferred; that is why I am using "OwnerKey desc" in the RANK function.
declare @ownerKey int = 40000
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from (
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
from PropertyOwners
) as result
where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
It is taking 2-3 seconds to get the records, which is too slow; it takes a similar time when I try to get the records using the GroupKey. But when I tried to get the records for the PropertyKey with the same query, it executed in 10 milliseconds.
Maybe the slowness is because OwnerKey/GroupKey can be null and SQL Server is unable to index them. I have also tried an indexed view to pre-rank the rows, but I can't use it in my query as the RANK function is not supported in indexed views.
Please note this table is updated once a day, using SQL Server 2008 R2. Any help will be greatly appreciated.
create table #result (PropertyKey int not null, BoughtDate datetime, OwnerKey int null, GroupKey int null, [Rank] int not null)
go
create index idx on #result (OwnerKey, [Rank])
go
insert into #result(PropertyKey, BoughtDate, OwnerKey, GroupKey, [Rank])
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
from PropertyOwners
go
declare @ownerKey int = 1
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from #result as result
where result.[Rank]=1
and result.[OwnerKey]=@ownerKey
go
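A further thought, offered as a sketch only (filtered indexes exist in SQL Server 2008, but I have not tested this against your data): since each row carries either OwnerKey or GroupKey and never both, a filtered index that excludes the NULL majority keeps the owner lookup narrow:

```sql
-- Hypothetical filtered index: only rows with a real OwnerKey are stored,
-- so a seek on @ownerKey never wades through the NULL rows.
CREATE NONCLUSTERED INDEX IX_PropertyOwners_OwnerKey
ON dbo.PropertyOwners (OwnerKey, PropertyKey, BoughtDate DESC)
WHERE OwnerKey IS NOT NULL
go
```

The RANK over all rows per PropertyKey still has to be computed, so combining this with the pre-ranked #result table above may work best.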
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
-
Hi,
We are facing a database performance issue while running overnight batches.
I generated tkprof output for that batch and found some SQL statements with high elapsed times. Could anyone please tell me what the issue is? It would also be a great help if anyone could suggest how to tune these SQL statements to get better response times.
Waiting for your reply.
Affected SQL list:
INSERT INTO INVTRNEE (TRANS_SESSION, TRANS_SEQUENCE, TRANS_ORG_CHILD,
TRANS_PRD_CHILD, TRANS_TRN_CODE, TRANS_TYPE_CODE, TRANS_DATE, INV_MRPT_CODE,
INV_DRPT_CODE, TRANS_CURR_CODE, PROC_SOURCE, TRANS_REF, TRANS_REF2,
TRANS_QTY, TRANS_RETL, TRANS_COST, TRANS_VAT, TRANS_POS_EXT_TOTAL,
INNER_PK_TECH_KEY, TRANS_INNERS, TRANS_EACHES, TRANS_UOM, TRANS_WEIGHT,
TRANS_WEIGHT_UOM )
VALUES
(:B22 , :B1 , :B2 , :B3 , :B4 , :B5 , :B21 , :B6 , :B7 , :B8 , :B20 , :B19 ,
NULL, :B9 , :B10 , :B11 , 0.0, :B12 , :B13 , :B14 , :B15 , :B16 , :B17 ,
:B18 )
call count cpu elapsed disk query current rows
Parse 722 0.09 0.04 0 0 0 0
Execute 1060 7.96 83.01 11442 21598 88401 149973
Fetch 0 0.00 0.00 0 0 0 0
total 1782 8.05 83.06 11442 21598 88401 149973
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
UPDATE /*+ ROWID(TRFDTLEE) */TRFDTLEE SET TRF_STATUS = :B2
WHERE
ROWID = :B1
call count cpu elapsed disk query current rows
Parse 635 0.03 0.01 0 0 0 0
Execute 49902 14.48 271.25 41803 80704 355837 49902
Fetch 0 0.00 0.00 0 0 0 0
total 50537 14.51 271.27 41803 80704 355837 49902
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
DECLARE
var_trans_session invtrnee.trans_session%TYPE;
BEGIN
-- ADDED BY SHANKAR ON 08/29/97
-- GET THE NEXT AVAILABLE TRANS_SESSION
bastkey('trans_session',0,var_trans_session,'T');
-- MAS001
uk_trfbapuo_auto(var_trans_session,'UPLOAD','T',300);
-- MAS001 end
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 24191.23 24028.57 8172196 10533885 187888 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 24191.23 24028.57 8172196 10533885 187888 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
SELECT INNER_PK_TECH_KEY
FROM
PRDPCDEE WHERE PRD_LVL_CHILD = :B1 AND LOOSE_PACK_FLAG = 'T'
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 56081 1.90 2.03 0 0 0 0
Fetch 56081 11.07 458.58 53792 246017 0 56081
total 112163 12.98 460.61 53792 246017 0 56081
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
******************
First off, be aware of the assumptions I'm making. The SQL you presented above strongly suggests (to me at least) that you have cursor FOR loops. If that's the case, you need to review what their purpose is and look to convert them into single-statement DML commands. For example, if you have something like this:
DECLARE
ln_Count NUMBER;
ln_SomeValue NUMBER;
BEGIN
FOR lcr_Row IN ( SELECT pk_id,col1,col2 FROM some_table)
LOOP
SELECT
COUNT(*)
INTO
ln_COunt
FROM
target_table
WHERE
pk_id = lcr_Row.pk_id;
SELECT
some_value
INTO
ln_SomeValue
FROM
some_other_table
WHERE
pk_id = lcr_Row.col1;
IF ln_Count = 0 THEN
INSERT
INTO
target_table
( pk_id,
some_other_value,
col2
)
VALUES
( lcr_Row.col1,
ln_SomeValue,
lcr_Row.col2
);
ELSE
UPDATE
target_table
SET
some_other_value = ln_SomeValue
WHERE
pk_id = lcr_Row.col1;
END IF;
END LOOP;
END;

it could be rewritten as:
DECLARE
BEGIN
MERGE INTO target_table b
USING ( SELECT
a.pk_id,
a.col2,
b.some_value
FROM
some_table a,
some_other_table b
WHERE
b.pk_id = a.col1
) e
ON (b.pk_id = e.pk_id)
WHEN MATCHED THEN
UPDATE SET b.some_other_value = e.some_value
WHEN NOT MATCHED THEN
INSERT ( b.pk_id,
b.col2,
b.some_other_value)
VALUES ( e.pk_id,
e.col2,
e.some_value);
END;

It's going to take a bit of analysis and work, but the fastest and most scalable way to process data is to use SQL rather than PL/SQL. PL/SQL data processing, i.e. cursor loops, should be an option of last resort.
HTH
David -
In a SQL query which has a join, how to reduce multiple instances of a table
Here is an example. I am using Oracle 9i.
Is there a way to reduce the number of PERSON instances in the following query? Or can I optimize this query further?
TABLES:
mail_table
mail_id, from_person_id, to_person_id, cc_person_id, subject, body
person_table
person_id, name, email
QUERY:
SELECT p_from.name from_name, p_to.name to_name, p_cc.name cc_name, subject
FROM mail, person p_from, person p_to, person p_cc
WHERE from_person_id = p_from.person_id
AND to_person_id = p_to.person_id
AND cc_person_id = p_cc.person_id
Thanks in advance,
Babu.

SQL> select * from mail;
ID F T CC
1 1 2 3
SQL> select * from person;
PID NAME
1 a
2 b
3 c
--Query with only one instance of the PERSON table
SQL> select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
2 max(decode(m.t,p.pid,p.name)) to_name,
3 max(decode(m.cc,p.pid,p.name)) cc_name
4 from mail m,person p
5 where m.f = p.pid
6 or m.t = p.pid
7 or m.cc = p.pid
8 group by m.id;
ID FRM_NAME TO_NAME CC_NAME
1 a b c
--Explain plan for the "one instance" query
SQL> explain plan for
2 select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
3 max(decode(m.t,p.pid,p.name)) to_name,
4 max(decode(m.cc,p.pid,p.name)) cc_name
5 from mail m,person p
6 where m.f = p.pid
7 or m.t = p.pid
8 or m.cc = p.pid
9 group by m.id;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 902563036
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 216 | 7 (15)| 00:00:01 |
| 1 | HASH GROUP BY | | 3 | 216 | 7 (15)| 00:00:01 |
| 2 | NESTED LOOPS | | 3 | 216 | 6 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| MAIL | 1 | 52 | 3 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| PERSON | 3 | 60 | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
4 - filter("M"."F"="P"."PID" OR "M"."T"="P"."PID" OR
"M"."CC"="P"."PID")
Note
- dynamic sampling used for this statement
--Explain plan for "Normal" query
SQL> explain plan for
2 select m.id,pf.name fname,pt.name tname,pcc.name ccname
3 from mail m,person pf,person pt,person pcc
4 where m.f = pf.pid
5 and m.t = pt.pid
6 and m.cc = pcc.pid;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4145845855
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 112 | 14 (15)| 00:00:01 |
|* 1 | HASH JOIN | | 1 | 112 | 14 (15)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 92 | 10 (10)| 00:00:01 |
|* 3 | HASH JOIN | | 1 | 72 | 7 (15)| 00:00:01 |
| 4 | TABLE ACCESS FULL| MAIL | 1 | 52 | 3 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL| PERSON | 3 | 60 | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
| 6 | TABLE ACCESS FULL | PERSON | 3 | 60 | 3 (0)| 00:00:01 |
| 7 | TABLE ACCESS FULL | PERSON | 3 | 60 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("M"."CC"="PCC"."PID")
2 - access("M"."T"="PT"."PID")
3 - access("M"."F"="PF"."PID")
PLAN_TABLE_OUTPUT
Note
- dynamic sampling used for this statement
25 rows selected.
Message was edited by:
jeneesh
No indexes created... -
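For completeness, a third shape worth testing (a sketch only; which wins depends on row counts and on an index on person.pid, which the poster notes was not created here) is scalar subqueries, one probe per name column:

```sql
-- Scalar-subquery form: PERSON is probed once per name column,
-- typically via an index on pid, instead of being joined three times.
select m.id,
       (select p.name from person p where p.pid = m.f)  as frm_name,
       (select p.name from person p where p.pid = m.t)  as to_name,
       (select p.name from person p where p.pid = m.cc) as cc_name
from mail m;
```

With few rows in MAIL and an indexed PERSON, this often reads very little; with large result sets, the single-scan DECODE approach above tends to win.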
Can users see the query plan of a SQL query in Oracle?
Hi,
I wonder, for a given SQL query, after the optimizer has processed it, can I see the query plan in Oracle? If yes, how do I do that? Thank you.
Xing

You can use EXPLAIN PLAN in SQL*Plus:
SQL> explain plan for select * from user_tables;
Explained.
Elapsed: 00:00:01.63
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
Plan hash value: 806004009
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2014 | 1123K| 507 (6)| 00:00:07 |
|* 1 | HASH JOIN RIGHT OUTER | | 2014 | 1123K| 507 (6)| 00:00:07 |
| 2 | TABLE ACCESS FULL | SEG$ | 4809 | 206K| 34 (3)| 00:00:01 |
|* 3 | HASH JOIN RIGHT OUTER | | 1697 | 873K| 472 (6)| 00:00:06 |
| 4 | TABLE ACCESS FULL | USER$ | 74 | 1036 | 3 (0)| 00:00:01 |
|* 5 | HASH JOIN OUTER | | 1697 | 850K| 468 (6)| 00:00:06 |
| 6 | NESTED LOOPS OUTER | | 1697 | 836K| 315 (6)| 00:00:04 |
|* 7 | HASH JOIN | | 1697 | 787K| 226 (8)| 00:00:03 |
| 8 | TABLE ACCESS FULL | TS$ | 13 | 221 | 5 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1697 | 759K| 221 (8)| 00:00:03 |
| 10 | MERGE JOIN CARTESIAN | | 1697 | 599K| 162 (10)| 00:00:02 |
|* 11 | HASH JOIN | | 1 | 326 | 1 (100)| 00:00:01 |
|* 12 | FIXED TABLE FULL | X$KSPPI | 1 | 55 | 0 (0)| 00:00:01 |
| 13 | FIXED TABLE FULL | X$KSPPCV | 100 | 27100 | 0 (0)| 00:00:01 |
| 14 | BUFFER SORT | | 1697 | 61092 | 162 (10)| 00:00:02 |
|* 15 | TABLE ACCESS FULL | OBJ$ | 1697 | 61092 | 161 (10)| 00:00:02 |
|* 16 | TABLE ACCESS CLUSTER | TAB$ | 1 | 96 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | I_OBJ# | 1 | | 0 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID| OBJ$ | 1 | 30 | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | I_OBJ1 | 1 | | 0 (0)| 00:00:01 |
| 20 | TABLE ACCESS FULL | OBJ$ | 52728 | 411K| 151 (4)| 00:00:02 |
Predicate Information (identified by operation id):
1 - access("T"."FILE#"="S"."FILE#"(+) AND "T"."BLOCK#"="S"."BLOCK#"(+) AND
"T"."TS#"="S"."TS#"(+))
3 - access("CX"."OWNER#"="CU"."USER#"(+))
5 - access("T"."DATAOBJ#"="CX"."OBJ#"(+))
7 - access("T"."TS#"="TS"."TS#")
11 - access("KSPPI"."INDX"="KSPPCV"."INDX")
12 - filter("KSPPI"."KSPPINM"='_dml_monitoring_enabled')
15 - filter("O"."OWNER#"=USERENV('SCHEMAID') AND BITAND("O"."FLAGS",128)=0)
16 - filter(BITAND("T"."PROPERTY",1)=0)
17 - access("O"."OBJ#"="T"."OBJ#")
19 - access("T"."BOBJ#"="CO"."OBJ#"(+))
42 rows selected.
Elapsed: 00:00:03.61
SQL>

If your plan table does not exist, execute the script $ORACLE_HOME/RDBMS/ADMIN/utlxplan.sql to create the table.
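As an aside, and only if you are on 10g or later (the question does not mention a version): EXPLAIN PLAN shows a predicted plan, while DBMS_XPLAN.DISPLAY_CURSOR reports the plan that was actually executed:

```sql
-- Run the statement first, then in the same session
-- (NULL, NULL = last statement executed by this session):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
```

The two can differ when bind peeking or stale statistics steer the optimizer at run time, so the runtime plan is the more trustworthy one when diagnosing a slow query.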