Perform a loop in SQL
Hello, I am trying to test one of my PL/SQL procedures by writing a single query that does exactly what the procedure does. The problem is that the procedure contains a loop that looks like this:
FOR i IN 1..10
LOOP
    LS := LS + 1;
END LOOP;
Does anyone know how I can replicate this loop in a query?
The only solution I came up with was writing 10 levels of nested queries, where the innermost query calculates LS, the next level calculates LS + 1, and so on and so forth. But this method is quite slow. Any other ideas?
Thanks ahead of time!
Tin
"I am trying to test one of my PL/SQL procedures by trying to write one single query that should do exactly what the procedure does"
Forgive me, but that seems a bizarre way of tackling things. Apart from anything else, we should only be using PL/SQL for things we cannot do in SQL. So if you could write the query, maybe you don't need the procedure...
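That said, for a simple counted loop like the one in the question, a SQL row generator can often stand in for the loop entirely. A minimal sketch (assuming Oracle, and assuming LS starts at 0 - neither is stated in the question):

```sql
-- One generated row per loop iteration; adding 1 ten times becomes COUNT(*)
SELECT 0 + COUNT(*) AS ls   -- 0 is the assumed starting value of LS
FROM dual
CONNECT BY level <= 10;     -- plays the role of FOR i IN 1..10
```

More elaborate per-iteration arithmetic can usually be expressed the same way, as an aggregate over the generated rows.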
Anyway, a better test would be to make an assertion about what the PL/SQL actually does and exercise the code. For instance, let's say you're inserting a record with LS as the primary key.
DECLARE
dummy varchar2(10);
BEGIN
call_your_proc;
SELECT null INTO dummy
FROM some_table
WHERE pk = 10;
EXCEPTION
WHEN no_data_found THEN
RAISE_APPLICATION_ERROR(-20000, 'The test failed!');
END;
/
When you get tired of typing out stuff like this, you are ready to investigate utPLSQL, an automated unit-testing harness for PL/SQL developed by Steven Feuerstein (Whom God Preserve).
Cheers, APC
Similar Messages
-
How to avoid performance problems in PL/SQL?
As per my knowledge, below are some points to avoid performance problems in PL/SQL.
Are there other points to avoid performance problems?
1. Use FORALL instead of FOR loops, and use BULK COLLECT to avoid looping many times.
2. EXECUTE IMMEDIATE is faster than DBMS_SQL.
3. Use NOCOPY for OUT and IN OUT parameters if the original value need not be retained; the overhead of keeping a copy of the OUT value is avoided.
Susil Kumar Nagarajan wrote:
"1. Group a number of functions or procedures into a PACKAGE"
Putting related functions and procedures into packages is useful from a code-organization standpoint. It has nothing whatsoever to do with performance.
"2. It is good to use collections in place of cursors that do DML operations on large sets of records"
But using SQL directly is more efficient than using PL/SQL with bulk collects.
"4. Optimize SQL statements if they need it
-> Avoid using IN, NOT IN conditions or those that cause full table scans in queries"
That is not true.
"-> See that queries use indexes properly; sometimes the leading index column is missed out, which causes performance overhead"
"Properly" has to allow for the fact that it is entirely possible for a table scan to be more efficient than using an index.
"5. Use Oracle HINTS if the query can't be tuned further; hints can help you considerably"
Hints should be used only as a last resort. It is almost certainly the case that if you can use a hint to force a particular plan that improves performance, then there is some problem in the underlying statistics that should be fixed, in order to resolve issues with many queries rather than just the one you're looking at.
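For what it's worth, the FORALL / BULK COLLECT advice in point 1 looks like this in practice (a sketch - the table and column names are made up):

```sql
DECLARE
    TYPE id_tab IS TABLE OF source_t.id%TYPE;
    l_ids id_tab;
BEGIN
    -- one round trip to fetch the driving rows, instead of a row-by-row cursor
    SELECT id BULK COLLECT INTO l_ids
    FROM source_t;

    -- one context switch to the SQL engine for the whole batch
    FORALL i IN 1 .. l_ids.COUNT
        UPDATE target_t
        SET processed = 'Y'
        WHERE id = l_ids(i);
END;
/
```

Though, as the replies here point out, a single SQL statement doing the same work is usually faster still.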
Justin -
Ability to perform ALTER SESSION SET SQL TRACE but not all alter clauses
I see that in order to run the ALTER SESSION SET SQL_TRACE command, the user must be explicitly granted the ALTER SESSION privilege, as the CREATE SESSION privilege alone is not enough. Is there a way to grant the ability to perform ALTER SESSION SET SQL_TRACE but not the other clauses, such as GUARD, PARALLEL, and RESUMABLE?
Thanks
Sathya
If you are using Oracle 10g or above, you can use the DBMS_SESSION.session_trace_enable procedure;
it doesn't require the ALTER SESSION system privilege.
Simple example:
SQL> connect test/test@//192.168.1.2:1521/xe
Connected.
SQL> alter session set tracefile_identifier='my_id';
Session altered.
SQL> alter session set sql_trace = true
2 ;
alter session set sql_trace = true
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> execute dbms_session.session_trace_enable;
PL/SQL procedure successfully completed.
SQL> select * from user_sys_privs;
USERNAME PRIVILEGE ADM
TEST CREATE PROCEDURE NO
TEST CREATE TABLE NO
TEST CREATE SEQUENCE NO
TEST CREATE TRIGGER NO
TEST SELECT ANY DICTIONARY NO
TEST CREATE SYNONYM NO
TEST UNLIMITED TABLESPACE NO
7 rows selected.
SQL> execute dbms_session.session_trace_disable;
PL/SQL procedure successfully completed.
SQL> disconnect
Disconnected from Oracle Database 10g Release 10.2.0.1.0 - Production
And here is the result from tkprof:
TKPROF: Release 10.2.0.1.0 - Production on Sat Oct 23 00:53:07 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: xe_ora_176_my_id.trc
( ---- cut ---- )
select *
from
user_sys_privs
call count cpu elapsed disk query current rows
Parse 1 0.08 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.01 0.01 0 15 0 7
total 4 0.09 0.09 0 15 0 7
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61
Rows Row Source Operation
7 HASH GROUP BY (cr=15 pr=0 pw=0 time=11494 us)
7 CONCATENATION (cr=15 pr=0 pw=0 time=4913 us)
0 MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=1169 us)
0 NESTED LOOPS (cr=4 pr=0 pw=0 time=793 us)
0 TABLE ACCESS FULL SYSAUTH$ (cr=4 pr=0 pw=0 time=592 us)
0 INDEX RANGE SCAN I_SYSTEM_PRIVILEGE_MAP (cr=0 pr=0 pw=0 time=0 us)(object id 312)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us)
7 NESTED LOOPS (cr=11 pr=0 pw=0 time=3429 us)
9 HASH JOIN (cr=9 pr=0 pw=0 time=2705 us)
9 TABLE ACCESS FULL SYSAUTH$ (cr=4 pr=0 pw=0 time=512 us)
63 TABLE ACCESS FULL USER$ (cr=5 pr=0 pw=0 time=914 us)
7 INDEX RANGE SCAN I_SYSTEM_PRIVILEGE_MAP (cr=2 pr=0 pw=0 time=510 us)(object id 312)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 20.64 20.65
BEGIN dbms_session.session_trace_disable; END;
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.00 0 0 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 -
Improve the Performance of Loops
Has anyone read "Improve the Performance of Loops" on http://archive.devx.com/free/tips/tipview.asp?content_id=3945 ?
If so, would you agree that what's written there is absolute b.....t?
He claims that decreasing the counter improves the performance and tries to prove it with the program:
long startTime = System.currentTimeMillis();
for (int i = 0, n = Integer.MAX_VALUE; i < n; i++) {
    a = -a;
}
// is slower than
long midTime = System.currentTimeMillis();
for (int i = Integer.MAX_VALUE - 1; i >= 0; i--) {
    a = -a;
}
The result is pretty impressive:
Increasing Loop:4891
Decreasing Loop:3781
The only stupid thing is that:
1. if you run it more times you get
Increasing Loop:4891
Decreasing Loop:3781
Increasing Loop:3782
Decreasing Loop:3796
Increasing Loop:3891
Decreasing Loop:3891
Increasing Loop:3828
Decreasing Loop:3937
Increasing Loop:3891
Decreasing Loop:3906
Increasing Loop:3860
Decreasing Loop:3937
Increasing Loop:3891
Decreasing Loop:3906
So you can see that the performance is worse for decreasing loops after HotSpot has warmed up.
2. If you run it with -server, you'll even get:
Increasing Loop:16
Decreasing Loop:0
Increasing Loop:0
Decreasing Loop:0
Increasing Loop:0
Decreasing Loop:0
Increasing Loop:0
Decreasing Loop:0
Increasing Loop:0
Decreasing Loop:0
Increasing Loop:0
Decreasing Loop:0
Increasing Loop:0
Decreasing Loop:0
This shows that HotSpot server is much cleverer than some programmers.
Even if you change the code to do something a bit better, like
public TimeLoop() {
    int a = 2, b = 2;
    long startTime = System.currentTimeMillis();
    for (int i = 0, n = Integer.MAX_VALUE; i < n; i++) {
        a ^= i;
    }
    long midTime = System.currentTimeMillis();
    for (int i = Integer.MAX_VALUE - 1; i >= 0; i--) {
        a ^= i;
    }
    long endTime = System.currentTimeMillis();
    System.out.println("Increasing Loop:" + (midTime - startTime));
    System.out.println("Decreasing Loop:" + (endTime - midTime));
    System.out.println("a=" + a + " b=" + b); // HotSpot must perform _some_ kind of calculation to print this
}
you'll find that it doesn't really matter whether you're XORing in increasing or decreasing order.
For -client:
Increasing Loop:296
Decreasing Loop:297
a=2 b=2
Increasing Loop:297
Decreasing Loop:281
a=2 b=2
Increasing Loop:297
Decreasing Loop:297
a=2 b=2
For -server:
Increasing Loop:141
Decreasing Loop:156
a=2 b=2
Increasing Loop:141
Decreasing Loop:141
a=2 b=2
Increasing Loop:140
Decreasing Loop:156
a=2 b=2
(Last three runs for each).
And I don't believe that accessing array.length is slower than storing the length in an int and comparing against that int!
Please let's just stop posting silly performance tuning tips!
Well, you can always look at the bytecode produced. I wrote two little classes:
public class t {
    public static void main(String[] args) {
        int a = 0;
        for (int i = 0, n = Integer.MAX_VALUE; i < n; i++) { a = -a; }
    }
}
and
public class t1 {
    public static void main(String[] args) {
        int a = 0;
        for (int i = Integer.MAX_VALUE - 1; i >= 0; i--) { a = -a; }
    }
}
And here's the bytecode for their main() methods. (Extra/different bytecodes in "t" are marked):
t: (incrementing)
Method void main(java.lang.String[])
0 iconst_0
1 istore_1
==>2 iconst_0
3 istore_2
4 ldc #2 <Integer 2147483647>
6 istore_3
7 goto 16
10 iload_1
11 ineg
12 istore_1
13 iinc 2 1
16 iload_2
==>17 iload_3
==>18 if_icmplt 10
21 return
t1: (decrementing)
Method void main(java.lang.String[])
0 iconst_0
1 istore_1
2 ldc #2 <Integer 2147483646>
4 istore_2
5 goto 14
8 iload_1
9 ineg
10 istore_1
11 iinc 2 -1
14 iload_2
15 ifge 8
18 return The decrementing code does use fewer bytecodes to do its thing.
However, as someone pointed out - once Hotspot gets involved, all bets are off. And as someone else pointed out, if the body of the loop does nearly anything at all, the 2-bytecode-difference is going to get completely swamped.
In general, this is the kind of micro-optimizing that I'd ignore completely...
Grant -
I need one recursive (unended loop) PL/SQL example, it's very urgent, please
Hi,
I need one recursive (unended loop) PL/SQL example, it's very urgent, please.
Thanks,
Sathis.
I suppose you'll want to know how to get out of your unended loop too (although that does stop it being unended).
Example...
SQL> ed
Wrote file afiedt.buf
1 DECLARE
2 v_cnt NUMBER := 0;
3 BEGIN
4 LOOP
5 EXIT WHEN v_cnt = 1000;
6 v_cnt := v_cnt + 1;
7 END LOOP;
8* END;
SQL> /
PL/SQL procedure successfully completed.
SQL> -
Attempted to perform an unauthorized operation sql server 2008 ( Windows 7 )
Dear All,
I am facing the following error during SQL Server 2008 setup: "Attempted to perform an unauthorized operation" (SQL Server 2008, Windows 7).
I tried running setup as administrator.
I tried creating a new user as an administrator, logging in as that new user, and then running setup.
I tried running setup in compatibility mode for Windows XP SP3.
I tried logging in to the Administrator account and running setup.
Every time the same error came up, and I am very tired of it. Because of this I even formatted my PC and installed Windows 7 again, but it is still the same. Please help me.
Thanks in advance
Regards
Uday
[email protected]
Hi,
Can you locate the summary.txt and details.txt files in
C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log
Upload these two files to some shared location like Dropbox or OneDrive and post the link here; I will analyze them and see where the problem is actually coming from.
This link should also help you in searching the log files.
PS: Don't reply to the duplicate thread you posted.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Performance issue with pl/sql code
Hi Oracle Gurus,
I am in need of your recommendations for a performance issue that I am facing in a production environment. There is a PL/SQL procedure that executes with a different elapsed time on different executions. Elapsed times are 30 minutes, 40 minutes, 65 minutes, 3 minutes, 3 seconds.
The expected elapsed time is a maximum of 3 minutes. (But sometimes it took only 3 seconds!)
The output of all the different executions is the same, that is, deletion and insertion of 12K records into a table.
Here is the auto trace details of two different scenarios.
Slow execution - 33.65 minutes
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 1,712,343 1,712,342.6 41.4
CPU Time (ms) 1,679,689 1,679,688.6 44.7
Executions 1 N/A N/A
Buffer Gets ########## 167,257,973.0 86.9
Disk Reads 1,284 1,284.0 0.4
Parse Calls 1 1.0 0.0
User I/O Wait Time (ms) 4,264 N/A N/A
Cluster Wait Time (ms) 3,468 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 6 N/A N/A
Invalidations 0 N/A N/A
Version Count 4 N/A N/A
Sharable Mem(KB) 85 N/A N/A
-------------------------------------------------------------
Fast execution: 5 seconds
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 41,550 41,550.3 0.7
CPU Time (ms) 40,776 40,776.3 1.0
Executions 1 N/A N/A
Buffer Gets 2,995,677 2,995,677.0 4.2
Disk Reads 22 22.0 0.0
Parse Calls 1 1.0 0.0
User I/O Wait Time (ms) 162 N/A N/A
Cluster Wait Time (ms) 621 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 55 N/A N/A
Invalidations 0 N/A N/A
Version Count 4 N/A N/A
Sharable Mem(KB) 85 N/A N/A
-------------------------------------------------------------
For security reasons, I cannot share the actual code. It is report-generating code that deletes and loads the data into a table using an INSERT INTO ... SELECT statement.
Delete from table;
cursor X to get the master data (98 records)
For each X loop
    insert into tableA select * from tables where a = X.a and b = X.b and c = X.c ...;
    -- 12K records inserted on average
    insert into tableB select * from tables where a = X.a and b = X.b and c = X.c ...;
    -- 12K records inserted on average
end loop;
1. The select query is complex, with bind variables (the explain plan varies for different values).
2. I have checked the tablespace of the tables involved, it is 82% used. DBA confirmed that it is not the reason.
3. Disk reads are high during long execution.
4. At long running times, I can see a db sequential read wait event on a index object. This index is on the table where data is inserted.
All I need to find out is why this code takes 3 seconds or 60 minutes on the same day, on consecutive executions.
Is there any other approach to find the root cause of this behaviour and to fix it ? Kindly adivse.
Thanks in advance for your help.
Regards,
Hari
Edited by: BluShadow on 26-Sep-2012 08:24
edited to add {noformat}{noformat} tags. You've been a member long enough to know to do this yourself... so please do so in future. ({message:id=9360002})
Hariharan ST wrote:
"Hi Oracle Gurus,
I am in need of your recommendations for a performance issue that I am facing in a production environment. There is a PL/SQL procedure which executes with a different elapsed time on different executions."
Please re-edit your post and add some code tags around the trace information. This would improve readability greatly and will help us to help you.
example
{code}
select * from dual;
{code}
Based upon your description I can imagine two things.
a) The execution plan for the select query does change frequently.
A typical reason can be not up to date statistics.
b) Some locking / wait conflict. For example upon a UK index.
Are there any other operations going on while it is slow? If anybody inserts a value, then your session will wait, if the same (PK/UK) value also is to be inserted.
Those wait events can be recognized using standard tools like oracle sql developer or enterprise manager while the query is slow.
Also go through the links in the FAQ. They tell you how to get better information for making a tuning request.
SQL and PL/SQL FAQ
Edited by: Sven W. on Sep 25, 2012 6:41 PM -
Looping through SQL statements in shell script
Hello members,
I'm working on the Solaris environment and the DB i'm using is Oracle 10g. Skeleton of what I'm attempting;
Write a ksh script to perform the following. I have no idea how to include my SQL query within a shell script and loop through the statements. I have therefore given a gist of what I'm attempting below.
1. Copy file to be processed (one file at a time, from a list of 10 files in the folder ).
for i in *
do
cp $i /home/temp
2. Create a snapshot(n) table; initialize n = 1 -- to create my snapshot table and insert records in SQL:
create table test as select account_no, balance from records_all;
3. Checking if the table has been created successfully:
select count(*) from snapshot1 -- query out the number of records in the table -- always fixed, say at 400000
if( select count(*) from snapshot(n) = 400000 )
echo " table creation successful.. proceed to the next step "
else
echo " problem creating table, exiting the script .. " 4. If table creation is successful,
echo " select max(value) from results_all " -- printing the max value to console
5. Process my files using the following jobs:
./runscript.ksh - READ -i $m ( m - initial value 001 )
./runscript.ksh - WRITE -i $m ( m - initial value 001 -- same as READ process_id )
-- increment m by 1
6. Wait for success log
tail -f log($m)* | egrep "^SUCCESS"
7. Loop back to step 1 to:
Copy file 2 to temp folder;
create snapshot(n+1) table
Exit when all the files have been copied for processing.
done -- End of Step 1
Pointers on getting me moving will be very valuable.
thanks,
Kris
Hi,
Are you inserting the data from a file or from a table?
If it is from a file, I suggest SQL*Loader would be better.
If it is from a table, then something like this:
for i in *
do
sqlplus username/password@db_name <<EOF
create table test as select account_no, balance from records_all;
exit
EOF
done
Anurag -
Kind of loop in sql? Any alternative?
Hi,
We have the following table
create table orders (
    order_id    NUMBER(10),
    vehicle_id  NUMBER(10),
    customer_id NUMBER(10),
    data        VARCHAR2(10)
);
order_id, customer_id, and vehicle_id are indexed.
In this table are stored multiple orders for multiple vehicles.
I need a SQL statement that returns the last 5 orders for each truck.
For only one vehicle its no problem:
select * from orders
where vehicle_id = <ID>
and rownum <=5
order by order_id desc;
But I need something like a loop to perform this statement for each vehicle_id.
Or is there any way to put it into a subselect?
Any ideas are welcome ;-)
Thanks in advance,
Andreas
Hello
Effectively, by having the bind variable in there you are partitioning by customer and vehicle id, so by adding customer_id into the PARTITION BY clause, the optimiser should be able to push the bind variable right down to the innermost view...
XXX> CREATE TABLE dt_orders
2 ( order_id NUMBER NOT NULL,
3 customer_id NUMBER NOT NULL,
4 vehicle_id NUMBER NOT NULL,
5 some_padding VARCHAR2(100) NOT NULL
6 )
7 /
Table created.
Elapsed: 00:00:00.23
XXX> INSERT INTO dt_orders SELECT ROWNUM ID, MOD(ROWNUM,100),MOD(ROWNUM,100), lpad(
2 /
10000 rows created.
Elapsed: 00:00:00.43
XXX> CREATE INDEX dt_orders_i1 ON dt_orders(customer_id)
2 /
Index created.
Elapsed: 00:00:00.17
XXX> select *
2 from (
3 select o.*, rank() over(partition by vehicle_id order by order_id desc) rk
4 from dt_orders o
5 where customer_id = :var_cust_id
6 )
7 where rk <= 5;
5 rows selected.
Elapsed: 00:00:00.11
Execution Plan
Plan hash value: 3174093828
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 1 | VIEW | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 2 | WINDOW SORT PUSHED RANK | | 107 | 9737 | 22 (5)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DT_ORDERS | 107 | 9737 | 21 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | DT_ORDERS_I1 | 43 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RK"<=5)
2 - filter(RANK() OVER ( PARTITION BY "VEHICLE_ID" ORDER BY
INTERNAL_FUNCTION("ORDER_ID") DESC )<=5)
4 - access("CUSTOMER_ID"=TO_NUMBER(:VAR_CUST_ID)) <----
Note
- dynamic sampling used for this statement
Statistics
36 recursive calls
0 db block gets
247 consistent gets
2 physical reads
0 redo size
518 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
5 rows processed
Your original statement, showing that the bind variable has been applied to access the dt_orders table via the index (predicate 4).
If I change the statement to put the bind variable outside the inline view, we now do a full scan, and you can see from predicate 1 that the customer_id is being filtered at the highest level.
XXX> select *
2 from (
3 select o.*, rank() over(partition by vehicle_id order by order_id desc) rk
4 from dt_orders o
5 )
6 where rk <= 5
7 AND customer_id = :var_cust_id ;
5 rows selected.
Elapsed: 00:00:00.32
Execution Plan
Plan hash value: 3560032888
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10696 | 1086K| | 268 (2)| 00:00:04 |
|* 1 | VIEW | | 10696 | 1086K| | 268 (2)| 00:00:04 |
|* 2 | WINDOW SORT PUSHED RANK| | 10696 | 950K| 2216K| 268 (2)| 00:00:04 |
| 3 | TABLE ACCESS FULL | DT_ORDERS | 10696 | 950K| | 39 (3)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RK"<=5 AND "CUSTOMER_ID"=TO_NUMBER(:VAR_CUST_ID)) <---
2 - filter(RANK() OVER ( PARTITION BY "VEHICLE_ID" ORDER BY
INTERNAL_FUNCTION("ORDER_ID") DESC )<=5)
Note
- dynamic sampling used for this statement
Statistics
4 recursive calls
0 db block gets
240 consistent gets
0 physical reads
0 redo size
519 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
5 rows processed
But those two statements are really the same. By applying the filter inside the view, as in your original, the rank is only calculated for those customers. So we can add customer_id to the PARTITION BY clause, which means the optimiser can safely push the predicate back down to the access of the orders table...
XXX> select *
2 from (
3 select o.*, rank() over(partition by customer_id,vehicle_id order by order_id desc) rk
4 from dt_orders o
5 )
6 where rk <= 5
7 AND customer_id = :var_cust_id ;
5 rows selected.
Elapsed: 00:00:00.04
Execution Plan
Plan hash value: 3174093828
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 1 | VIEW | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 2 | WINDOW SORT PUSHED RANK | | 107 | 9737 | 22 (5)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DT_ORDERS | 107 | 9737 | 21 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | DT_ORDERS_I1 | 43 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RK"<=5)
2 - filter(RANK() OVER ( PARTITION BY "CUSTOMER_ID","VEHICLE_ID" ORDER BY
INTERNAL_FUNCTION("ORDER_ID") DESC )<=5)
4 - access("O"."CUSTOMER_ID"=TO_NUMBER(:VAR_CUST_ID)) <----
Note
- dynamic sampling used for this statement
Statistics
9 recursive calls
0 db block gets
244 consistent gets
0 physical reads
0 redo size
519 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
5 rows processed
HTH
David -
Hi, I have the below requirement:
a. Take the first 7 digits of a number string and multiply the 1st digit by 8, the 2nd digit by 7, ..., the 6th digit by 3 and the 7th digit by 2;
b. Sum up the values of the above seven multiplications;
c. Subtract 97 from the value at b. above until you have a negative number;
d. Digits 8 and 9 of the number, the "check digits", will correspond with the absolute value of the negative number determined at c. above.
Example:
Passing the input as this specific number, 'SK391313073':
the first 7 digits are 3913130, and the sum is performed like this: (3*8)+(9*7)+(1*6)+(3*5)+(1*4)+(3*3)+(0*2) [the sum will be 121].
I have achieved the above through SQL.
Now I should subtract 97 from the sum until it equals the 8th and 9th digits (here I need to loop).
In this case:
121-97=24, which is not equal to 73 (the last two digits of the input),
so I need to perform 24-97 again, which is -73, and abs(-73)=73, so I need to stop here.
All your help is appreciated.
Here's a basic example of a FOR-loop-type structure in SQL - the pseudo-column LEVEL serves as the loop variable.
You can use the WITH clause as "programming blocks" to calculate something specific and re-use the output of that in another "programming block".
The following example demonstrates the basic approach:
SQL> var n varchar2(20)
SQL> exec :n := 'SK391313073';
PL/SQL procedure successfully completed.
// parse the input string as per requirements
SQL> with number_parse as (
2 select
3 level as i,
4 substr(:n,level,1) as ch
5 from dual
6 connect by level <= length(:n)
7 )
8 select
9 *
10 from number_parse
11 /
I CH
1 S
2 K
3 3
4 9
5 1
6 3
7 1
8 3
9 0
10 7
11 3
11 rows selected.
// tad more complex: using this parsing output, determine the 1st 7 digits
SQL> with number_parse as (
2 select
3 level as i,
4 substr(:n,level,1) as ch
5 from dual
6 connect by level <= length(:n)
7 )
8 select i,ch from (
9 select
10 rownum as rno,
11 n.*
12 from number_parse n
13 where ch in ('0','1','2','3','4','5','6','7','8','9')
14 )
15 where rno between 1 and 7
16 /
I CH
3 3
4 9
5 1
6 3
7 1
8 3
9 0
7 rows selected.
// now add the calculation as required
SQL> with number_parse as (
2 select
3 level as i,
4 substr(:n,level,1) as ch
5 from dual
6 connect by level <= length(:n)
7 ),
8 first_7 as(
9 select i,ch from (
10 select
11 rownum as rno,
12 n.*
13 from number_parse n
14 where ch in ('0','1','2','3','4','5','6','7','8','9')
15 )
16 where rno between 1 and 7
17 order by i desc
18 )
19 select
20 ch,
21 ch||' * '||to_char(rownum+1) as calc,
22 to_number(ch)*(rownum+1) as result
23 from first_7
24 order by i
25 /
CH CALC RESULT
3 3 * 8 24
9 9 * 7 63
1 1 * 6 6
3 3 * 5 15
1 1 * 4 4
3 3 * 3 9
0 0 * 2 0
7 rows selected.
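As an aside, the "subtract 97 until negative" step in the original requirement needs no loop at all: the absolute value of the first negative result is simply 97 minus the remainder of the sum divided by 97. A sketch using the sum 121 from the example:

```sql
-- 121 - 97 = 24; 24 - 97 = -73; ABS(-73) = 73, i.e. 97 - MOD(121, 97)
SELECT 97 - MOD(121, 97) AS check_digits
FROM dual;
```

which returns 73, matching the last two digits of 'SK391313073'.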
SQL>
So SQL is capable of a FOR loop and similar types of (basic) processing that one can do in PL/SQL. However, the SQL language is not Turing complete, and things can get messy with the above approach in comparison with doing it in PL/SQL instead. -
Improve performance on specific PL/SQL code
In these three querys there are identical "select" statements in the "where" sections of the querys.
SELECT sum(num_tit_par)
INTO posicion
FROM posi_hoy_rbo
WHERE fec_pos = fecha
and enti_cli in (select cod_int_ent from enti_rbo where enti_efe = entidad);
SELECT sum(mctasald)
INTO saldo_cuenta
FROM gbinmcta_rbo
WHERE mctafech = fecha
and mctaclav in (select cod_int_ent from enti_rbo where enti_efe = entidad);
SELECT sum(tarjdisp)
INTO saldo_tarjeta
FROM gbintarj_rbo
WHERE tarjfech = fecha
and tarjclav in (select cod_int_ent from enti_rbo where enti_efe = entidad);
Is there any way, using PL/SQL code, to store the results of this "select" statement in a kind of variable (varrays, nested tables...)
and then use this variable on the "where" clauses?. If it exists, could someone explain it in detail?
This way, I suppose that this "select" should be only once executed and the performance improved.
Thank you very much.
Daniel,
I happened to have this code, which I posted in the past, simplified.
You could ignore the creation of the package and include the functionality of procedure A in procedure B in the following example:
-- create a SQL type
SQL> create or replace type numTyp as table of number
2 /
Type created.
SQL> create or replace package test_pkg as
2 procedure B;
3 end;
4 /
Package created.
SQL> create or replace package body test_pkg as
2
2 numArray numTyp := numTyp(); -- initialize
3
3 procedure A Is -- Fills the array
4 Begin
5 numArray.extend(2);
6 numArray(1) := 10;
7 numArray(2) := 20;
8 End;
9
10 procedure B Is
11 Begin
12 A; -- call to procedure A
13 For rec in (select empno from my_emp where deptno IN
14 (Select a.column_value val
15 From THE ( select cast(numArray as numTyp) from dual ) a))
16 loop
17 dbms_output.put_line(rec.empno);
18 end loop;
19 end;
20 end;
21 /
Package body created.
SQL> exec test_pkg.B;
7782
7839
7934
7369
7876
7902
7788
7566
PL/SQL procedure successfully completed.
Not a great example, but it shows how to use SQL-type nested tables in SQL join operations. Hope it helps.
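A side note on the example above: the THE ( SELECT CAST(...) FROM dual ) construct is legacy syntax; on Oracle 9i and later the same join is normally written with the TABLE() operator. A sketch reusing the numTyp type (my_emp is the example's own table):

```sql
SELECT empno
FROM my_emp
WHERE deptno IN (SELECT column_value
                 FROM TABLE(numTyp(10, 20)));
```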
Thx,
Sri -
Performance comparison between using sql and pl/sql for same purpose
Hi All,
I have to do some huge inserts into a table from some other tables. I have 2 options:
Option 1
======
a. Declare a cusor for a query involving all source tables, this will return the data to be populated into target
b. Use a cursor for loop to loop through all the records in the cursor
c. for each iteration of the loop, populate the target columns, do any calculations/function calls required to populate derived columns, and then insert the resulting record into the target table
Option 2
======
Just write a big single "INSERT INTO ... SELECT ..." statement, doing all calculations/function calls in the SELECT statement that generates the source data.
Now my question is: which option is faster, and why? This operation is performance-critical, so I need the option that will run faster. Can anybody help?
Thanks in advance.
user9314072 wrote:
"While the above comments are valid, you should consider maintainability in your code. Even if you can write the SQL, the code might become complex, making tuning very difficult, and degrade performance."
Beg to differ on that. Regardless of the complexity of the code, SQL is always faster than PL/SQL when dealing with SQL data. The reason is that PL/SQL still needs to use SQL anyway for row retrieval, and in addition it needs to copy row data from the buffer cache into the PL/SQL PGA. This is an overhead that does not exist in SQL.
So if you are processing a 100 million rows with a complex 100 line SQL statement, versus a 100 million rows 100 line PL/SQL procedure, SQL will always be faster.
"It is a trade-off; my experience is that SQL statements hundreds of lines long become hard to manage."
You need to ask yourself why there are hundreds of lines of SQL. This points to an underlying problem. A flaky data model is very likely the cause. Or not using SQL correctly. Many times a 100-line SQL statement can be changed to a 10-liner by introducing different logic that solves the exact same problem more easily and faster (e.g. using analytic SQL, thinking "out of the box").
Also, hundreds of lines of SQL always point to a performance issue. And it does not matter whether you move this code logic to PL/SQL or Java or elsewhere: the performance problem will remain. Moving the problem from SQL to PL/SQL or Java does not reduce the number of rows to process, or make a significant change in the number of CPU instructions to be executed. And there is the overhead mentioned above - pulling SQL data into a client memory segment for processing (an overhead that does not exist when using SQL).
So how do you address this then? Assuming the data model is correct, then there are 2 primary methods to address the 100's of SQL lines and its associated performance problem.
Modularise the SQL. Make the 100's of lines easier to maintain and understand. This can be done using VIEWS and the SQL WITH clause.
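The WITH-clause modularisation mentioned above can look like this (a sketch with made-up table names - each named block reads like a small procedure, but the optimiser still sees a single statement):

```sql
WITH recent_orders AS (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    WHERE order_date > ADD_MONTHS(SYSDATE, -12)
    GROUP BY customer_id
),
big_spenders AS (
    SELECT customer_id
    FROM recent_orders
    WHERE total > 10000
)
SELECT c.name, r.total
FROM customers c
JOIN recent_orders r ON r.customer_id = c.customer_id
WHERE c.customer_id IN (SELECT customer_id FROM big_spenders);
```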
As for the associated performance issue - materialised views come to mind as an excellent method to address this type of problem.
"My advice is keep things simple, because sooner or later you will need to change the code."
I'm all for that - but introducing more moving parts like PL/SQL or Java and ref cursors and bulk fetching and so on... how does that reduce complexity?
SQL is the first and best place to solve row crunching problems. Do not be fooled into thinking that you can achieve that same performance using PL/SQL or Java. -
Performance syntax loop at and read table
In a routine, for reading one line of an internal table, is there a great performance difference between the syntaxes
LOOP AT xxx WHERE ... and READ TABLE xxx WITH KEY xxxx, or not?
The LOOP AT statement is used only for processing multiple records; READ TABLE is used for reading one particular record of an internal table. If you just need to check whether a record exists in the internal table, you can SORT it and use READ TABLE ... BINARY SEARCH with the TRANSPORTING NO FIELDS addition. Also, try to use field symbols so that performance is improved.
-
Increase Performance and ROI for SQL Server Environments
May 2015
Explore
The Buzz from Microsoft Ignite 2015
NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
Hot topics at the NetApp booth included:
OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
These tools give you greater flexibility for managing and protecting important business applications.
Chris Lemmons
Director, EIS Technical Marketing, NetApp
If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
Source: NetApp, 2015
Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
Test Methodology
To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
Table 1) Components used in testing.
SQL Server 2014 servers: Fujitsu RX300
Server operating system: Microsoft Windows 2012 R2 Standard Edition
SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
Processors per server: 2 6-core Xeon E5-2630 at 2.30 GHz
Fibre channel network: 8Gb FC with multipathing
Storage controller: AFF8080 EX
Data ONTAP version: Clustered Data ONTAP® 8.3.1
Drive number and type: 48 SSD
Source: NetApp, 2015
The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
Source: NetApp, 2015
In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
The All Flash FAS system still had additional headroom under this load.
Calculating the Savings
Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
ROI: 65%
Net present value (NPV): $950,000
Payback period: six months
Total cost reduction: More than $1 million saved over a 3-year analysis period compared to the legacy storage system
Savings on power, space, and administration: $40,000
Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
Source: NetApp, 2015
The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
Maximum SQL Server 2014 Performance
In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
Data Reduction and Storage Efficiency
In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
A Better Way to Run Enterprise Applications
The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
Quick Links
Tech OnTap Community
Archive
PDF
-
Performance degradation in pl/sql parsing
We are trying to use the XML PL/SQL parser and noticed performance degradation as we run it multiple times. We zeroed in on the following statement:
doc := xmlparser.getDocument(p);
The first time the procedure is run, the elapsed time reported in SQL*Plus is around 0.45 sec, but as we run it repeatedly in the same session the elapsed time keeps increasing by about 0.02 seconds per run. If we log out and start a fresh session, we start again from 0.45 sec.
We noticed similar degradation with
p := xmlparser.newParser;
but we got around it by making the 'p' variable a package variable, initializing it once and reusing it for all invocations.
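The package-variable workaround just described can be sketched roughly as follows (package and table names are hypothetical). It also frees each parsed document, on the assumption that unreleased documents are what cause the elapsed time to creep up per call; xmlparser.parseBuffer and xmldom.freeDocument exist in Oracle's PL/SQL XML parser packages, but verify the calls against your parser version:

```sql
CREATE OR REPLACE PACKAGE xml_util AS
  PROCEDURE parse_doc(doc_text IN VARCHAR2);
END xml_util;
/
CREATE OR REPLACE PACKAGE BODY xml_util AS
  g_parser      xmlparser.Parser;   -- created once per session, reused
  g_initialized BOOLEAN := FALSE;

  PROCEDURE parse_doc(doc_text IN VARCHAR2) IS
    l_doc xmldom.DOMDocument;
  BEGIN
    IF NOT g_initialized THEN
      g_parser      := xmlparser.newParser;  -- avoid re-creating the parser
      g_initialized := TRUE;
    END IF;
    xmlparser.parseBuffer(g_parser, doc_text);
    l_doc := xmlparser.getDocument(g_parser);
    -- ... process l_doc here ...
    xmldom.freeDocument(l_doc);  -- assumption: releasing each document
                                 -- prevents the per-call memory growth
  END parse_doc;
END xml_util;
/
```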
Any suggestions?
Thank you.
Can I enhance the PL/SQL code for better performance? Probably you can enhance it.
Or is it OK for it to take so long to process this many rows? It should take a few minutes, not several hours.
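One common enhancement for long-running row-by-row PL/SQL - and the one the tips at the top of this thread point to - is BULK COLLECT with a LIMIT plus FORALL. A sketch, with hypothetical source and destination tables:

```sql
-- Hypothetical sketch: replace a row-by-row cursor loop with
-- batched fetches and bulk DML.
DECLARE
  CURSOR c IS SELECT id, amount FROM src_rows;
  TYPE t_tab IS TABLE OF c%ROWTYPE;
  l_rows t_tab;
BEGIN
  OPEN c;
  LOOP
    -- Fetch in batches: far fewer SQL/PL-SQL context switches.
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    -- One SQL engine call per batch instead of one per row.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO dst_rows VALUES l_rows(i);  -- whole-record insert;
                                              -- columns must match
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

As noted elsewhere in this thread, if the whole job can be expressed as a single INSERT ... SELECT, that is faster still.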
But please provide some more details like your database version etc.
I suggest you TRACE the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent; you'll identify your problem statements very quickly (after you or your DBA have TKPROF'ed the trace file).
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> execute your PL/SQL code here
SQL> exit
This will give you a .trc file in your udump directory on the server.
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
Also this informative thread can give you more ideas:
HOW TO: Post a SQL statement tuning request - template posting
as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29