How to improve query performance or tune a query from an explain plan
Hi,
The following is the explain plan for my SQL query (generated by Toad v9.7). How can I fix the query?
SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204
8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1
5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1
2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1
1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1
3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1
6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1
13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1
21 FILTER
16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49
20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1
18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1
23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204
42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204
38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204
34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925
30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699
26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35
37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38
36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2
35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2
41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41
40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2
39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2
44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1
43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
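Before rewriting anything, it helps to know where the optimizer's estimates are wrong. As a hedged, generic sketch (assuming Oracle 10g or later; the hint and the DBMS_XPLAN call are standard, but the select list and predicates are elided here since the original query was not posted): run the statement once with rowsource statistics and compare estimated rows (E-Rows) against actual rows (A-Rows):

```sql
-- Run the problem statement once, collecting per-step rowsource statistics.
SELECT /*+ GATHER_PLAN_STATISTICS */ ...   -- original select list goes here
FROM   ...                                 -- original FROM / WHERE go here
;

-- Immediately afterwards, in the same session, pull the plan with
-- estimated vs. actual row counts for every step.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Steps where A-Rows dwarfs E-Rows are where the plan is built on bad estimates; in the plan above, the INDEX SKIP SCAN on XLA.XLA_AE_HEADERS_N1 (estimated 1,398,115 rows, feeding a cost-2,436 table access) would be the first step worth checking.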
damorgan wrote:
Tuning is NOT about reducing the cost of i/o.
i/o is only one of many contributors to cost and only one of many contributors to waits.
Any time you would like to explore this further run this code:
SELECT 1 FROM dual
WHERE regexp_like(' ','^*[ ]*a');
but not on a production box, because you are going to experience an extreme tuning event with zero i/o.
And when I say "extreme" I mean "EXTREME!"
You've been warned.

I think you just need a faster server.
SQL> set autotrace traceonly statistics
SQL> set timing on
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
no rows selected
Elapsed: 00:00:00.00
Statistics
1 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
243 bytes sent via SQL*Net to client
349 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed

Repeated from an Oracle 10.2.0.x instance:
SQL> SELECT DISTINCT SID FROM V$MYSTAT;
SID
310
SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
Session altered.
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
The session is hung. Wait a little while and connect to the database using a different session:
COLUMN STAT_NAME FORMAT A35 TRU
SET PAGESIZE 200
SELECT
STAT_NAME,
VALUE
FROM
V$SESS_TIME_MODEL
WHERE
SID=310;
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0

Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0

The session is not reporting additional CPU usage or parse time.
Let's check one of the session's statistics:
SELECT
SS.VALUE
FROM
V$SESSTAT SS,
V$STATNAME SN
WHERE
SN.NAME='consistent gets'
AND SN.STATISTIC#=SS.STATISTIC#
AND SS.SID=310;
VALUE
163

Not many consistent gets after 20+ minutes.
Let's take a look at the plan:
SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
SQL_ID CHILD_NUMBER
04mpgrzhsv72w 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
select 1 from dual where regexp_like (' ','^*[ ]*a')
NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
Please verify value of SQL_ID and CHILD_NUMBER;
It could also be that the plan is no longer in cursor cache (check v$sql_plan)

No plan...
Let's take a look at the 10053 trace file:
Registered qb: SEL$1 0x19157f38 (PARSER)
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
CBQT: Validity checks failed for 7uqx4guu04x3g.
CVM: Considering view merge in query block SEL$1 (#0)
CBQT: Validity checks failed for 7uqx4guu04x3g.
Subquery Unnest
SU: Considering subquery unnesting in query block SEL$1 (#0)
Set-Join Conversion (SJC)
SJC: Considering set-join conversion in SEL$1 (#0).
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
PM: PM bypassed: Outer query contains no views.
FPD: Considering simple filter push in SEL$1 (#0)
FPD: Current where clause predicates in SEL$1 (#0) :
REGEXP_LIKE (' ','^*[ ]*a')
kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
predicates with check contraints: REGEXP_LIKE (' ','^*[ ]*a')
after transitive predicate generation: REGEXP_LIKE (' ','^*[ ]*a')
finally: REGEXP_LIKE (' ','^*[ ]*a')
apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
kkoqbc-start
: call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
kkoqbc-subheap (create addr=000000001915C238)

Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
I am not sure that this is a good example - the query either executes very fast or never gets a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
Similar Messages
-
How to improve the load performance while using Datasources for the Invoice
Hi All,
How can I improve load performance when using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hours, to update data from R/3 to 0ASA_DS01.
If I load through a flat file, the same amount of data loads within about 20 minutes.
Please suggest how to improve the load performance.
PS: I have done the InfoPackage settings as per the OSS note.
Regards,
Srininivasarao.Namburi.

Hi Srinivas,
Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
Thanks,
Divyesh
Edited by: Divyesh Jain on Jul 20, 2010 8:47 PM -
How to improve the OpenGL performance for AE
I upgraded my display card from Nvidia 8600GT to GTX260+ hoping to have a better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the 2 cards with the Cinebench software and the results are almost the same for the 2 cards.
I wonder why the GTX260+ costs as much as about 3 times the cost of the 8600GT, but the OpenGL performance is almost the same.
Any idea how to improve the OpenGL performance please ?
Regards

juskocf wrote:
But to scrub the timeline smoothly, I think OpenGL plays an important role.
No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect, or exhaustion of the available resources, can negate that.
juskocf wrote:
Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects. Just wonder why the OpenGL Performance of such an expensive card should be so weak.
It's not the card, it's what the card does. See my above comment. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons - as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when it needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL (it didn't exist when AE was created); rather, OpenGL was plugged on at some point, and now there are a number of situations where one gets in the way of the other...
Mylenium -
How to improve the write performance of the database
Our application is write-intensive; it may write 2 MB/second of data to the database, mainly to 5 tables. How can we improve the performance of the database?
Currently the database gets no response and the CPU is 100% used.
How to tune this? Thanks in advance.

Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
1. What hardware (server, CPU, RAM, and NIC and HBA cards if any pointing to storage).
2. Storage solution (DAS, iSCSCI, SAN, NAS). Provide manufacturer and model.
3. If RAID which implementation of RAID and on how many disks.
4. If NAS or SAN how is the read-write cache configured.
5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
6. What, in addition to the Oracle database, is running on the server?
2MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000)s. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
SQL> create table t (
2 testcol varchar2(4000));
Table created.
SQL> set timing on
SQL> BEGIN
2 FOR i IN 1..500 LOOP
3 INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
4 END LOOP;
5 END;
6 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.07
SQL>

Now what to do with the remaining 0.93 seconds. <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product. -
How to improve the load performance
Can anybody tell me how to improve the load performance?
Hi,
for all loads: improve your ABAP code in routines.
for master data load:
- load master data attributes before the charateristic itself
- switch number range buffering on for initial loads
for transactional loads:
- load all your master data IObjs prior loading your cube / ODS
- depending on the ratio of records loaded to records already in the cube's F fact table, drop / recreate indexes (if the ratio is more than 40-50%)
- switch on number range buffering for dimensions with a high number of records for initial loads
- switch on number range buffering for master data IObjs which aren't loaded via master data (SIDs are always created during transactional loads; e.g. document, item...)
these recommendations are just some among others like system tuning, DB parameters...
hope this helps...
Olivier. -
How to improve the extactor performance?
Dear all,
I have an ODS with active data (1,180,000Records).
I also have an InfoCube. It has a characteristices C1 with 5 navigation attributes.
For C1, it has 4,300,000 records in the master.
I have tried to load data to InfoCube from the ODS.
But it takes more than 12 hours. How can I improve the performance, or is there a bug in BW when extracting so many records?
Thank you very much.
Seven

dear Seven,
Did you try deleting the InfoCube's indexes first?
InfoCube -> Manage -> Performance -> Delete Indexes
Take a look at OSS note
130253 - General tips on uploading transaction data to BW
Tip 8:
When you load large quantities of data in InfoCubes, you should delete
the secondary indexes before the loading process and then recreate them afterwards if the following applies: the number of records that are loaded is big in comparison to the number of records that already exist in the (uncompressed) F fact table. For non-transactional InfoCubes, you must delete the indexes to be able to carry out parallel loading.
Tip 9:
When you load large quantities of data in an InfoCube, the number range buffer should be increased for the dimensions that are likely to have a high number of data sets.
To do this, proceed as follows. Use function module RSD_CUBE_GET to find the object name of the dimension that is likely to have a high number of data sets.
Function module settings:
I_INFOCUBE = 'Infocube name'
I_OBJVERS = 'A'
I_BYPASS_BUFFER = 'X'
The numbers for the dimensions are then contained in table 'E_T_DIME', column 'NUMBRANR'. If you enter 'BID' before this number, you get the relevant number range (for example BID0000053).
You can use Transaction SNRO (-> ABAP/4 Workbench -> Development --> Other tools --> Number ranges) to display all number ranges for the dimensions used in BW if you enter BID*. You can use the object name that was determined beforehand to find the required number range.
By double-clicking this line, you get to the number range maintenance. Choose Edit -> Set-up buffering -> Main memory, to define the 'No. of numbers in buffer'.
Set this value to 500, for example. The size depends on the expected data quantity in the initial and in future (delta) uploads. -
How to improve the picture on a Samsung TV from a mac pro?
I currently have a Samsung HDTV [ 1920x1080 ] connected via a VGA cable to my 2007/8 Mac Pro through a NVIDIA GeForce 8800 GT card.
Can I improve the sharpness of the Samsung by connecting via the HDMI/DVI sockets on the back?
Having said that the NVIDIA doesn't have a HDMI/DVI output [ long thin USB type connector ] so presumably I would I need a converter.
Maybe a new video card?

Can I improve the sharpness of the Samsung by connecting via the HDMI/DVI sockets on the back?
VGA is purely analog, so the digital output of your computer has to be converted to analog and then back to digital when it gets to the TV. DVI can carry both analog and digital; most likely it will use a digital connection and should be much sharper. Without the model number of your TV it's hard to say.
Having said that, the NVIDIA doesn't have an HDMI/DVI output (long thin USB-type connector), so presumably I would need a converter.
If your TV only has an HDMI connection, then yes, you need a DVI to HDMI cable: http://store.apple.com/us/product/TR842LL/A?mco=MTY3ODQ5OTY. -
How to improve the run time of this query
Is there any way to improve this query?
I have a table where each SR_WID has at least one subtype among 'Break Fix', 'Break/Fix', 'Break Fix/Corrective Maint', 'Break Fix-Corrective Maint', and may have other subtypes among 'Break Fix', 'Break/Fix', 'Follow Up', 'Follw-Up', 'T&'||'M Break Fix', 'Break Fix/Corrective Maint', 'Break Fix-Corrective Maint'.
Let me know if this is okay or how to modify it.
SELECT DISTINCT A.SR_WID AS SR_WID
FROM WC_SR_ACT_SMRY_FS A
WHERE EXISTS
(SELECT NULL FROM WC_SR_ACT_SMRY_FS B
WHERE B.ACT_TYPE = 'Maintenance'
AND B.ACT_SUBTYPE in ('Break Fix', 'Break/Fix','Break Fix/Corrective Maint','Break Fix-Corrective Maint') AND B.NBR_OF_ACTIVITIES = 1 AND B.SR_WID = A.SR_WID
GROUP BY B.SR_WID
HAVING COUNT(B.SR_WID) >= 1)
AND A.ACT_TYPE = 'Maintenance'
AND A.ACT_SUBTYPE IN ('Break Fix', 'Break/Fix', 'Follow Up', 'Follw-Up','T&'||'M Break Fix','Break Fix/Corrective Maint','Break Fix-Corrective Maint')

SELECT DISTINCT A.SR_WID AS SR_WID
FROM WC_SR_ACT_SMRY_FS A
WHERE EXISTS(
SELECT NULL
FROM WC_SR_ACT_SMRY_FS B
WHERE B.ACT_TYPE = 'Maintenance'
AND B.ACT_SUBTYPE IN
('Break Fix',
'Break/Fix',
'Break Fix/Corrective Maint',
'Break Fix-Corrective Maint')
AND B.NBR_OF_ACTIVITIES = 1
AND B.SR_WID = A.SR_WID
GROUP BY B.SR_WID
HAVING COUNT(B.SR_WID) >= 1 )
AND A.ACT_TYPE = 'Maintenance'
AND A.ACT_SUBTYPE IN
('Break Fix',
'Break/Fix',
'Follow Up',
'Follw-Up',
'T&' || 'M Break Fix',
'Break Fix/Corrective Maint',
'Break Fix-Corrective Maint');

First of all, you can omit the GROUP BY and HAVING part in the sub-select -
we already know thath the only value for B.SR_WID can be A.SR_WID and
COUNT(B.SR_WID) will always be at least 1, otherwise the sub-select will
return nothing.
As a second step, you could transform EXISTS() into IN():
SELECT DISTINCT SR_WID
FROM WC_SR_ACT_SMRY_FS A
WHERE SR_WID IN(
SELECT B.SR_WID
FROM WC_SR_ACT_SMRY_FS B
WHERE B.ACT_TYPE = 'Maintenance'
AND B.ACT_SUBTYPE IN
('Break Fix',
'Break/Fix',
'Break Fix/Corrective Maint',
'Break Fix-Corrective Maint')
AND B.NBR_OF_ACTIVITIES = 1)
AND ACT_TYPE = 'Maintenance'
AND ACT_SUBTYPE IN
('Break Fix',
'Break/Fix',
'Follow Up',
'Follw-Up',
'T&' || 'M Break Fix',
'Break Fix/Corrective Maint',
'Break Fix-Corrective Maint'); -
How to improve the speed of creating multiple strings from a char[]?
Hi,
I have a char[] and I want to create multiple Strings from the contents of this char[] at various offsets and of various lengths, without having to reallocate memory (my char[] is several tens of megabytes large). And the following function (from java/lang/String.java) would be perfect apart from the fact that it was designed so that only package-private classes may benefit from the speed improvements it offers:
// Package private constructor which shares value array for speed.
String(int offset, int count, char value[]) {
this.value = value;
this.offset = offset;
this.count = count;
}

My first thought was to override the String class. But java.lang.String is final, so no good there. Plus it was a really bad idea to start with.
My second thought was to make a java.lang.FastString which would then be package private, and could access the string's constructor, create a new string and then return it (thought I was real clever here) but no, apparently you cannot create a class within the package java.lang. Some sort of security issue.
I am just wondering first if there is an easy way of forcing the compiler to obey me, or forcing it to allow me to access a package private constructor from outside the package. Either that, or some sort of security overrider, somehow.

My laptop can create and garbage collect 10,000,000 Strings per second from char[] "hello world". That creates about 200 MB of strings per second (char = 2B). Test program below.
A char[] "tens of megabytes large" shouldn't take too large a fraction of a second to convert to a bunch of Strings. Except, say, if the computer is memory-starved and swapping to disk. What kind of times do you get? Is it at all possible that there is something else slowing things down?
using (literally) millions of charAt()'s would be suicide (code-wise) for me and my program.

"java -server" gives me 600,000,000 charAt()'s per second (actually more, but I put in some addition to prevent Hotspot from optimizing everything away). Test program below. A million calls would be 1.7 milliseconds. Using char[n] instead of charAt(n) is faster by a factor of less than 2. Are you sure millions of charAt()'s is a huge problem?
public class t1 {
    public static void main(String args[]) {
        char hello[] = "hello world".toCharArray();
        for (int n = 0; n < 10; n++) {
            long start = System.currentTimeMillis();
            for (int m = 0; m < 1000 * 1000; m++) {
                String s1 = new String(hello);
                String s2 = new String(hello);
                String s3 = new String(hello);
                String s4 = new String(hello);
                String s5 = new String(hello);
            }
            long end = System.currentTimeMillis();
            System.out.println("time " + (end - start) + " ms");
        }
    }
}
public class t2 {
    static int global;
    public static void main(String args[]) {
        String hello = "hello world";
        for (int n = 0; n < 10; n++) {
            long start = System.currentTimeMillis();
            for (int m = 0; m < 10 * 1000 * 1000; m++) {
                global +=
                    hello.charAt(0) + hello.charAt(1) + hello.charAt(2) +
                    hello.charAt(3) + hello.charAt(4) + hello.charAt(5) +
                    hello.charAt(6) + hello.charAt(7) + hello.charAt(8) +
                    hello.charAt(9);
                global +=
                    hello.charAt(0) + hello.charAt(1) + hello.charAt(2) +
                    hello.charAt(3) + hello.charAt(4) + hello.charAt(5) +
                    hello.charAt(6) + hello.charAt(7) + hello.charAt(8) +
                    hello.charAt(9);
            }
            long end = System.currentTimeMillis();
            System.out.println("time " + (end - start) + " ms");
        }
    }
}
-
How to improve the query performance in to report level and designer level
How to improve the query performance at the report level and the designer level?
Please let me know in detail.

First, it's all based on the design of the database, the universe, and the report.
At the universe level, you have to check your contexts very well to get the optimal performance of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters and so on).
And when you create a parameter, try to match it with the key fields in the database.
good luck
Amr -
How to improve the performance of one program in one select query
Hi,
I am facing a performance issue in one program. I have given part of the program's code below.
The select query below is taking a lot of time. How can I improve the performance?
A quick response is highly appreciated.
Program code
DATA: BEGIN OF t_dels_tvpod OCCURS 100,
vbeln LIKE tvpod-vbeln,
posnr LIKE tvpod-posnr,
lfimg_diff LIKE tvpod-lfimg_diff,
calcu LIKE tvpod-calcu,
podmg LIKE tvpod-podmg,
uecha LIKE lips-uecha,
pstyv LIKE lips-pstyv,
xchar LIKE lips-xchar,
grund LIKE tvpod-grund,
END OF t_dels_tvpod.
DATA: l_tabix LIKE sy-tabix,
lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
ls_dels_tvpod LIKE t_dels_tvpod.
SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
FOR ALL ENTRIES IN t_dels_tvpod
WHERE vbeln = t_dels_tvpod-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart NOT IN r_del_types_exclude.
Waiting for quick response.
Best regards,
BDP

Bansidhar,
1) You need to add a check to make sure that the internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not blank. If it is blank, skip the SELECT statement.
2) Check the performance with and without the clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT causes the select statement not to use an index. Instead of 'lfart NOT IN r_del_types_exclude' use 'lfart IN r_del_types_exclude' and build r_del_types_exclude with r_del_types_exclude-sign = 'E' instead of 'I'.
3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
Try doing something like this.
TYPES: BEGIN OF ty_del_types_exclude,
sign(1) TYPE c,
option(2) TYPE c,
low TYPE likp-lfart,
high TYPE likp-lfart,
END OF ty_del_types_exclude.
DATA: w_del_types_exclude TYPE ty_del_types_exclude,
t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
t_dels_tvpod_tmp LIKE TABLE OF t_dels_tvpod .
IF NOT t_dels_tvpod[] IS INITIAL.
* Assuming that I would like to exclude delivery types 'LP' and 'LPP'
CLEAR w_del_types_exclude.
REFRESH t_del_types_exclude.
w_del_types_exclude-sign = 'E'.
w_del_types_exclude-option = 'EQ'.
w_del_types_exclude-low = 'LP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
w_del_types_exclude-low = 'LPP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
t_dels_tvpod_tmp[] = t_dels_tvpod[].
SORT t_dels_tvpod_tmp BY vbeln.
DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
COMPARING
vbeln.
SELECT vbeln
FROM likp
INTO TABLE lt_dels_tvpod
FOR ALL ENTRIES IN t_dels_tvpod_tmp
WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart IN t_del_types_exclude.
ENDIF. -
How to improve the performance of the query
Hi,
Help me by giving tips how to improve the performance of the query. Can I post the query?
Suresh

Below is the formatted query and no wonder it is taking a lot of time. Will give you a list of issues soon after analyzing more. Till then, understand the pitfalls yourself from this formatted query.
SELECT rt.awb_number,
ar.activity_id as task_id,
t.assignee_org_unit_id,
t.task_type_code,
ar.request_id
FROM activity_task ar,
request_task rt,
task t
WHERE ar.activity_id =t.task_id
AND ar.request_id = rt.request_id
AND ar.complete_status != 'act.stat.closed'
AND t.assignee_org_unit_id in (SELECT org_unit_id
                               FROM org_unit
                               WHERE org_unit_id in (SELECT oo.org_unit_id
                                                     FROM org_unit oo
                                                     WHERE oo.org_unit_id='3'
                                                     OR oo.parent_id ='3'
                                                     OR parent_id in (SELECT oo.org_unit_id
                                                                      FROM org_unit oo
                                                                      WHERE oo.org_unit_id='3'
                                                                      OR oo.parent_id ='3'))
                               AND has_queue=1)
AND ar.parent_task_id not in (SELECT tt.task_id
                              FROM task tt
                              WHERE tt.assignee_org_unit_id in (SELECT org_unit_id
                                                                FROM org_unit
                                                                WHERE org_unit_id in (SELECT oo.org_unit_id
                                                                                      FROM org_unit oo
                                                                                      WHERE oo.org_unit_id='3'
                                                                                      OR oo.parent_id ='3'
                                                                                      OR parent_id in (SELECT oo.org_unit_id
                                                                                                       FROM org_unit oo
                                                                                                       WHERE oo.org_unit_id='3'
                                                                                                       OR oo.parent_id ='3'))
                                                                AND has_queue=1))
AND rt.awb_number is not null
ORDER BY rt.awb_number

Cheers
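One observation that stands on its own regardless of the other issues: the org_unit lookup (unit '3', its direct children, and their children) is written out twice, once for the IN predicate and once inside the NOT IN subquery. A hedged sketch of factoring it out with a WITH clause (untested; it assumes the repeated predicate really targets the same unit set both times, and the placement of has_queue = 1 is a guess, since the pasted query lost its closing parentheses):

```sql
WITH target_units AS (
  -- unit '3', its direct children, and their children
  SELECT o.org_unit_id
  FROM   org_unit o
  WHERE  (o.org_unit_id = '3'
          OR o.parent_id = '3'
          OR o.parent_id IN (SELECT oo.org_unit_id
                             FROM   org_unit oo
                             WHERE  oo.org_unit_id = '3'
                                 OR oo.parent_id  = '3'))
  AND    o.has_queue = 1          -- placement assumed, see note above
)
SELECT rt.awb_number,
       ar.activity_id AS task_id,
       t.assignee_org_unit_id,
       t.task_type_code,
       ar.request_id
FROM   activity_task ar,
       request_task  rt,
       task          t
WHERE  ar.activity_id     = t.task_id
AND    ar.request_id      = rt.request_id
AND    ar.complete_status != 'act.stat.closed'
AND    t.assignee_org_unit_id IN (SELECT org_unit_id FROM target_units)
AND    ar.parent_task_id NOT IN (SELECT tt.task_id
                                 FROM   task tt
                                 WHERE  tt.assignee_org_unit_id IN
                                        (SELECT org_unit_id FROM target_units))
AND    rt.awb_number IS NOT NULL
ORDER BY rt.awb_number;
```

Factoring the subquery lets the optimizer evaluate the unit set once (or materialize it), instead of repeating the nested org_unit scans for both predicates.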
Sarma. -
HI All, How to improve the performance in given query?
HI All,
How can I improve the performance of the given query?
Query is..
PARAMETERS : p_vbeln type lips-vbeln.
DATA : par_charg TYPE LIPS-CHARG,
par_werks TYPE LIPS-WERKS,
PAR_MBLNR TYPE MSEG-MBLNR .
SELECT SINGLE charg
werks
INTO (par_charg, par_werks)
FROM lips
WHERE vbeln = p_vbeln.
IF par_charg IS NOT INITIAL.
SELECT single max( mblnr )
INTO par_mblnr
FROM mseg
WHERE bwart EQ '101'
AND werks EQ par_werks   " index on werks only
AND charg EQ par_charg.
ENDIF.
Regards
Steve

Hi Steve,
Can't you use the material in your query (and not only the batch)?
I am assuming your system has an index MSEG~M on MANDT + MATNR + WERKS (+ other fields). Depending on your system (how many different materials you have), this will probably speed up the query considerably.
Anyway, in our system we ended up creating an index on CHARG, but leave that as a last option, only if selecting by matnr and werks is not good enough for your scenario.
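Purely as an illustration of that last-resort option (in a real SAP system such an index is created through the ABAP Dictionary, not with raw DDL, and the index name and column order here are assumptions, not SAP's): at the database level a batch-number index on MSEG would look roughly like this:

```sql
-- Illustration only: secondary index supporting lookups by batch number.
-- Index name and column order (client, batch, plant) are assumed choices.
CREATE INDEX mseg_charg_idx ON mseg (mandt, charg, werks);
```

With such an index, the MAX( mblnr ) lookup by werks and charg no longer has to scan all MSEG rows for the plant.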
Hope this helps,
Rui Dantas -
Inner Join. How to improve the performance of inner join query
Inner Join. How to improve the performance of inner join query.
Query is :
select f1~ablbelnr
f1~gernr
f1~equnr
f1~zwnummer
f1~adat
f1~atim
f1~v_zwstand
f1~n_zwstand
f1~aktiv
f1~adatsoll
f1~pruefzahl
f1~ablstat
f1~pruefpkt
f1~popcode
f1~erdat
f1~istablart
f2~anlage
f2~ablesgr
f2~abrdats
f2~ableinh
from eabl as f1
inner join eablg as f2
on f1~ablbelnr = f2~ablbelnr
into corresponding fields of table it_list
where f1~ablstat in s_mrstat
%_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
I wanted to modify the query, since its taking lot of time to load the data.
Please suggest.
Treat this as very urgent.

Hi Shyamal,
In your program , you are using "into corresponding fields of ".
Try not to use this addition in your select query.
Instead, just use "into table it_list".
As an example,
Just write a normal query using "into corresponding fields of" in a program. Now go to SE30 (Runtime Analysis), give the program name, and execute it.
Now if you click on the Analyze button, you can see the analysis for the query. The one given in the "Red" line informs you that you need to look for alternate methods.
On the other hand, if you are using "into table itab", it will give you an entirely different analysis.
So try not to give "into corresponding fields" in your query.
Regards,
SP. -
Please help me how to improve the performance of this query further.
Hi All,
Please help me improve the performance of this query further.
Thanks.

Hi,
this is not your first SQL tuning request in this community -- you really should learn how to obtain performance diagnostics.
The information you posted is not nearly enough to even start troubleshooting the query -- you haven't specified elapsed time, I/O, or the actual number of rows the query returns.
The only piece of information we have says that your query executes within a second. If we believe this, then your query doesn't need tuning. If we don't, then we throw it away and we're left with nothing.
Start by reading this blog post: Kyle Hailey - Power of DISPLAY_CURSOR
and applying this knowledge to your case.
Best regards,
Nikolay
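In the same spirit, a hedged sketch of another way to get elapsed time, I/O, and actual row counts per plan step (this assumes the poster's environment is Oracle 11g or newer with the Tuning Pack licensed, which the thread does not state): Real-Time SQL Monitoring reports on a long-running statement even while it is still executing:

```sql
-- Sketch: report on the last monitored execution of a statement.
-- The SQL_ID value here is a placeholder, not one taken from this thread.
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => 'your_sql_id_here',
         type         => 'TEXT',
         report_level => 'ALL') AS report
FROM   dual;
```

The TEXT report shows, per plan line, the actual rows, buffer gets, and time spent, which is exactly the diagnostic data missing from the original post.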