Bad performance. How to improve?
Hi!
I have two tables (e.g. T1 and T2), both with this structure:
CREATE TABLE T1 (
"ID_G5PZG" VARCHAR2(30 BYTE) NOT NULL ENABLE,
"ID_G5DOK" VARCHAR2(30 BYTE) NOT NULL ENABLE,
"STATUS" NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
CONSTRAINT "G5DOK_G5PZ_PK" PRIMARY KEY ("ID_G5PZG", "ID_G5DOK"),
CONSTRAINT "G5DOK_G5PZ_G5DOK_FK" FOREIGN KEY ("ID_G5DOK") REFERENCES "G5DOK" ("ID") ENABLE,
CONSTRAINT "G5DOK_G5PZ_G5PZG_FK" FOREIGN KEY ("ID_G5PZG") REFERENCES "G5PZG" ("ID") ENABLE
);
I run this query:
SELECT *
FROM G5DOK_G5PZG_RKRG g5
full JOIN ROG_TEMP_G5DOK_G5PZG_RKRG rt
on rt.ID_G5DOK = g5.ID_G5DOK
and rt.ID_G5PZG = g5.ID_G5PZG
and rt.STATUS = 1 AND g5.STATUS = 1
WHERE rt.STATUS IS NULL
OR g5.STATUS IS NULL;
The explain plan shows a very costly NESTED LOOPS step, and the query takes more than 3 minutes to complete with a cardinality of 42248 rows in both tables. How can I improve the performance?
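For the record, the intent here (status-1 rows present in one table but missing from the other) can also be written as two anti-joins glued with UNION ALL, which Oracle can typically execute as hash anti-joins instead of the costly nested loop. A runnable sketch using Python's sqlite3, with tiny invented data standing in for the real tables:

```python
import sqlite3

# Tiny invented stand-ins for T1 and T2; column names follow the post.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE t1 (id_g5pzg TEXT, id_g5dok TEXT, status INTEGER,
                 PRIMARY KEY (id_g5pzg, id_g5dok));
CREATE TABLE t2 (id_g5pzg TEXT, id_g5dok TEXT, status INTEGER,
                 PRIMARY KEY (id_g5pzg, id_g5dok));
INSERT INTO t1 VALUES ('a','1',1), ('b','2',1), ('c','3',1);
INSERT INTO t2 VALUES ('a','1',1), ('d','4',1);
""")

# Symmetric difference as a UNION ALL of two anti-joins: rows of one
# table with no status-1 partner in the other.
rows = cur.execute("""
SELECT id_g5pzg, id_g5dok FROM t1
WHERE status = 1 AND NOT EXISTS (
    SELECT 1 FROM t2
    WHERE t2.id_g5pzg = t1.id_g5pzg
      AND t2.id_g5dok = t1.id_g5dok AND t2.status = 1)
UNION ALL
SELECT id_g5pzg, id_g5dok FROM t2
WHERE status = 1 AND NOT EXISTS (
    SELECT 1 FROM t1
    WHERE t1.id_g5pzg = t2.id_g5pzg
      AND t1.id_g5dok = t2.id_g5dok AND t1.status = 1)
""").fetchall()
print(sorted(rows))  # [('b', '2'), ('c', '3'), ('d', '4')]
```

On Oracle the equivalent rewrite (a NOT EXISTS per direction, or MINUS in both directions) gives the optimizer the chance to unnest into hash anti-joins; gathering fresh statistics on both tables is worth doing first.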
--------EDIT--------
I'd like to find the rows in T1 that do not exist in T2, and vice versa.
No, I haven't.
The EXPLAIN PLAN is:
PLAN_TABLE_OUTPUT
Plan hash value: 3694803278
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 44360 | 5198K| 9396 (1)| 00:01:53 |
| 1 | VIEW | | 44360 | 5198K| 9396 (1)| 00:01:53 |
| 2 | UNION-ALL | | | | | |
|* 3 | FILTER | | | | | |
| 4 | NESTED LOOPS OUTER | | 42248 | 1650K| 9340 (1)| 00:01:53 |
| 5 | TABLE ACCESS FULL | G5DOK_G5PZG_RKRG | 42248 | 825K| 56 (2)| 00:00:01 |
| 6 | VIEW | | 1 | 20 | 0 (0)| 00:00:01 |
|* 7 | FILTER | | | | | |
|* 8 | TABLE ACCESS BY INDEX ROWID| ROG_TEMP_G5DOK_G5PZG_RKRG | 1 | 20 | 0 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | TMP_G5DOK_G5PZ_G5DOK_FK_I | 42248 | | 0 (0)| 00:00:01 |
|* 10 | FILTER | | | | | |
| 11 | TABLE ACCESS FULL | ROG_TEMP_G5DOK_G5PZG_RKRG | 42248 | 825K| 56 (2)| 00:00:01 |
|* 12 | FILTER | | | | | |
|* 13 | TABLE ACCESS BY INDEX ROWID | G5DOK_G5PZG_RKRG | 1 | 20 | 0 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | G5DOK_G5PZ_PK | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
 3 - filter("RT"."STATUS" IS NULL)
 7 - filter("G5"."STATUS"=1)
 8 - filter("RT"."STATUS"=1 AND "RT"."ID_G5PZG"="G5"."ID_G5PZG")
 9 - access("RT"."ID_G5DOK"="G5"."ID_G5DOK")
 10 - filter( NOT EXISTS (SELECT /*+ UNNEST */ 0 FROM "G5DOK_G5PZG_RKRG" "G5" WHERE :B1=1 AND
      "G5"."ID_G5DOK"=:B2 AND "G5"."ID_G5PZG"=:B3 AND "G5"."STATUS"=1))
 12 - filter(:B1=1)
 13 - filter("G5"."STATUS"=1)
 14 - access("G5"."ID_G5PZG"=:B1 AND "G5"."ID_G5DOK"=:B2)
Edited by: JackK on Nov 22, 2010 8:59 AM
Similar Messages
-
How to improve sorting performance
Hello
We are seeing bad performance with a query like this one:
select a.data
from a, b, c, d, e, f
where a.fk1 = b.pk
and a.fk2 = c.pk
and a.fk3 = d.pk
and a.fk4 = e.pk
and a.fk5 = f.pk
order by a.data, b.data, c.data
each foreign key has its index
we have oracle 9i
Without the order by clause, the cost is 3042; with the order by clause, it is 39842.
table 'a' is going to have millions of records, tables 'b' and 'c' are going to have thousands, and the others just hundreds.
How can I improve the performance of this query?
Thanks in advance!
Hi
This is the explain plan; sorry, the values don't match the ones I posted, but the idea is the same:
selecting 6 fields and sorting by 5 of them:
DESCRIPTION OWNER OBJECT NAME COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = CHOOSE 49644 675390 59434320
SORT ORDER BY 49644 675390 59434320
NESTED LOOPS 5404 675390 59434320
NESTED LOOPS 5404 675390 56057370
HASH JOIN 5404 675390 54031200
TABLE ACCESS FULL OMEGA GN_T_EMPRESA 2 165 1485
HASH JOIN 5379 675390 47952690
TABLE ACCESS FULL OMEGA GN_T_CALENDARIO 55 22695 272340
HASH JOIN 4135 675445 39851255
TABLE ACCESS FULL OMEGA GN_T_CALENDARIO 55 22695 272340
HASH JOIN 3090 675445 31745915
INDEX FAST FULL SCAN OMEGA CLNTE_PK 92 748225 3741125
HASH JOIN 1792 675445 28368690
TABLE ACCESS FULL OMEGA GNC_T_MERCADO 2 47 282
HASH JOIN 1776 675445 24316020
INDEX FULL SCAN OMEGA INDCDRES_PK 1 36 108
TABLE ACCESS FULL OMEGA GNC_T_OMG_FACTURACION 1762 675445 22289685
INDEX UNIQUE SCAN OMEGA TPO_PRPDAD_PK 1 3
INDEX UNIQUE SCAN OMEGA SCTRZCION_DNE_PK 1 5
Selecting 28 fields (some of them are calculations) and sorting by 8 of them:
DESCRIPTION OWNER OBJECT NAME COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = CHOOSE 104577 675390 114816300
SORT ORDER BY 104577 675390 114816300
HASH JOIN 10909 675390 114816300
TABLE ACCESS FULL OMEGA GN_T_EMPRESA 2 165 1485
HASH JOIN 10859 675390 108737790
TABLE ACCESS FULL OMEGA GNC_T_TIPO_PROPIEDAD 2 100 800
HASH JOIN 10811 675390 103334670
TABLE ACCESS FULL OMEGA GN_T_SECTORIZACION_DANE 66 27046 676150
HASH JOIN 8403 675390 86449920
TABLE ACCESS FULL OMEGA GN_T_CALENDARIO 55 22695 431205
HASH JOIN 6328 675390 73617510
TABLE ACCESS FULL OMEGA GNC_T_CLIENTE 610 748225 9726925
HASH JOIN 3472 675390 64837440
TABLE ACCESS FULL OMEGA GN_T_CALENDARIO 55 22695 272340
HASH JOIN 1814 675445 56737380
TABLE ACCESS FULL OMEGA GNC_T_MERCADO 2 47 423
HASH JOIN 1787 675445 50658375
INDEX FULL SCAN OMEGA INDCDRES_PK 1 36 108
TABLE ACCESS FULL OMEGA GNC_T_OMG_FACTURACION 1762 675445 48632040
It decides not to use the FK indexes. Up to that point the cost is 10909, and after the sort the cost is 104577.
Here is the query:
select emp.d_codigo_fssri_empresa as d_codigo_fssri_gas
, exp.n_mes as n_mes_reporte
, exp.n_anio as n_anio_reporte
, vig.n_mes as n_mes_consumo
, vig.n_anio as n_anio_consumo
, mer.d_codigo_sector_cons as d_sector_usuario
, 30 as n_dias_facturados
, pckb_subg.fub_rango_consumo(fact.n_consumo_m3, decode(prop.d_codigo_tipo_propiedad, 's', prop.n_propiedades, 1) ) as n_rango_consumo
, nvl(fact.n_consumo_m3,0) as n_consumo
, decode(prop.d_codigo_tipo_propiedad, 's', prop.n_propiedades, 1) as n_usuarios
, to_char( nvl(fact.n_cargo_consumo_rango1,0), 'fm999999999999990d99') as n_cu
, to_char((nvl(fact.n_cargo_consumo_rango1,0) - nvl(fact.n_tarifa_aplicada_subsidio,0)), 'fm999999999999d99') as n_tarifa_aplicada
, 0 as n_factor
, (nvl(fact.n_facturacion_consumo,0) + nvl(fact.n_cargo_fijo,0)) as n_facturacion
, nvl(fact.n_valor_subsidio ,0) as n_subsidio
, 0 as n_ajuste
, emp.d_codigo_fssri_empresa as d_incumbente
, null as d_observaciones
, mer.d_codigo_tarifa as d_tarifa--fact.d_tarifa
, exp.f_fecha as f_expedicion--fact.f_expedicion
, fact.d_factura
, cli.d_poliza as d_niu --fact.d_nui
, nvl(prop.n_propiedades, 0) as n_inquilinatos
, sectd.d_departamento_dane as d_departamento
, sectd.d_municipio_dane as d_municipio
, sectd.d_poblacion_dane as d_poblado
, sectd.d_codigo_dane as d_codigo_dane_r
, fact.k_facturacion
from gnc_t_omg_facturacion fact,
gnc_t_indicadores_facturacion ind,
gnc_t_cliente cli,
gn_t_sectorizacion_dane sectd,
gnc_t_mercado mer,
gnc_t_tipo_propiedad prop,
gn_t_empresa emp,
omg_v_fecha_expedicion exp,
omg_v_fecha_vigor vig
where ind.k_indicadores = fact.r_indicador_consumo
and cli.k_cliente = fact.r_cliente
and sectd.k_sectorizacion_dane = fact.r_sectorizacion_dane
and mer.k_mercado = fact.r_mercado
and prop.k_tipo_propiedad = fact.r_tipo_propiedad
and emp.k_empresa = fact.r_empresa
and exp.k_calendario = fact.r_fecha_expedicion
and vig.k_calendario = fact.r_fecha_vigor
order by d_codigo_fssri_gas
, n_anio_reporte
, n_mes_reporte
, n_anio_consumo
, n_mes_consumo
, d_sector_usuario
, n_rango_consumo
, n_tarifa_aplicada
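A general remark on why the ORDER BY inflates the cost so much: the sort has to move every selected byte. One hedged tactic, not specific to this schema and sketched in Python rather than SQL, is to sort only a narrow projection (the sort keys plus a row id) and attach the wide payload afterwards:

```python
def sort_narrow(rows, sort_value):
    """Sort wide rows by ordering only (sort value, row id) pairs first,
    then re-attaching the payload -- the analogue of feeding fewer bytes
    into the SORT ORDER BY step."""
    order = sorted((sort_value(payload), rid) for rid, payload in rows.items())
    return [(rid, rows[rid]) for _sv, rid in order]

# Invented demo data: id -> (wide text payload, numeric column to sort on).
demo = {i: ("x" * 200, (i * 37) % 11) for i in range(100)}
ordered = sort_narrow(demo, lambda payload: payload[1])
```

In SQL terms this corresponds to ordering a slim inline view of keys and joining the remaining columns back afterwards; whether it pays off depends on row width and sort area size, so it is something to measure, not a guaranteed win.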
I appreciate all your advice, thank you -
How to improve Query Performance
Hi Friends...
I want to improve query performance, and I need the following:
1. What is the process to find out the performance? Which transaction codes, and how are they used?
2. How can I know whether a query is performing well or badly?
3. I want to see the numbers, i.e. how much time the query takes to run, and where the bottleneck is.
4. How do I improve the query performance? After doing what is needed to improve it, I want to see the query execution time again, i.e. whether it now runs faster.
For example: I need to create aggregates.
Question: where should I create the aggregates? I am in the production system, so where do I need to create them, i.e. in development, quality, or production?
Do I need to make any changes in development, given that I am in the production system?
So please tell me solution for my questions.
Thanks
Ganga
Message was edited by: Ganga Nhi ganga
Please refer to OSS note 557870: Frequently asked questions on query performance.
also refer to
Prakash's weblog
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
This is the OSS note with the FAQ on query performance:
1. What kind of tools are available to monitor the overall Query Performance?
1. BW Statistics
2. BW Workload Analysis in ST03N (Use Export Mode!)
3. Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyze a specific query in detail?
1. Transaction RSRT
2. Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all Info Cubes.
ii. You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
Check:
1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
3. If Buffers, I/O, CPU, memory on the database server are exhausted?
4. If Cube compression is used regularly
5. If Database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
1. Again you can use ST03N -> BW System Load
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name
4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime
10. What can I do if a query has a high database runtime?
1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check if database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use database check for statistics and indexes)
3. Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
1. Check if a high number of cells is transferred to the OLAP (use "All data" to get value "No. of Cells")
2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
3. Check if a user exit Usage is involved in the OLAP runtime?
4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
5. Check if a proper index on the inclusion table exists
12. What can I do if a query has a high frontend runtime?
1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
3. Check if the bandwidth for WAN connection is sufficient
REWARDING POINTS IS THE WAY OF SAYING THANKS IN SDN
CHEERS
RAVI -
How to improve ODS archiving performance ?
Hello,
We have a huge ODS with nearly 800 million records. We are
planning to archive this ODS as soon as possible. We've begun to
archive the ODS month by month, but as you can imagine the write/delete
(SARA) process is very slow (full table scan).
We have done a test with 7 million records to delete :
It took us :
2 hours : SARA Write process (create archive files)
12 hours : SARA Delete process (execute delete program)
Our ODS contains 45 million records per month, and we can launch the archive jobs in our production environment only during the weekend.
At this rate it will take us months to complete our archiving scenario.
How can we improve the archiving process ?
We don't really need the data that are beyond 18 months, is there a
better way to delete those data : selective deletion ?
Thank you in advance for your answers.
Business Information Warehouse 3.0B
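On the selective-deletion idea: deleting in bounded batches, with a commit between batches, tends to behave much better than one huge delete (shorter locks, smaller rollback/undo). A rough sketch of the loop in Python/SQLite, with every table and column name invented for the example:

```python
import sqlite3

# Invented stand-in for the ODS active table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ods (id INTEGER PRIMARY KEY, load_month TEXT)")
cur.executemany("INSERT INTO ods VALUES (?, ?)",
                [(i, "2005-01" if i % 3 else "2007-06") for i in range(3000)])
conn.commit()

def delete_in_batches(cutoff, batch_size=500):
    """Delete old rows a bounded chunk at a time, committing between
    chunks, so locks stay short and undo stays small."""
    total = 0
    while True:
        cur.execute(
            "DELETE FROM ods WHERE id IN "
            "(SELECT id FROM ods WHERE load_month < ? LIMIT ?)",
            (cutoff, batch_size))
        conn.commit()
        total += cur.rowcount
        if cur.rowcount < batch_size:
            return total

deleted = delete_in_batches("2006-01")
```

The same loop shape (delete where date older than cutoff, rownum-limited, commit, repeat) is the usual pattern for trimming very large tables; it does not replace SARA's audit trail, so use it only where archiving the data is genuinely not required.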
Message was edited by:
ramos alexandro
Hi,
While the archiving process is running, queries might not run or will perform badly; even a lock can happen. So, just an idea:
As you said, you only need the last 18 months of data for queries.
So create a new shadow ODS, ZSDO0002, with the same characteristics and key figures.
Then do a full load from your ODS (the one you have to archive), with a selection in the InfoPackage for the last 18 months only.
Run an init.
From then on, the latest deltas should go to this new ODS (ZSDO0002), not to your old ODS.
Now copy the queries from your old ODS to the new one with RSZC.
Now archive your old ODS.
Regards,
San!
If helpful assign points -
How to improve the query performance in to report level and designer level
How to improve the query performance in to report level and designer level......?
Please let me know in detail.
First, it all depends on the design of the database, the universe, and the report.
At the universe level, check your contexts carefully to get optimal performance, and also your joins; keeping your joins on key fields will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters and so on).
And when you create a parameter, try to match it with the key fields in the database.
good luck
Amr -
How to improve the performance of adobe forms
Hi,
Please give me some suggestions on how to improve the performance of an Adobe form.
Right now, when I trigger user events, it works fine for the first 6 or 7. From the next
one on, it hangs.
I read about Wizard form design approach, how to use the same here.
Thanks,
Aravind
Hi Otto,
The form is created using HCM Forms and Processes. I am triggering user events in the form.
A user event does a round trip in which the form data is sent to the backend SAP system; processing
happens on the ABAP side and the result appears on the form. The first 6 or 7 user events work correctly
and the result appears on the form. Around the 8th or 9th, the wait symbol appears and the form is not
re-rendered. The form is 6 pages long; the issue does not occur with a 1-page form.
I was reading ways to improve performance during re-rendering given below.
http://www.adobe.com/devnet/livecycle/articles/DynamicInteractiveFormPerformance.pdf
It talks about wizard form design approach. But in SFP transaction, I am not seeing any kind of wizard.
Let me know if you need further details.
Thanks,
Aravind -
Performance blocker on cFP-2020: file-I/O! How to improve?
Hi all,
after resolving my serial-communication problems I still have a performance problem. The code is way too slow, which causes e.g. the FTP server on the FieldPoint to stop responding to requests from my PC; I always get timeouts. I also frequently get loop-finished-late events within my two state machines. I have now used the timing and performance monitor to see which VI is taking so much time. The result: a file-I/O VI that writes data and log entries into three different log files. With a former version I simply used one VI that appends a string to an existing file. However, since this function disappeared with LV 8.2, I had to rewrite the code to use the following sequence of LabVIEW functions:
- file open
- set pointer to the end
- write string
- file close
The VI which calls this sequence is horribly slow: execution time per run is about 200 ms, putting it at the top of the list in the performance monitor. Are there any suggestions on how to improve this code? I simply want to append a string to the end of the log file.
The VI is attached. There are two features in the code which are not self-explanatory. First, the sub-VI generates a new file if the current one has been in use for longer than a preset time (15 minutes in my case): the creation time of the file is stored in the filename, and whenever the current time exceeds creation time plus 15 minutes, a new file name is created. For simplicity, the name is stored only for the first of the three log files; the other two are derived from the first filename by string operations. Second, whenever a file is "created", meaning it does not exist yet, a data header is written to the file before data is appended.
Can you see simple improvements here that would accelerate this code? Maybe open the file only once, append data subsequently, and only close it when a new file is created? But I do not need all three files at all times; there may be situations where only one file is needed and the others need not be created at all.
Thanks,
Olaf
Attachments:
makedatalogfiles.vi 42 KB
Ravens Fan wrote:
I think moving the open file, move to end of file, and close file out of the loop would certainly help. These functions could be associated with or built into your "determine new file" VI. Since the file paths get passed into the loops, you could pass them through with shift registers so that you can close them after the loops end.
One other thing to look at is your initialize array and insert into array functions. I believe insert into array is one of the costlier functions. Build array would be better. And initializing a much larger array and using replace array subset is better yet. But if you wind up with more elements than you had originally initialized for, you will have to use build array to enlarge it. I would recommend searching the Labview forum for insert into array, build array, and replace array subset for threads that do a better job explaining the differences and advantages of each.
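The gist of moving the open/close out of the loop, sketched in Python since the file-handling pattern is the same in any language: one open in append mode before the loop, one close after it, instead of an open/set-pointer/write/close cycle per record.

```python
import os
import tempfile

records = [f"sample log line {i}" for i in range(1000)]
path = os.path.join(tempfile.mkdtemp(), "data.log")

# One open/close pair around the whole loop; append mode positions each
# write at the end of the file, replacing the explicit set-pointer step.
with open(path, "a") as log:
    for rec in records:
        log.write(rec + "\n")

# Re-read to confirm every record landed.
with open(path) as log:
    line_count = sum(1 for _ in log)
```

Per-record open/close pays the filesystem-metadata cost on every iteration, which is exactly what dominates on a small controller like the cFP-2020; keeping the file handle alive amortizes it over the whole run.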
Thanks, that improved the performance of this vi by about two orders of magnitude. The application is now much more stable.
However, I cannot connect to the cFP-2020 by FTP anymore. I even switched the FieldPoint to boot without a VI.
To be specific, I can access the cFP and cd into all directories except the directory with the data. I assume there is a large
number of files in it now, but it used to work before even with lots of files. The only thing that might be problematic is that there is a space
in the folder name, but that has been like that for years and used to work.
Is there any reason (corrupted file or something like that) that can cause the ftp to fail on this specific directory?
Thanks,
Olaf
...at least I am very close now to a satisfying and running system.... :-) -
How to improve performance of MediaPlayer?
I tried to use the MediaPlayer with a On2 VP6 flv movie.
Showing a video with a resolution of 1024x768 works.
Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s leads to the video signal lagging a couple of seconds behind the audio signal. VLC, Media Player Classic, and a couple of other players have no problem with the video; only the FX MediaPlayer shows poor performance.
Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
Does somebody know a solution for these problems?
Cheers
masim
Duplicate thread.
How to improve performance of attached query -
How to Improve performance issue when we are using BRM LDB
HI All,
I am facing a performance issue when retrieving data from BKPF and the corresponding BSEG table. I see that for the fiscal period there are around 6 million (60 lakh) records, and populating the final internal table from these tables takes a very long time.
When I tried to use the BRM LDB with SAP Query/QuickViewer, it was the same issue.
Please suggest how I can improve the performance.
Thanks in advance
Chakradhar
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
Rob -
How to Improve the performance in Variable Selection Screen.
Hi,
At query level we have a variable with "user entry / default value". When the user wants to select a particular value and presses F4, it takes hours. How can we improve the performance of the variable selection screen?
Thanks in Advance.
Regards,
Venkat.
Dear Venkat,
Please try the following steps:
1. Say the variable is created against InfoObject 0EMPLOYEE, and the user is trying to select a value for it when executing the report.
2. Go to RSA1 -> InfoObject tab -> select InfoObject 0EMPLOYEE.
3. Select the following options:
Query Execution Filter Val. Selectn - 'Only Posted Value for Navigation'
Filter Value Repr. At Query Exec. - 'Selector Box Without Values'
Please let me know if there is any more issue. Feel free to raise further concern
Thnx,
Sukdev K -
How to improve the performance of one program in one select query
Hi,
I am facing a performance issue in one program; part of its code is given below.
The SELECT query below is taking a long time. How can I improve the performance?
A quick response would be highly appreciated.
Program code
DATA: BEGIN OF t_dels_tvpod OCCURS 100,
vbeln LIKE tvpod-vbeln,
posnr LIKE tvpod-posnr,
lfimg_diff LIKE tvpod-lfimg_diff,
calcu LIKE tvpod-calcu,
podmg LIKE tvpod-podmg,
uecha LIKE lips-uecha,
pstyv LIKE lips-pstyv,
xchar LIKE lips-xchar,
grund LIKE tvpod-grund,
END OF t_dels_tvpod.
DATA: l_tabix LIKE sy-tabix,
lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
ls_dels_tvpod LIKE t_dels_tvpod.
SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
FOR ALL ENTRIES IN t_dels_tvpod
WHERE vbeln = t_dels_tvpod-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart NOT IN r_del_types_exclude.
Waiting for quick response.
Best regards,
BDP
Bansidhar,
1) You need to add a check to make sure that the internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not blank. If it is blank, skip the SELECT statement.
2) Check the performance with and without clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT causes the select statement to not use the index. Instead of 'lfart NOT IN r_del_types_exclude' use 'lfart IN r_del_types_exclude' and build r_del_types_exclude by using r_del_types_exclude-sign = 'E' instead of 'I'.
3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
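The preparation steps above (guard against an empty driver table, deduplicate the keys) carry over to any language. Here is a loose Python/SQLite sketch of the same idea, with all table and column names invented for the example:

```python
import sqlite3

# Invented stand-in for LIKP with a delivery-type column.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE likp (vbeln TEXT PRIMARY KEY, lfart TEXT)")
cur.executemany("INSERT INTO likp VALUES (?, ?)",
                [("D1", "LF"), ("D2", "LP"), ("D3", "LF")])

# Driver rows with duplicates, like t_dels_tvpod.
driver = [{"vbeln": "D1"}, {"vbeln": "D1"}, {"vbeln": "D3"}]

matches = []
# Deduplicate the keys, as SORT + DELETE ADJACENT DUPLICATES does.
keys = sorted({row["vbeln"] for row in driver})
if keys:  # never run the query with an empty driver set
    marks = ",".join("?" * len(keys))
    matches = [r[0] for r in cur.execute(
        f"SELECT vbeln FROM likp WHERE vbeln IN ({marks}) "
        "AND lfart != 'LP'", keys)]
```

An empty driver set in ABAP's FOR ALL ENTRIES would select the whole table, which is why the emptiness check comes first; the dedupe simply keeps the generated IN-list as small as possible.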
Try doing something like this.
TYPES: BEGIN OF ty_del_types_exclude,
sign(1) TYPE c,
option(2) TYPE c,
low TYPE likp-lfart,
high TYPE likp-lfart,
END OF ty_del_types_exclude.
DATA: w_del_types_exclude TYPE ty_del_types_exclude,
t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
t_dels_tvpod_tmp LIKE TABLE OF t_dels_tvpod .
IF NOT t_dels_tvpod[] IS INITIAL.
* Assuming that I would like to exclude delivery types 'LP' and 'LPP'
CLEAR w_del_types_exclude.
REFRESH t_del_types_exclude.
w_del_types_exclude-sign = 'E'.
w_del_types_exclude-option = 'EQ'.
w_del_types_exclude-low = 'LP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
w_del_types_exclude-low = 'LPP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
t_dels_tvpod_tmp[] = t_dels_tvpod[].
SORT t_dels_tvpod_tmp BY vbeln.
DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
COMPARING
vbeln.
SELECT vbeln
FROM likp
INTO TABLE lt_dels_tvpod
FOR ALL ENTRIES IN t_dels_tvpod_tmp
WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart IN t_del_types_exclude.
ENDIF. -
HOW TO IMPROVE PERFORMANCE ON SUM FUNCTION IN INLINE SQL QUERY
SELECT NVL(SUM(B1.T_AMOUNT),0) PAYMENT,B1.ACCOUNT_NUM,B1.BILL_SEQ
FROM (
SELECT P.T_AMOUNT,P.ACCOUNT_NUM,P.BILL_SEQ
FROM PAYMENT_DATA_VIEW P
WHERE TRUNC(P.ACC_PAYMENT_DATE) < '01-JAN-2013'
AND P.CUSTOMER_NAME ='XYZ'
AND P.CLASS_ID IN (-1,1,2,94)
) B1
GROUP BY B1.ACCOUNT_NUM,B1.BILL_SEQ
Above is the query. If we run the inner query alone, it takes a few seconds to execute, but when we sum the amount per account_num and bill_seq through the inline view, it takes much longer.
Note: the inner query returns around 1 million (10 lakh) rows.
How can we improve the performance of this query?
Please suggest.
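One concrete thing worth checking, assuming ACC_PAYMENT_DATE is indexed (the post does not say): TRUNC(P.ACC_PAYMENT_DATE) < '01-JAN-2013' wraps the column in a function, which blocks a plain index on it, while P.ACC_PAYMENT_DATE < DATE '2013-01-01' selects the same rows for this comparison and stays index-friendly. The effect, illustrated with SQLite's date() standing in for TRUNC:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE payments (acc_payment_date TEXT)")
cur.execute("CREATE INDEX pay_dt ON payments (acc_payment_date)")
cur.executemany("INSERT INTO payments VALUES (?)",
                [("2012-12-31 18:00:00",), ("2013-02-01 09:00:00",)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the plan text in the last column.
    return " ".join(row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

# Function on the column: the optimizer cannot use the index.
wrapped = plan("SELECT * FROM payments "
               "WHERE date(acc_payment_date) < '2013-01-01'")
# Bare column with a range predicate: index range scan.
bare = plan("SELECT * FROM payments "
            "WHERE acc_payment_date < '2013-01-01'")
```

On Oracle the corresponding check is whether the plan shows a full scan caused by TRUNC; also note that '01-JAN-2013' as a bare string relies on implicit conversion and the session's NLS date format, so an explicit DATE literal is safer as well as faster.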
Thanks in advance
989209 wrote:
(query and question quoted above)
Thanks in advancea) Lac is not an international unit, so is not understood by everyone. This is an international forum so please use international units.
b) Please read the FAQ: {message:id=9360002} to learn how to format your question correctly for people to help you.
c) As your question relates to performance tuning, please also read the two threads linked to in the FAQ: {message:id=9360003} for an idea of what specific information you need to provide for people to help you tune your query. -
How to improve the performance of the attached query, Please help
Hi,
How can I improve the performance of the query below? Please help; the explain plan is also attached -
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1
Hello,
This has really nothing to do with the Oracle Forms product.
Please, send the SQL or/and PL/SQL questions in the corresponding forums.
Francois -
How to improve the performance of the application
Hi,
We have customized the standard SRM BSP application and integrated to portal.
Now it has some performance issues: it takes a long time to load the page and also to navigate between the views.
We have tuned the application's performance to the maximum extent, but the problem of slow loading remains.
Can anyone suggest how to improve the performance of the application?
Thanks & Regards
Warun
The system configuration is more than enough to run Java applications.
You are probably doing time-consuming operations in the event thread. That blocks the event thread, so the GUI appears unresponsive; if so, the design needs rework.
Use a separate thread for time-consuming operations. -
How to improve the performance of the abap program
hi all,
I have created an ABAP program, and it is taking a long time since the number of records is large. Can anyone let me know how to improve the performance of my ABAP program
using the SE30 and ST05 transactions?
Can anyone help me out step by step?
regds
haritha
Hi Haritha,
->Run Any program using SE30 (performance analysis)
Note: Click on the Tips & Tricks button from SE30 to get performance improving tips.
Using this you can improve the performance by analyzing your code part by part.
->To turn runtime analysis on within ABAP code, insert the following:
SET RUN TIME ANALYZER ON.
->To turn runtime analysis off within ABAP code, insert the following:
SET RUN TIME ANALYZER OFF.
->Always check that the driver internal table is not empty when using FOR ALL ENTRIES
->Avoid for all entries in JOINS
->Try to avoid joins and use FOR ALL ENTRIES.
->Try to restrict the joins to 1 level only ie only for tables
->Avoid using Select *.
->Avoid having multiple Selects from the same table in the same object.
->Try to minimize the number of variables to save memory.
->The sequence of fields in 'where clause' must be as per primary/secondary index ( if any)
->Avoid creation of index as far as possible
->Avoid operators like <>, > , < & like % in where clause conditions
->Avoid select/select single statements in loops.
->Try to use BINARY SEARCH in READ TABLE, and ensure the table is sorted before using it
->Avoid using aggregate functions (SUM, MAX etc) in selects ( GROUP BY , HAVING,)
->Avoid using ORDER BY in selects
->Avoid Nested Selects
->Avoid Nested Loops of Internal Tables
->Try to use FIELD SYMBOLS.
->Try to avoid INTO CORRESPONDING FIELDS OF
->Avoid using SELECT DISTINCT; use DELETE ADJACENT DUPLICATES on a sorted table instead
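The BINARY SEARCH tip in the list above generalizes: a binary search on an unsorted table silently returns wrong results. A quick Python illustration, with bisect standing in for READ TABLE ... BINARY SEARCH:

```python
import bisect

table = ["D004", "A001", "C003", "B002"]  # unsorted "internal table"

def read_binary(tab, key):
    """Analogue of READ TABLE ... BINARY SEARCH: only correct if tab
    is sorted."""
    i = bisect.bisect_left(tab, key)
    return i < len(tab) and tab[i] == key

unsorted_hit = read_binary(table, "A001")  # misses even though present
table.sort()                               # the required SORT step
sorted_hit = read_binary(table, "A001")    # now found
```

The same rule holds in ABAP: SORT itab must precede READ TABLE ... BINARY SEARCH (or use a SORTED TABLE type so the runtime maintains the order for you).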
Check the following Links
Re: performance tuning
Re: Performance tuning of program
http://www.sapgenie.com/abap/performance.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Check also http://service.sap.com/performance
and
books like
http://www.sap-press.com/product.cfm?account=&product=H951
http://www.sap-press.com/product.cfm?account=&product=H973
http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
CATT - Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
ECATT - Extended Computer Aided testing tool.
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
You can use transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, the SAP Code Inspector.
edited by,
Naveenan -
How to improve the performance of serialization/deserialization?
Hi, Friends,
I have a question about how to improve the performance of serialization/deserialization.
When an object is serialized, the entire tree of objects rooted at the object is also serialized. When it is deserialized, the tree is reconstructed. For example, suppose a serializable Father object contains (a serializable field of) an array of Child objects. When a Father object is serialized, so is the array of Child objects.
For the sake of performance, when I deserialize a Father object I don't want to deserialize any Child objects. However, I should still be able to know that the Father object has children, and to deserialize any child of that Father object when necessary.
Could you tell me how to achieve the above idea?
Thanks.
Youbin

You could try something like this...
import java.io.*;
import java.util.*;

class Child implements Serializable {
    int id;
    Child(int id) { this.id = id; }
    public String toString() { return String.valueOf(id); }
}

class Father implements Serializable {
    Child[] children = new Child[10];

    public Father() {
        Arrays.fill(children, new Child(1001));
    }

    // must be private to be used by serialization, and must not close the stream
    private void readObject(ObjectInputStream stream)
            throws IOException, ClassNotFoundException {
        int numchildren = stream.readInt();
        children = new Child[numchildren];
        for (int i = 0; i < numchildren; i++)
            children[i] = (Child) stream.readObject();
    }

    private void writeObject(ObjectOutputStream stream) throws IOException {
        stream.writeInt(children.length);
        for (int i = 0; i < children.length; i++)
            stream.writeObject(children[i]);
    }

    Child[] getChildren() { return children; }
}

class FatherProxy {
    int numchildren;
    String filename;

    public FatherProxy(String filename) throws IOException {
        this.filename = filename;
        // reads only the count that main() wrote ahead of the Father object,
        // so no Child object is deserialized here
        ObjectInputStream ois =
            new ObjectInputStream(new FileInputStream(filename));
        numchildren = ois.readInt();
        ois.close();
    }

    int getNumChildren() { return numchildren; }

    Child[] getChildren() throws IOException, ClassNotFoundException {
        ObjectInputStream ois =
            new ObjectInputStream(new FileInputStream(filename));
        ois.readInt();                        // skip the count
        Father f = (Father) ois.readObject(); // full deserialization, on demand
        ois.close();
        return f.getChildren();
    }
}

public class fatherref {
    public static void main(String[] args) throws Exception {
        // create the serialized file: the child count first, then the object graph
        Father f = new Father();
        ObjectOutputStream oos =
            new ObjectOutputStream(new FileOutputStream("father.ser"));
        oos.writeInt(f.getChildren().length);
        oos.writeObject(f);
        oos.close();

        // read in just what is needed -- numchildren
        FatherProxy fp = new FatherProxy("father.ser");
        System.out.println("numchildren: " + fp.getNumChildren());
        // do some processing

        // only when you need the rest -- the children
        Child[] c = fp.getChildren();
        System.out.println("children:");
        for (int i = 0; i < c.length; i++)
            System.out.println("i " + i + ": " + c[i]);
    }
}
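A different trick, if the children can be recreated or loaded separately: declare the field transient so serialization simply skips it, and keep only a cheap summary (the count) in the serialized form. A minimal, self-contained sketch; the class names Kid and LazyFather are illustrative, not from the original post:

```java
import java.io.*;
import java.util.*;

// Kid plays the role of Child above
class Kid implements Serializable {
    int id;
    Kid(int id) { this.id = id; }
}

// the transient array is NOT written out; only the count is
class LazyFather implements Serializable {
    int numChildren;
    transient Kid[] kids;
    LazyFather(Kid[] kids) { this.kids = kids; this.numChildren = kids.length; }
}

public class TransientDemo {
    public static void main(String[] args) throws Exception {
        Kid[] kids = new Kid[10];
        Arrays.fill(kids, new Kid(1001));

        // serialize to memory for the demo
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new LazyFather(kids));
        }

        // deserialize: the count survives, the children do not
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            LazyFather f = (LazyFather) ois.readObject();
            System.out.println(f.numChildren);   // prints 10
            System.out.println(f.kids == null);  // prints true -- skipped
        }
    }
}
```

The cost is that you must reload or rebuild the transient children yourself when they are needed, e.g. from a second file.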