Performance issue with Indexing
Our users are experiencing a performance problem on an Oracle 8.1.7 database. We are using Business Objects to display the data as required.
Below is the script generated by BO.
Can you please suggest how to resolve this issue technically?
Please let me know if you need any more information.
Thanks
This is an example of the typical query I use. It works OK, but I think some indexing could make it run much quicker.
I think TEST.SH_TYS____COMPL.CLOSED_TIME and TEST.SH_TYS____COMPL.CANCELLED_TIME are key fields that could be indexed to speed up the report.
Also, I tend to do a lot of reports using TEST.TYS__COMPL_CONFIGURATION.OWNING_CENTRE as a filter. This isn't a very large table, but it
uses a table join on TEST.TYS____COMPL.CID = TEST.TYS__COMPL_CONFIGURATION.CID, so indexing the TEST.TYS____COMPL.CID field could
provide an improvement.
The TEST.TYS____COMPL.CREATE_DATE is another key field when we want to identify tickets raised during a period.
The TEST.TYS____COMPL.SUBCASE_OF is another key field we filter by.
The TEST.TYS____COMPL.CLEAR_CODE_OBJECT_FAMILY, TEST.TYS____COMPL.CLEAR_CODE_OBJECT and TEST.TYS____COMPL.CLEAR_CODE_ACTION fields may also be useful.
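Sketching those suggestions as DDL (the index names here are invented for illustration; each index should be justified against the actual execution plan, and on 8.1.7 the tables should be re-analyzed afterwards so the cost-based optimizer can see the new indexes):

```sql
-- Hypothetical candidate indexes for the filter and join columns listed above
CREATE INDEX test.sh_compl_closed_time_i ON test.sh_tys____compl (closed_time);
CREATE INDEX test.sh_compl_cancel_time_i ON test.sh_tys____compl (cancelled_time);
CREATE INDEX test.sh_compl_old_id_i      ON test.sh_tys____compl (old_compl_id);
CREATE INDEX test.compl_cid_i            ON test.tys____compl (cid);
CREATE INDEX test.compl_create_date_i    ON test.tys____compl (create_date);
CREATE INDEX test.compl_subcase_of_i     ON test.tys____compl (subcase_of);
CREATE INDEX test.compl_conf_owning_i    ON test.tys__compl_configuration (owning_centre);

-- On 8i, refresh statistics so the CBO considers the new indexes
ANALYZE TABLE test.sh_tys____compl COMPUTE STATISTICS;
```

The index on OLD_COMPL_ID supports the join in the WHERE clause below; whether each of the others pays off depends on the selectivity of the corresponding filter.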
SELECT TEST.TYS____COMPL.COMPL_ID, TEST.TYS____COMPL.STATUS, TEST.TYS____COMPL.MODIFIED_DATE,
TEST.TYS____COMPL.CREATE_DATE, TEST.TYS____COMPL.SUBMITTER, TEST.TYS____COMPL.LAST_MODIFIED_BY,
TEST.TYS____COMPL.COMPL_ENQUIRY_DESCRIPTION, TEST.TYS____COMPL.COMPL_CLEAR_TARGET_TIME,
TEST.TYS____COMPL.CURRENT_ACTION, TEST.TYS____COMPL.RESTORE_UNITS, TEST.TYS____COMPL.CID,
TEST.TYS____COMPL.ELEMENT_ID, TEST.TYS____COMPL.SITE_LOCATION_ID, TEST.TYS____COMPL.SITE,
TEST.TYS____COMPL.PROACTIVITY, TEST.TYS____COMPL.INCIDENT_START_TIME,
TEST.TYS____COMPL.TOTAL_OUTAGE_TIME__SECS_, TEST.TYS____COMPL.TOTAL_TICKET_TIME__SECS_,
TEST.TYS____COMPL.TOTAL_COMPL_TIME__SECS_, TEST.TYS____COMPL.SEVERITY, TEST.TYS____COMPL.BACKUP_METHOD,
TEST.TYS____COMPL.ELEMENT, TEST.TYS____COMPL.SERVICE_TYPE, TEST.TYS____COMPL.ELEMENT_DESCRIPTION,
TEST.TYS____COMPL.ELEMENT_CATEGORY, TEST.TYS____COMPL.BACK_UP_END_DATE, TEST.TYS____COMPL.BACK_UP_START_DATE,
TEST.TYS____COMPL.INCIDENT_END_TIME, TEST.TYS____COMPL.CUST_ACCEPTED_CLOSURE_TIME,
TEST.TYS____COMPL.CLOSING_COMMENTS, TEST.TYS____COMPL.CLEAR_CODE, TEST.TYS____COMPL.ASSIGNED_TO,
TEST.TYS____COMPL.ASSIGNEE_LOGIN, TEST.TYS____COMPL.RESTORE_AMOUNT, TEST.TYS____COMPL.COMPL_REPORTABLE,
TEST.TYS____COMPL.SLA_ATTRIBUTABLE, TEST.TYS____COMPL.CDSP_OUTAGE__SECS_, TEST.TYS____COMPL.FORMAT,
TEST.TYS____COMPL.CONTRACT_SERVICE_LEVEL, TEST.TYS____COMPL.ASSIGNED_TEAM,
TEST.TYS____COMPL.REASON_FOR_CANCELLATION, TEST.TYS____COMPL.CLEAR_CODE_OBJECT,
TEST.TYS____COMPL.CLEAR_CODE_ACTION, TEST.TYS____COMPL.CLOSE_CODE_OPTION,
TEST.TYS____COMPL.CLEAR_CODE_OBJECT_FAMILY, TEST.TYS____COMPL.SERVICE_IMPACT, TEST.SH_TYS____COMPL.CLOSED_TIME,
TEST.SH_TYS____COMPL.CLOSED_USER, TEST.SH_TYS____COMPL.CANCELLED_TIME, TEST.SH_TYS____COMPL.CANCELLED_USER,
TEST.TYS__COMPL_CONFIGURATION.OWNING_CENTRE, TEST.TYS__COMPL_CONFIGURATION.CONTRACT,
TEST.TYS____COMPL.SOURCE, TEST.TYS____COMPL.SUBCASE_OF, TEST.SH_TYS____COMPL.OLD_COMPL_ID,
TEST.TYS____COMPL.SUPPLIER_1, TEST.TYS____COMPL.SUPPLIER_4, TEST.TYS____COMPL.SUPPLIER_3,
TEST.TYS____COMPL.SUPPLIER_2, TEST.TYS____COMPL.TIME_SUPPLIER_ADVISED_BTSS_1,
TEST.TYS____COMPL.TIME_SUPPLIER_ADVISED_BTSS_2, TEST.TYS____COMPL.TIME_SUPPLIER_ADVISED_BTSS_3,
TEST.TYS____COMPL.TIME_SUPPLIER_ADVISED_BTSS_4, TEST.TYS____COMPL.SUPPLIER_1_REF,
TEST.TYS____COMPL.SUPPLIER_2_REF, TEST.TYS____COMPL.SUPPLIER_3_REF, TEST.TYS____COMPL.SUPPLIER_4_REF,
TEST.TYS____COMPL.PASSED_TO_SUPPLIER_AT_1, TEST.TYS____COMPL.PASSED_TO_SUPPLIER_AT_2,
TEST.TYS____COMPL.PASSED_TO_SUPPLIER_AT_3, TEST.TYS____COMPL.PASSED_TO_SUPPLIER_AT_4,
TEST.TYS____COMPL.SUPPLIER_RESOLVED_TIME_1, TEST.TYS____COMPL.SUPPLIER_RESOLVED_TIME_2,
TEST.TYS____COMPL.SUPPLIER_RESOLVED_TIME_3, TEST.TYS____COMPL.SUPPLIER_RESOLVED_TIME_4,
TEST.TYS____COMPL.FAILURE, TEST.TYS____COMPL.COUNTRY, TEST.TYS____COMPL.CUSTOMERS_NAME,
TEST.TYS____COMPL.CUSTOMERS_TEL_NO, TEST.TYS____COMPL.CUSTOMER_E_MAIL_ADDRESS
FROM TEST.TYS____COMPL, TEST.SH_TYS____COMPL, TEST.TYS__COMPL_CONFIGURATION
WHERE TEST.TYS____COMPL.OLD_COMPL_ID = TEST.SH_TYS____COMPL.OLD_COMPL_ID AND
TEST.TYS____COMPL.CID = TEST.TYS__COMPL_CONFIGURATION.CID AND
((TEST.SH_TYS____COMPL.CLOSED_TIME < (TO_DATE(TRUNC(next_day(SYSDATE - 7, 'MONDAY')), 'DD/MM/YY HH24:MI:SS')
- TO_DATE('01/01/1970 01:00:00', 'DD/MM/YY HH24:MI:SS')) * 24 * 3600 + 1) AND
(TEST.SH_TYS____COMPL.CLOSED_TIME > (TO_DATE(TRUNC(next_day(SYSDATE - 14, 'MONDAY')), 'DD/MM/YY HH24:MI:SS')
- TO_DATE('01/01/1970 01:00:00', 'DD/MM/YY HH24:MI:SS')) * 24 * 3600) OR
(TEST.SH_TYS____COMPL.CANCELLED_TIME < (TO_DATE(TRUNC(next_day(SYSDATE - 7, 'MONDAY')), 'DD/MM/YY HH24:MI:SS')
- TO_DATE('01/01/1970 01:00:00', 'DD/MM/YY HH24:MI:SS')) * 24 * 3600 + 1) AND
(TEST.SH_TYS____COMPL.CANCELLED_TIME > (TO_DATE(TRUNC(next_day(SYSDATE - 14, 'MONDAY')), 'DD/MM/YY HH24:MI:SS')
- TO_DATE('01/01/1970 01:00:00', 'DD/MM/YY HH24:MI:SS')) * 24 * 3600))
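For what it's worth, CLOSED_TIME and CANCELLED_TIME appear to store Unix-style epoch seconds (with a one-hour offset), and the predicates above compute the epoch values of last week's Monday-to-Monday boundaries. A slightly cleaner sketch of that calculation follows; note that TRUNC already returns a DATE, so the extra TO_DATE wrapper in the original forces a redundant DATE-to-string-to-DATE round trip that depends on NLS settings:

```sql
-- Sketch: epoch-second boundaries for "last week" (Monday to Monday),
-- keeping the one-hour offset used in the original query
SELECT (TRUNC(NEXT_DAY(SYSDATE - 14, 'MONDAY'))
        - TO_DATE('01/01/1970 01:00:00', 'DD/MM/YYYY HH24:MI:SS')) * 86400 AS week_start_epoch,
       (TRUNC(NEXT_DAY(SYSDATE - 7, 'MONDAY'))
        - TO_DATE('01/01/1970 01:00:00', 'DD/MM/YYYY HH24:MI:SS')) * 86400 AS week_end_epoch
FROM dual;
```

The original adds one second to the upper bound. Because the conversion is applied to constants rather than to the CLOSED_TIME/CANCELLED_TIME columns themselves, an index on those columns can still be used for these range predicates.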
I'm attaching the explain plan from TOAD below for further investigation.
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 7 K 19360
CONCATENATION
HASH JOIN 3 K 2 M 11048
TABLE ACCESS FULL ARADMIN.T60 11 K 473 K 200
NESTED LOOPS 3 K 1 M 10837
TABLE ACCESS BY INDEX ROWID ARADMIN.H35 3 K 111 K 3255
INDEX RANGE SCAN ARADMIN.STF_CANT 3 K 48
TABLE ACCESS BY INDEX ROWID ARADMIN.T35 1 M 698 M 2
INDEX UNIQUE SCAN ARADMIN.IT35 1 M 1
HASH JOIN 3 K 2 M 11048
TABLE ACCESS FULL ARADMIN.T60 11 K 473 K 200
NESTED LOOPS 3 K 1 M 10837
TABLE ACCESS BY INDEX ROWID ARADMIN.H35 3 K 111 K 3255
INDEX RANGE SCAN ARADMIN.STF_CLOT 3 K 48
TABLE ACCESS BY INDEX ROWID ARADMIN.T35 1 M 698 M 2
INDEX UNIQUE SCAN ARADMIN.IT35 1 M 1
Similar Messages
-
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
As a part of the upgrade, Client wants Oracle SES to be deployed for some modules including, Purchasing, Payables (Vouchers)
We are facing severe performance issues with the Vouchers index build. (Volume of data = approx. 8.5 million rows of data)
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES. -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving Performance with Pipelined Table Functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ; -- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
FROM TABLE (processor (base_query (argA, argB), 100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
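For reference, one common way to capture such a trace with wait events is the extended SQL trace (event 10046 at level 8); this is a sketch, and the exact syntax can vary by Oracle version:

```sql
-- Enable extended SQL trace with wait events for the current session
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- ... run the statement under test here ...
ALTER SESSION SET events '10046 trace name context off';
```

The resulting trace file (written to USER_DUMP_DEST) can then be formatted with tkprof to see where the time and waits go.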
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issues with Homesharing?
I have a Time Capsule as the base station for my wireless network, then 2 Airport Express setup to extend the network around the house, an iMac i7 as the main iTunes Library and couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home sharing. I've done several tests making sure nothing is taking additional bandwidth, so here are the list of issues:
1) With nothing else running, trying playing a movie via home sharing in an iPad 2 which is located on my iMac, it stops and I have to keep pressing the play button over and over again. I typically see that the iPad tries to download part of the movie first and then starts playing so that it deals with the bandwidth, but in many cases it doesn't.
2) When trying to play any iTunes content (movies, music, photos, etc) from my Apple TV I can see my computer library, but when I go in on any of the menus, it says there's no content. I have to reboot the Apple TV and then problem fixed. I's just annoying that I have to reboot.
3) When watching a Netflix movie on my iPad and with Airplay I send the sound to some speakers via Airplay through an Airport Express. At time I lose the connection to the speakers.
I've complained about Wi-Fi's instability, but here I tried to keep everything within Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood to be much more stable.
Does anyone have some suggestions?
Hi,
you should analyze the DB after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
make sure your sequences cache (ALTER SEQUENCE s CACHE 10000),
drop all unneeded indexes while loading, and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data, it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct load? Or do you direct load already?
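A minimal sketch of those suggestions combined (all object names here are placeholders, not from the original post):

```sql
-- Enlarge the sequence cache to cut down on SEQ$ updates during the load
ALTER SEQUENCE my_pk_seq CACHE 10000;

-- Disable triggers for the duration of the load
ALTER TABLE target_table DISABLE ALL TRIGGERS;

-- Direct-path (APPEND) insert writes above the high-water mark and
-- bypasses conventional buffer-cache space management
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM staging_table;
COMMIT;

ALTER TABLE target_table ENABLE ALL TRIGGERS;
-- Rebuild any dropped indexes afterwards, then re-analyze the table
```

With SQL*Loader the equivalent is DIRECT=TRUE; either way, remember that a direct-path insert must be committed before the session can query the table again.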
Dim -
Performance Issues with large XML (1-1.5MB) files
Hi,
I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
When I do an XPath query against an element of SQL type VARCHAR2, I get good performance. But when I do a similar XPath query against an element of a collection SQL type (VARRAY of VARCHAR2), I get very ordinary performance.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
I guess I'm running out of options and patience as well. ;)
I would appreciate any ideas/suggestions. Please help...
Thanks,
Ramakrishna Chinta
Are you having similar symptoms to mine? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
-
Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1
Hello,
I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 and RedHat Linux 2.1. The processor goes berserk at 100% for long periods of time (some 5 min.), and all the RAM gets used.
Some environment characteristics:
Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
OS: RedHat Linux 2.1 Enterprise.
Oracle: Oracle Enterprise Edition 9.2.0.4
Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
Does anyone know what could be going on?
Is anybody else having this similar behavior?
We changed from SQL Server, so we are not world experts on the matter. But we want a reliable system nonetheless.
Please help us out; give us some tips, tricks, or guides.
Thanks to all,
Frank
Thank you very much, and sorry I couldn't write sooner. It seems that the administrator doesn't see much kswapd activity going on, so I don't really know what is happening.
We are looking at some queries and some indexing, but this is nuts. If I had some poor queries, which we really don't, the server would show peaks, right?
But it goes crazy and has two Oracle processes taking all the resources. There seems to be little swapping going on.
So now what? They are already talking about MS SQL. Please help me out here, this is crazy!!!
We have maybe the most powerful combination here. What is Oracle doing?
We even kill the worker process of IIS and have no one do anything with the database, and still those two processes keep going.
Can someone help me?
Thanks,
Frank -
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the optimizer is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))
Hello Vitor,
There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been analyzed recently, so that the optimizer has the most current data about the table.
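For example, gathering statistics on the versioned table might look like the following (owner and table names taken from the plan above; the parameter choices are just an illustration):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SIG',
    tabname => 'SIG_QUA_IMG_LT',
    cascade => TRUE);  -- also gather statistics on the table's indexes
END;
/
```

After refreshing statistics, re-check the merge plan to see whether the estimates (and the chosen join order) change.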
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
Thank You,
Ben -
Performance issues with data warehouse loads
We have performance issues with our data warehouse load ETL process. I have run ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
Scott
Hi,
you should analyze the DB after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
make sure your sequences cache (ALTER SEQUENCE s CACHE 10000),
drop all unneeded indexes while loading, and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data, it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct load? Or do you direct load already?
Dim -
Performance issue with HRALXSYNC report..
HI,
I'm facing a performance issue with the HRALXSYNC report. As this is a standard report, can anybody suggest how to optimize it?
Thanks in advance.
Saleem Javed
Moderator message: Please Read before Posting in the Performance and Tuning Forum, also look for existing SAP notes and/or send a support message to SAP.
Edited by: Thomas Zloch on Aug 23, 2011 4:17 PM
Sreedhar,
Thanks for your quick response. Indexes were not created for the VBPA table. Basis people tested by creating indexes and reported that it takes more time with the indexes than with the regular query optimizer. This is happening in the function forward_ag_selection.
select vbeln lifnr from vbpa
appending corresponding fields of table lt_select
where vbeln in ct_vbeln
and posnr eq posnr_initial
and parvw eq 'SP'
and lifnr in it_spdnr.
I don't see any issue with this query. I'll give more info later. -
Performance Issue with VL06O report
Hi,
We are having a performance issue with the VL06O report when it is run with a forwarding agent; it takes about an hour. The issue is with the VBPA table, and we found one OSS note, but it is for old versions; ours is ECC 5.0. Does anybody know the solution? If you need more information, please ask me.
Thanks,
Surya
Sreedhar,
Thanks for your quick response. Indexes were not created for the VBPA table. Basis people tested by creating indexes and reported that it takes more time with the indexes than with the regular query optimizer. This is happening in the function forward_ag_selection.
select vbeln lifnr from vbpa
appending corresponding fields of table lt_select
where vbeln in ct_vbeln
and posnr eq posnr_initial
and parvw eq 'SP'
and lifnr in it_spdnr.
I don't see any issue with this query. I'll give more info later. -
Performance Issue with BSIS(open accounting items)
Hey All,
I am having a serious performance issue with an accrual report which gets all open GL items, and I need some tips for optimization.
The main issue is that I am accessing large tables like BSIS, BSEG, BSAS etc. without proper indexes, and I am dealing with huge amounts of data.
The SELECT itself takes a long time, and since I have so much data, overall execution is slow too.
The select which concerns me the most is:
SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
INTO TABLE i_bsis
FROM bsis
WHERE bukrs = '1000'
AND hkont in r_hkont
AND budat <= p_lcdate
AND augdt = 0
AND augbl = space
AND gsber = c_ZRL1
AND gjahr BETWEEN l_gjahr2 AND l_gjahr
AND ( blart = c_re "Invoice
OR blart = c_we "Goods receipt
OR blart = c_zc "Invoice Cancels
OR blart = c_kp ). "Accounting offset
I have seen other related threads, but was not that helpful.
We already have a secondary index on BUKRS, HKONT and BUDAT, and I have checked in ST05 that it does get used. But in spite of that, it takes more than 15 hours to complete (maybe because of the huge data volume).
Any Input is highly appreciated.
Thanks
Thank you, Thomas, for your inputs:
You said that R_HKONT contains several ranges of account numbers. If these ranges cover a significant
portion of the overall existing account numbers, then there is no really quick access possible via the
BSIS primary key.
Unfortunately R_HKONT contains all account numbers.
As Rob said, your index on HKONT and BUDAT does not help much, since you are selecting "<=" on
BUDAT. No chance of narrowing down that range?
Will look into this.
What about GSBER? Does the value in c_ZRL1 provide a rather small subset of the overall values? Then
an index on BUKRS and GSBER might be helpful.
ZRL1 does provide a decent selection, but I don't know whether one more index is a good idea for overall system performance.
I assume that the four document types are not very selective, so it probably does not pay off to
investigate selecting on BKPF (there is an index involving BLART) and joining BSIS for the additional
information. You still might want to look into it though.
I did try to investigate this option too. Based on other threads related to BSIS and Robs Suggestion in
those threads I tried this:
SELECT bukrs belnr gjahr blart budat
FROM bkpf INTO TABLE bkpf_l
WHERE bukrs = c_pepsico
AND bstat IN (' ', 'A', 'B', 'D', 'M', 'S', 'V', 'W', 'Z')
AND blart IN ('RE', 'WE', 'ZC', 'KP')
AND gjahr BETWEEN l_gjahr2 AND l_gjahr
AND budat <= p_lcdate.
SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
FROM bsis INTO TABLE i_bsis FOR ALL ENTRIES IN bkpf_l
WHERE bukrs = bkpf_l-bukrs
AND hkont IN r_hkont
AND budat = bkpf_l-budat
AND augdt = 0
AND augbl = space
AND gjahr = bkpf_l-gjahr
AND belnr = bkpf_l-belnr
AND blart = bkpf_l-blart
AND gsber = c_zrl1.
This improves the select on BSIS a lot, but the first select on BKPF kills it. Not sure if this would help improve performance.
Also, I was wondering whether it would help to refresh the table statistics through DB20. The last refresh was done 7 months back. How frequently should we do this? Will it help? -
Performance issue with view selection after migration from Oracle to MaxDB
Hello,
After the migration from Oracle to MaxDB, we have serious performance issues with a lot of our table view selections.
Does anybody know about this problem and how to solve it ??
Best regards !!!
Gert-Jan
Hello Gert-Jan,
most probably you need additional indexes to get better performance.
Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
Best regards,
Melanie Handreck -
Performance issue with MSEG table
Hi all,
I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome it?
Regards ,
AmitHi,
There could be various reasons for a performance issue with MSEG:
1) Database statistics of tables and indexes are not up to date; because of this, the wrong index is chosen during execution.
2) Improper indexes, i.e. there is no index with the fields mentioned in the WHERE clause of the statement. Because of this, the CBO would have chosen a wrong index and done a range scan.
3) An optimizer bug in Oracle.
4) The size of the table is very huge; consider archiving.
Better to switch on an ST05 trace before you run the statement; it will give more detailed information about where exactly time is being spent during execution.
Hope this helps
dileep -
Performance issue with a Custom view
Hi ,
I am pretty new to performance tuning and am facing a performance issue with a custom view.
The execution time for the view query is good, but as soon as I append a WHERE clause to the view query, the execution time increases.
Below is the view query:
CREATE OR REPLACE VIEW XXX_INFO_VIEW AS
SELECT csb.system_id license_id,
cst.name license_number ,
csb.system_type_code license_type ,
csb.attribute3 lac , -- license authorization code
csb.attribute6 lat , -- license admin token
csb.attribute12 ols_reg, -- OLS Registration allowed flag
l.attribute4 license_biz_type ,
NVL (( SELECT 'Y' l_supp_flag
FROM csi_item_instances cii,
okc_k_lines_b a,
okc_k_items c
WHERE c.cle_id = a.id
AND a.lse_id = 9
AND c.jtot_object1_code = 'OKX_CUSTPROD'
AND c.object1_id1 = cii.instance_id||''
AND cii.instance_status_id IN (3, 510)
AND cii.system_id = csb.system_id
AND a.sts_code IN ('SIGNED', 'ACTIVE')
AND NVL (a.date_terminated, a.end_date) > SYSDATE
AND ROWNUM < 2), 'N') active_supp_flag,
hp.party_name "Customer_Name" , -- Customer Name
hca.attribute12 FGE_FLAG,
(SELECT /*+INDEX (oklt OKC_K_LINES_TL_U1) */
nvl(max((decode(name, 'eSupport','2','Enterprise','1','Standard','1','TERM RTU','0','TERM RTS','0','Notfound'))),0) covName --TERM RTU and TERM RTS added as per Vijaya's suggestion APR302013
FROM OKC_K_LINES_B oklb1,
OKC_K_LINES_TL oklt,
OKC_K_LINES_B oklb2,
OKC_K_ITEMS oki,
CSI_item_instances cii
WHERE
OKI.JTOT_OBJECT1_CODE = 'OKX_CUSTPROD'
AND oklb1.id=oklt.id
AND OKI.OBJECT1_ID1 =cii.instance_id||''
AND Oklb1.lse_id=2
AND oklb1.dnz_chr_id=oklb2.dnz_chr_id
AND oklb2.lse_id=9
AND oki.CLE_ID=oklb2.id
AND cii.system_id=csb.system_id
AND oklt.LANGUAGE=USERENV ('LANG')) COVERAGE_TYPE
FROM csi_systems_b csb ,
csi_systems_tl cst ,
hz_cust_accounts hca,
hz_parties hp,
fnd_lookup_values l
WHERE csb.system_type_code = l.lookup_code (+)
AND csb.system_id = cst.system_id
AND hca.cust_account_id =csb.customer_id
AND hca.party_id= hp.party_id
AND cst.language = USERENV ('LANG')
AND l.lookup_type (+) = 'CSI_SYSTEM_TYPE'
AND l.language (+) = USERENV ('LANG')
AND NVL (csb.end_date_active, SYSDATE + 1) > SYSDATE
I have forced an index to avoid a full table scan on OKC_K_LINES_TL, and suppressed the index on CSI_ITEM_INSTANCES.INSTANCE_ID, to make the view query fast.
So when I do SELECT * FROM XXX_INFO_VIEW, it executes in a decent time. But when I try
SELECT * FROM XXX_INFO_VIEW WHERE active_supp_flag = 'Y' AND coverage_type = '1'
it takes a lot of time.
The execution plan is the same for both queries in terms of cost, but with the WHERE clause the number of bytes increases.
Below are the execution plans:
View query:
SELECT STATEMENT ALL_ROWS Cost: 7,212 Bytes: 536,237 Cardinality: 3,211
10 COUNT STOPKEY
9 NESTED LOOPS
7 NESTED LOOPS Cost: 1,085 Bytes: 101 Cardinality: 1
5 NESTED LOOPS Cost: 487 Bytes: 17,043 Cardinality: 299
2 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 2,325 Cardinality: 155
1 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
4 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
3 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
6 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 2 Bytes: 7 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
28 SORT AGGREGATE Bytes: 169 Cardinality: 1
27 NESTED LOOPS
25 NESTED LOOPS Cost: 16,549 Bytes: 974,792 Cardinality: 5,768
23 NESTED LOOPS Cost: 5,070 Bytes: 811,737 Cardinality: 5,757
20 NESTED LOOPS Cost: 2,180 Bytes: 56,066 Cardinality: 578
17 NESTED LOOPS Cost: 967 Bytes: 32,118 Cardinality: 606
14 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 3,465 Cardinality: 315
13 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
16 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
15 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
19 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
18 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
22 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 5 Bytes: 440 Cardinality: 10
21 INDEX RANGE SCAN INDEX OKC.OKC_K_LINES_B_N2 Cost: 2 Cardinality: 9
24 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_TL_U1 Cost: 1 Cardinality: 1
26 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_TL Cost: 2 Bytes: 28 Cardinality: 1
43 HASH JOIN Cost: 7,212 Bytes: 536,237 Cardinality: 3,211
41 NESTED LOOPS
39 NESTED LOOPS Cost: 7,070 Bytes: 485,792 Cardinality: 3,196
37 HASH JOIN Cost: 676 Bytes: 341,972 Cardinality: 3,196
32 HASH JOIN RIGHT OUTER Cost: 488 Bytes: 310,012 Cardinality: 3,196
30 TABLE ACCESS BY INDEX ROWID TABLE APPLSYS.FND_LOOKUP_VALUES Cost: 7 Bytes: 544 Cardinality: 17
29 INDEX RANGE SCAN INDEX (UNIQUE) APPLSYS.FND_LOOKUP_VALUES_U1 Cost: 3 Cardinality: 17
31 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_B Cost: 481 Bytes: 207,740 Cardinality: 3,196
36 VIEW VIEW AR.index$_join$_013 Cost: 187 Bytes: 408,870 Cardinality: 40,887
35 HASH JOIN
33 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 112 Bytes: 408,870 Cardinality: 40,887
34 INDEX FAST FULL SCAN INDEX AR.HZ_CUST_ACCOUNTS_N2 Cost: 122 Bytes: 408,870 Cardinality: 40,887
38 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
40 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 2 Bytes: 45 Cardinality: 1
42 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_TL Cost: 142 Bytes: 958,770 Cardinality: 63,918
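As a side note, plans like the one above can be regenerated at any time with EXPLAIN PLAN and DBMS_XPLAN. This is only a sketch; the statement below is a placeholder, not the actual view query:

```sql
-- Hypothetical: substitute the real query against the licensing view here.
EXPLAIN PLAN FOR
SELECT * FROM apps.wrs_license_info_v WHERE ROWNUM <= 100;

-- Display the plan just written to PLAN_TABLE.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```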
Execution plan for view query with WHERE clause:
SELECT STATEMENT ALL_ROWS Cost: 7,212 Bytes: 2,462,837 Cardinality: 3,211
10 COUNT STOPKEY
9 NESTED LOOPS
7 NESTED LOOPS Cost: 1,085 Bytes: 101 Cardinality: 1
5 NESTED LOOPS Cost: 487 Bytes: 17,043 Cardinality: 299
2 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 2,325 Cardinality: 155
1 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
4 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
3 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
6 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 2 Bytes: 7 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
28 SORT AGGREGATE Bytes: 169 Cardinality: 1
27 NESTED LOOPS
25 NESTED LOOPS Cost: 16,549 Bytes: 974,792 Cardinality: 5,768
23 NESTED LOOPS Cost: 5,070 Bytes: 811,737 Cardinality: 5,757
20 NESTED LOOPS Cost: 2,180 Bytes: 56,066 Cardinality: 578
17 NESTED LOOPS Cost: 967 Bytes: 32,118 Cardinality: 606
14 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 3,465 Cardinality: 315
13 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
16 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
15 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
19 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
18 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
22 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 5 Bytes: 440 Cardinality: 10
21 INDEX RANGE SCAN INDEX OKC.OKC_K_LINES_B_N2 Cost: 2 Cardinality: 9
24 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_TL_U1 Cost: 1 Cardinality: 1
26 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_TL Cost: 2 Bytes: 28 Cardinality: 1
44 VIEW VIEW APPS.WRS_LICENSE_INFO_V Cost: 7,212 Bytes: 2,462,837 Cardinality: 3,211
43 HASH JOIN Cost: 7,212 Bytes: 536,237 Cardinality: 3,211
41 NESTED LOOPS
39 NESTED LOOPS Cost: 7,070 Bytes: 485,792 Cardinality: 3,196
37 HASH JOIN Cost: 676 Bytes: 341,972 Cardinality: 3,196
32 HASH JOIN RIGHT OUTER Cost: 488 Bytes: 310,012 Cardinality: 3,196
30 TABLE ACCESS BY INDEX ROWID TABLE APPLSYS.FND_LOOKUP_VALUES Cost: 7 Bytes: 544 Cardinality: 17
29 INDEX RANGE SCAN INDEX (UNIQUE) APPLSYS.FND_LOOKUP_VALUES_U1 Cost: 3 Cardinality: 17
31 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_B Cost: 481 Bytes: 207,740 Cardinality: 3,196
36 VIEW VIEW AR.index$_join$_013 Cost: 187 Bytes: 408,870 Cardinality: 40,887
35 HASH JOIN
33 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 112 Bytes: 408,870 Cardinality: 40,887
34 INDEX FAST FULL SCAN INDEX AR.HZ_CUST_ACCOUNTS_N2 Cost: 122 Bytes: 408,870 Cardinality: 40,887
38 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
40 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 2 Bytes: 45 Cardinality: 1
42 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_TL Cost: 142 Bytes: 958,770 Cardinality: 63,918
Hi,
You should always try to use primary index fields first; if that is not possible, use secondary index fields.
If you cannot do either of the two, then try this:
Put the less distinct fields at the top.
In your case, you can put bukrs, gjahr, and werks at the top of the WHERE condition, followed by the more selective values.
Also note that even when you use a secondary index, if the index has four fields and your WHERE clause covers only two of them, the index is useful only up to those two fields, and only if they are the leading columns in sequence.
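To illustrate the leading-column point as a plain SQL sketch (the table and index names here are hypothetical, not from the original post): given a composite index on (bukrs, gjahr, werks, matnr), predicates on the leading columns can use the index, but a predicate on a trailing column alone cannot.

```sql
-- Hypothetical composite secondary index on a bookkeeping table.
CREATE INDEX ztab_sec1 ON ztab (bukrs, gjahr, werks, matnr);

-- Can use the index: predicates cover the leading two columns in sequence.
SELECT * FROM ztab WHERE bukrs = '1000' AND gjahr = '2011';

-- Cannot use the index efficiently: the leading columns are absent,
-- so matnr alone gives the optimizer no index prefix to range-scan.
SELECT * FROM ztab WHERE matnr = 'MAT-42';
```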
Performance Issue with RF Scanners after SAP Enhancement Pack 5 Upgrade
We are on component version SAP ECC 6.0 and recently upgraded to Enhancement Pack 5. I believe we are on NetWeaver 7.10, and we use RF scanners in one plant that is Warehouse Managed. Evidently, when we moved to EHP5 the Web SAP Console went away and we were left with ITS Mobile. This has created several issues and continues to be a performance barrier for the forklift drivers in the warehouse. We see there is heavy use of JavaScript, and the scanners' processors can't handle it. When we log in to tcode LM00 on a laptop or desktop computer, there are no performance issues. When we log in to tcode LM00 with the RF scanners, the system is very slow; it might take 30 seconds to confirm one item on a WM Transfer Order.
1.) Can we revert to the Web SAP Console now that we have upgraded to EHP5?
2.) What is creating the performance issues with the RF scanners now that we have switched over to SAP ITS Mobile?
Our RF scanners are made by Intermec, but I don't think that is where the solution lies. One person in our IT Operations team has spent a good deal of time configuring SAP ITS to get it to work, but it still isn't performing.
Tom,
I am sorry I did not see this earlier.
I'm currently working on a very similar project with ITS mobile and the problem is to accurately determine the root cause of the problem in the least amount of time. The tool that works is found here: http://www.connectrf.com/index.php/mcm/managed-diagnostics/
Isolating the network from the application and the device is a time-consuming process unless you have a piece of software that can trace the HTTP transactions between host and device on both the wired and wireless sides of the network. Once that is achieved (as with Connect's tool), you can begin to solve the problem.
What I found in my project is that the amount of data traffic generated by ITS Mobile can be reduced drastically, which speeds up the response time of the mobile devices, especially with a large number of devices in distribution centers.
Let me know if I can answer more questions related to this topic.
Cheers,
Shari