Performance Issue with PAY_BALANCE_VALUES_V View in Oracle R12
Dear all,
We recently upgraded from 11i (11.5.10.2) to R12 (12.1.3). We are facing an issue with slow performance of queries that use PAY_BALANCE_VALUES_V. Many of our Payroll reports and much of our logic use this view.
In 11i this works fine; in R12, however, it takes a very long time. We made no configuration changes between 11i and R12.
Is there any way to optimize the performance, or an alternate way to retrieve balance data in Payroll?
Any heads-up would be highly appreciated.
Thanks,
Razi
Hi Razi,
The balance-related performance issue is documented in the following note:
Note 1494344.1 - UK Payslip Generation - Self Service Program Takes Much Time To Complete (Performance Issue)
This issue was fixed in HR_PF.B RUP6, or by patch 14376786. Did you apply this patch? If not, I suggest you apply it.
Note, however, that HR_PF.B RUP6 itself still has some known balance-related performance issues.
If you have already applied HR_PF.B RUP6, I suggest you log an SR with a SQL trace.
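For the SR, a raw extended SQL trace of the slow session is usually what support asks for. A minimal sketch (the tracefile identifier value is just an example; run this from the session that executes the slow query):

```sql
-- Tag the trace file so it is easy to find (identifier is an example)
ALTER SESSION SET tracefile_identifier = 'bal_perf';
-- Enable extended SQL trace (waits and binds) for this session
EXEC DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => TRUE);
-- ... run the slow PAY_BALANCE_VALUES_V query here ...
EXEC DBMS_MONITOR.session_trace_disable;
```

The resulting trace file lands in the database's trace directory and can be attached to the SR as-is or after formatting with tkprof.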
Thanks,
Hideki
Similar Messages
-
Performance issue with Crystal when upgrading Oracle to 11g
Dear,
I am facing a performance issue with Crystal Reports and Oracle 11g, as follows:
On the report server, I created an ODBC DSN to connect to another Oracle 11g server. Also on the report server, I created and published a folder containing all of my Crystal reports. These reports connect to the Oracle 11g server via ODBC.
I also have a Tomcat server running my application, and in my application I refer to the report folder on the report server.
This setup works with SQL Server and Oracle 9i or 10g, but we face a performance issue with Oracle 11g.
Please let me know the root cause.
Notes: the report server and Tomcat server are 32-bit Windows, but Oracle is on 64-bit Windows. I have also upgraded DataDirect Connect ODBC to version 6.1, but the issue is not resolved.
Please help me solve it.
Thanks so much,
Anh
Hi Anh,
Use a third-party ODBC test tool. SQL*Plus uses the native Oracle client, so you can't compare performance with it.
Download our old tool called SQLCON: https://smpdl.sap-ag.de/~sapidp/012002523100006252882008E/sqlcon32.zip
Connect and then click on the SQL tab and paste in the SQL from the report and time that test.
I believe the issue is that the Oracle client is 64-bit; you should install the 32-bit Oracle client. When using the 64-bit client, the driver must thunk (convert 64-bit data to the 32-bit format), which takes more time.
Using OLE DB or the native Oracle Server driver should be faster. ODBC puts another layer on top of the Oracle client, so communicating between the layers also takes time.
Thank you
Don -
Help with Performance issue with a view
Hi
We developed a custom view to get the data from the gl_je_lines table with source 'Payables'. We are bringing in the data for last year and the current year to date, i.e., from 01-JAN-2012 to SYSDATE. This view is used in a package body, which is called from a concurrent program that writes the data to an outbound file.
The problem I am facing is that this view fetches around 72 lakh (7.2 million) records for the above date range, and the program runs for a long time and then terminates abruptly without any result. Can anyone please let me know if there is an alternative to this? I checked the view query, and there seems to be little scope to improve its performance.
Would inserting all this data into a Global Temporary Table help? Please reply at the earliest, as this solution is very urgent for our client.
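Before reaching for a GTT, a batched fetch that streams the view in chunks is often enough to keep memory stable for volumes like this. A sketch only — the view name and batch size are placeholders, and the file-write step is left as a comment:

```sql
-- Sketch: stream the custom view in 10,000-row batches instead of
-- row-by-row. xxgl_payables_extract_v is a placeholder name.
DECLARE
  CURSOR c_gl IS
    SELECT * FROM xxgl_payables_extract_v;
  TYPE t_rows IS TABLE OF c_gl%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_gl;
  LOOP
    FETCH c_gl BULK COLLECT INTO l_rows LIMIT 10000;
    EXIT WHEN l_rows.COUNT = 0;
    FOR i IN 1 .. l_rows.COUNT LOOP
      NULL; -- format l_rows(i) and write it to the outbound file here
    END LOOP;
  END LOOP;
  CLOSE c_gl;
END;
/
```

Each batch is released before the next fetch, so the session never holds all 7.2 million rows at once.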
This is the view query:
select GCC.SEGMENT1 "EMPRESA",
GCC.SEGMENT2 "CCUSTO",
GCC.SEGMENT3 "CONTA",
GCC.SEGMENT4 "PRODUTO",
GCC.SEGMENT5 "SERVICO",
GCC.SEGMENT6 "CANAL",
GCC.SEGMENT7 "PROJECT",
GCC.SEGMENT8 "FORWARD1",
GCC.SEGMENT9 "FORWARD2",
FFVT.DESCRIPTION "CONTA_DESCR",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_LEGAL_ENTITY',
'XDMO_REPORT_USGAAP_LEGAL_COMPANY',
GCC.SEGMENT1,
GCC.SEGMENT3),
1,
80)) "LEGAL_COMPANY",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_ACCOUNT',
'XDMO_REPORT_USGAAP_FIN_ACCOUNT',
GCC.SEGMENT3,
GCC.SEGMENT3),
1,
80)) "GRA",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_BUDGET_CENTER',
'XDMO_REPORT_USGAAP_RESPONSIBILITY',
GCC.SEGMENT2,
GCC.SEGMENT3),
1,
80)) "RESP",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_PRODUCT',
'XDMO_REPORT_USGAAP_TEAM',
GCC.SEGMENT4,
GCC.SEGMENT3),
1,
80)) "TEAM",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_ACCOUNT',
'XDMO_REPORT_USGAAP_FIN_ACCOUNT',
GCC.SEGMENT3,
GCC.SEGMENT3),
164,
80)) "GRA_DESCR",
GJH.NAME "IDLANC",
GJS.USER_JE_SOURCE_NAME "ORIGEM",
GJC.USER_JE_CATEGORY_NAME "CATEGORIA",
GJL.DESCRIPTION "DESCRICAO",
decode(GJH.JE_SOURCE, 'Payables', GJL.REFERENCE_2, '') "INVOICE_ID",
decode(GJH.JE_SOURCE, 'Payables', GJL.REFERENCE_5, '') "NOTA",
decode(GJH.JE_SOURCE, 'Payables', GJL.REFERENCE_1, '') "FORNECEDOR",
GJH.DEFAULT_EFFECTIVE_DATE "DTEFET",
to_char(GJB.POSTED_DATE, 'DD-MON-YYYY HH24:MI:SS') "DTPOSTED",
GJH.CURRENCY_CONVERSION_TYPE "TPTAX",
substr(GCC.SEGMENT9, 8, 1) "TAXA",
GJH.CURRENCY_CONVERSION_DATE "DTCONV",
-- nvl(GJL.ACCOUNTED_DR,0)-nvl(GJL.ACCOUNTED_CR,0) "VALOR",
-- added as per ITT #517830
nvl(GJL.ENTERED_DR, 0) - nvl(GJL.ENTERED_CR, 0) "VALOR",
GJH.CURRENCY_CODE "MOEDA",
-- decode(gcc.segment9, '00000000', 0, '00000001', nvl(GJL.ACCOUNTED_DR,0)-nvl(GJL.ACCOUNTED_CR,0)) "VALOR_FUNCIONAL",
-- added as per ITT #517830
(nvl(GJL.ACCOUNTED_DR, 0) - nvl(GJL.ACCOUNTED_CR, 0)) "VALOR_FUNCIONAL",
GSOB.CURRENCY_CODE "FUNCIONAL",
GJH.PERIOD_NAME "PERIODO",
GJB.STATUS "STATUS",
GSOB.SHORT_NAME "LIVRO",
GJL.LAST_UPDATE_DATE "JL_LAST_UPDATE_DATE",
GJH.LAST_UPDATE_DATE "JH_LAST_UPDATE_DATE",
GJB.LAST_UPDATE_DATE "JB_LAST_UPDATE_DATE",
GJL.JE_HEADER_ID "JE_HEADER_ID",
GJL.JE_LINE_NUM "JE_LINE_NUM"
from GL.GL_JE_LINES GJL,
GL.GL_JE_HEADERS GJH,
GL.GL_JE_BATCHES GJB,
--GL.GL_SETS_OF_BOOKS GSOB, ---As GL_SETS_OF_BOOKS table dropped in R12 so replaced with GL_LEDGERS table,Commented as part of DMO R12 Upgrade-RFC#411290.
GL.GL_LEDGERS GSOB, ---Added as part of DMO R12 Upgrade-RFC#411290.
GL.GL_JE_SOURCES_TL GJS,
GL.GL_JE_CATEGORIES_TL GJC,
GL.GL_CODE_COMBINATIONS GCC,
APPLSYS.FND_FLEX_VALUES_TL FFVT,
APPLSYS.FND_FLEX_VALUES FFV,
APPLSYS.FND_FLEX_VALUE_SETS FFVS
where GJL.CODE_COMBINATION_ID = GCC.CODE_COMBINATION_ID
and GJL.JE_HEADER_ID = GJH.JE_HEADER_ID
and GJH.JE_BATCH_ID = GJB.JE_BATCH_ID
--and GJB.SET_OF_BOOKS_ID = GSOB.SET_OF_BOOKS_ID ---Changing the mappings between the tables GL_JE_HEADERS and GL_JE_BATCHES As column SET_OF_BOOKS_ID of table GL_JE_BATCHES dropped in R12,Commented as part of DMO R12 Upgrade-RFC#411290.
and GJH.LEDGER_ID = GSOB.LEDGER_ID ---Added as part of DMO R12 Upgrade-RFC#411290.
and GJH.JE_SOURCE = GJS.JE_SOURCE_NAME
and GJH.JE_CATEGORY = GJC.JE_CATEGORY_NAME
and GCC.SEGMENT3 = FFV.FLEX_VALUE
and FFV.FLEX_VALUE_ID = FFVT.FLEX_VALUE_ID
and FFV.FLEX_VALUE_SET_ID = FFVS.FLEX_VALUE_SET_ID
and FFVS.FLEX_VALUE_SET_NAME = 'XDMO_LOCAL_USGAAP_ACCOUNT'
and GSOB.SHORT_NAME in ('XBRA BRL LOCAL GAAP', 'XBRA BRL USGAAP')
and gcc.chart_of_accounts_id = gsob.chart_of_accounts_id
and gjh.actual_flag = 'A'
DB version: 11.2.0.3.0
The problem I am facing is that the above query fetches a huge amount of data, and I want to know if there is any way to improve its performance. You are right that the view is stored in the DB; I am using the view query in a cursor to fetch the records. -
Performance issue with view selection after migration from oracle to MaxDb
Hello,
After the migration from Oracle to MaxDB, we have serious performance issues with a lot of our table/view selections.
Does anybody know about this problem and how to solve it?
Best regards !!!
Gert-Jan
Hello Gert-Jan,
most probably you need additional indexes to get better performance.
Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
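As a generic sketch of the usual fix (table and column names below are placeholders — the real candidates come out of the command monitor analysis):

```sql
-- Placeholder names: index the columns the slow selection filters on,
-- most selective column first
CREATE INDEX z_mytab_sel ON mytab (field1, field2)
```

After creating the index, re-check the optimizer access strategy for the statement to confirm it is actually used.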
Best regards,
Melanie Handreck -
Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1
Hello,
I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 on Red Hat Linux 2.1. The processor spikes to 100% for long periods (around 5 minutes), and all the RAM gets used.
Some environment characteristics:
Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
OS: RedHat Linux 2.1 Enterprise.
Oracle: Oracle Enterprise Edition 9.2.0.4
Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
Does anyone know what could be going on?
Is anybody else having this similar behavior?
We switched from SQL Server, so we are not world experts on the matter, but we want a reliable system nonetheless.
Please help us out; give us some tips, tricks, or guides.
Thanks to all,
Frank
Thank you very much, and sorry I couldn't write sooner. It seems the administrator doesn't see much kswapd activity, so I don't really know what is going on.
We are looking at some queries and some indexing, but this is nuts; if I had some poor queries, which we really don't, the server would show a spike, right?
But it goes crazy, with two Oracle processes taking all the resources. There seems to be little swapping going on.
So now what? They are already talking about MS SQL. Please help me out here, this is crazy!!!
We have maybe one of the most powerful combinations here. What is Oracle doing?
We even killed the IIS worker process so that nothing touches the database, and still those two processes keep going.
Can someone help me?
Thanks,
Frank -
Performance issue with a Custom view
Hi,
I am pretty new to performance tuning and am facing a performance issue with a custom view.
The execution time for the view query is good, but as soon as I append a WHERE clause to the view query, the execution time increases.
Below is the view query:
CREATE OR REPLACE VIEW XXX_INFO_VIEW AS
SELECT csb.system_id license_id,
cst.name license_number ,
csb.system_type_code license_type ,
csb.attribute3 lac , -- license authorization code
csb.attribute6 lat , -- license admin token
csb.attribute12 ols_reg, -- OLS Registration allowed flag
l.attribute4 license_biz_type ,
NVL (( SELECT 'Y' l_supp_flag
FROM csi_item_instances cii,
okc_k_lines_b a,
okc_k_items c
WHERE c.cle_id = a.id
AND a.lse_id = 9
AND c.jtot_object1_code = 'OKX_CUSTPROD'
AND c.object1_id1 = cii.instance_id||''
AND cii.instance_status_id IN (3, 510)
AND cii.system_id = csb.system_id
AND a.sts_code IN ('SIGNED', 'ACTIVE')
AND NVL (a.date_terminated, a.end_date) > SYSDATE
AND ROWNUM < 2), 'N') active_supp_flag,
hp.party_name "Customer_Name" , -- Customer Name
hca.attribute12 FGE_FLAG,
(SELECT /*+INDEX (oklt OKC_K_LINES_TL_U1) */
nvl(max((decode(name, 'eSupport','2','Enterprise','1','Standard','1','TERM RTU','0','TERM RTS','0','Notfound'))),0) covName --TERM RTU and TERM RTS added as per Vijaya's suggestion APR302013
FROM OKC_K_LINES_B oklb1,
OKC_K_LINES_TL oklt,
OKC_K_LINES_B oklb2,
OKC_K_ITEMS oki,
CSI_item_instances cii
WHERE
OKI.JTOT_OBJECT1_CODE = 'OKX_CUSTPROD'
AND oklb1.id=oklt.id
AND OKI.OBJECT1_ID1 =cii.instance_id||''
AND Oklb1.lse_id=2
AND oklb1.dnz_chr_id=oklb2.dnz_chr_id
AND oklb2.lse_id=9
AND oki.CLE_ID=oklb2.id
AND cii.system_id=csb.system_id
AND oklt.LANGUAGE=USERENV ('LANG')) COVERAGE_TYPE
FROM csi_systems_b csb ,
csi_systems_tl cst ,
hz_cust_accounts hca,
hz_parties hp,
fnd_lookup_values l
WHERE csb.system_type_code = l.lookup_code (+)
AND csb.system_id = cst.system_id
AND hca.cust_account_id =csb.customer_id
AND hca.party_id= hp.party_id
AND cst.language = USERENV ('LANG')
AND l.lookup_type (+) = 'CSI_SYSTEM_TYPE'
AND l.language (+) = USERENV ('LANG')
AND NVL (csb.end_date_active, SYSDATE + 1) > SYSDATE
I have forced an index to avoid a full table scan on OKC_K_LINES_TL and suppressed the index on CSI_ITEM_INSTANCES.INSTANCE_ID to make the view query fast.
So when I do select * from XXX_INFO_VIEW, it executes in a decent time. But when I try to do
select * from XXX_INFO_VIEW where active_supp_flag='Y' and coverage_type='1'
it takes a lot of time.
The execution plan is the same for both queries in terms of cost, but with the WHERE clause the number of bytes increases.
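Since the filter columns (active_supp_flag, coverage_type) are computed by the correlated scalar subqueries, one workaround sometimes tried is to evaluate the view once into a temporary result and filter afterwards. A sketch only — MATERIALIZE is an undocumented hint, so the effect should be verified with a trace:

```sql
-- Sketch: force the view to be evaluated once, then filter the result,
-- so the scalar subqueries are not re-driven by the outer predicate
WITH v AS
  (SELECT /*+ MATERIALIZE */ * FROM XXX_INFO_VIEW)
SELECT *
  FROM v
 WHERE active_supp_flag = 'Y'
   AND coverage_type = '1';
```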
Below are the execution plans:
View query:
SELECT STATEMENT ALL_ROWS Cost: 7,212 Bytes: 536,237 Cardinality: 3,211
10 COUNT STOPKEY
9 NESTED LOOPS
7 NESTED LOOPS Cost: 1,085 Bytes: 101 Cardinality: 1
5 NESTED LOOPS Cost: 487 Bytes: 17,043 Cardinality: 299
2 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 2,325 Cardinality: 155
1 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
4 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
3 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
6 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 2 Bytes: 7 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
28 SORT AGGREGATE Bytes: 169 Cardinality: 1
27 NESTED LOOPS
25 NESTED LOOPS Cost: 16,549 Bytes: 974,792 Cardinality: 5,768
23 NESTED LOOPS Cost: 5,070 Bytes: 811,737 Cardinality: 5,757
20 NESTED LOOPS Cost: 2,180 Bytes: 56,066 Cardinality: 578
17 NESTED LOOPS Cost: 967 Bytes: 32,118 Cardinality: 606
14 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 3,465 Cardinality: 315
13 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
16 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
15 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
19 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
18 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
22 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 5 Bytes: 440 Cardinality: 10
21 INDEX RANGE SCAN INDEX OKC.OKC_K_LINES_B_N2 Cost: 2 Cardinality: 9
24 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_TL_U1 Cost: 1 Cardinality: 1
26 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_TL Cost: 2 Bytes: 28 Cardinality: 1
43 HASH JOIN Cost: 7,212 Bytes: 536,237 Cardinality: 3,211
41 NESTED LOOPS
39 NESTED LOOPS Cost: 7,070 Bytes: 485,792 Cardinality: 3,196
37 HASH JOIN Cost: 676 Bytes: 341,972 Cardinality: 3,196
32 HASH JOIN RIGHT OUTER Cost: 488 Bytes: 310,012 Cardinality: 3,196
30 TABLE ACCESS BY INDEX ROWID TABLE APPLSYS.FND_LOOKUP_VALUES Cost: 7 Bytes: 544 Cardinality: 17
29 INDEX RANGE SCAN INDEX (UNIQUE) APPLSYS.FND_LOOKUP_VALUES_U1 Cost: 3 Cardinality: 17
31 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_B Cost: 481 Bytes: 207,740 Cardinality: 3,196
36 VIEW VIEW AR.index$_join$_013 Cost: 187 Bytes: 408,870 Cardinality: 40,887
35 HASH JOIN
33 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 112 Bytes: 408,870 Cardinality: 40,887
34 INDEX FAST FULL SCAN INDEX AR.HZ_CUST_ACCOUNTS_N2 Cost: 122 Bytes: 408,870 Cardinality: 40,887
38 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
40 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 2 Bytes: 45 Cardinality: 1
42 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_TL Cost: 142 Bytes: 958,770 Cardinality: 63,918
Execution plan for view query with WHERE clause:
SELECT STATEMENT ALL_ROWS Cost: 7,212 Bytes: 2,462,837 Cardinality: 3,211
10 COUNT STOPKEY
9 NESTED LOOPS
7 NESTED LOOPS Cost: 1,085 Bytes: 101 Cardinality: 1
5 NESTED LOOPS Cost: 487 Bytes: 17,043 Cardinality: 299
2 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 2,325 Cardinality: 155
1 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
4 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
3 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
6 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 2 Bytes: 7 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
28 SORT AGGREGATE Bytes: 169 Cardinality: 1
27 NESTED LOOPS
25 NESTED LOOPS Cost: 16,549 Bytes: 974,792 Cardinality: 5,768
23 NESTED LOOPS Cost: 5,070 Bytes: 811,737 Cardinality: 5,757
20 NESTED LOOPS Cost: 2,180 Bytes: 56,066 Cardinality: 578
17 NESTED LOOPS Cost: 967 Bytes: 32,118 Cardinality: 606
14 TABLE ACCESS BY INDEX ROWID TABLE CSI.CSI_ITEM_INSTANCES Cost: 22 Bytes: 3,465 Cardinality: 315
13 INDEX RANGE SCAN INDEX CSI.CSI_ITEM_INSTANCES_N07 Cost: 3 Cardinality: 315
16 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_ITEMS Cost: 3 Bytes: 84 Cardinality: 2
15 INDEX RANGE SCAN INDEX OKC.OKC_K_ITEMS_N2 Cost: 2 Cardinality: 2
19 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 2 Bytes: 44 Cardinality: 1
18 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_B_U1 Cost: 1 Cardinality: 1
22 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_B Cost: 5 Bytes: 440 Cardinality: 10
21 INDEX RANGE SCAN INDEX OKC.OKC_K_LINES_B_N2 Cost: 2 Cardinality: 9
24 INDEX UNIQUE SCAN INDEX (UNIQUE) OKC.OKC_K_LINES_TL_U1 Cost: 1 Cardinality: 1
26 TABLE ACCESS BY INDEX ROWID TABLE OKC.OKC_K_LINES_TL Cost: 2 Bytes: 28 Cardinality: 1
44 VIEW VIEW APPS.WRS_LICENSE_INFO_V Cost: 7,212 Bytes: 2,462,837 Cardinality: 3,211
43 HASH JOIN Cost: 7,212 Bytes: 536,237 Cardinality: 3,211
41 NESTED LOOPS
39 NESTED LOOPS Cost: 7,070 Bytes: 485,792 Cardinality: 3,196
37 HASH JOIN Cost: 676 Bytes: 341,972 Cardinality: 3,196
32 HASH JOIN RIGHT OUTER Cost: 488 Bytes: 310,012 Cardinality: 3,196
30 TABLE ACCESS BY INDEX ROWID TABLE APPLSYS.FND_LOOKUP_VALUES Cost: 7 Bytes: 544 Cardinality: 17
29 INDEX RANGE SCAN INDEX (UNIQUE) APPLSYS.FND_LOOKUP_VALUES_U1 Cost: 3 Cardinality: 17
31 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_B Cost: 481 Bytes: 207,740 Cardinality: 3,196
36 VIEW VIEW AR.index$_join$_013 Cost: 187 Bytes: 408,870 Cardinality: 40,887
35 HASH JOIN
33 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 112 Bytes: 408,870 Cardinality: 40,887
34 INDEX FAST FULL SCAN INDEX AR.HZ_CUST_ACCOUNTS_N2 Cost: 122 Bytes: 408,870 Cardinality: 40,887
38 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
40 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 2 Bytes: 45 Cardinality: 1
42 TABLE ACCESS FULL TABLE CSI.CSI_SYSTEMS_TL Cost: 142 Bytes: 958,770 Cardinality: 63,918
Hi,
You should always try to use primary index fields; if that is not possible, then secondary index fields.
If you cannot do either of the two, then try this:
use the less distinct fields at the top.
In your case, you can use bukrs, gjahr, and werks at the top of the WHERE condition, followed by less distinct values.
Even when you use a secondary index: if the index has 4 fields and you are using only the first two, the index is useful only up to those two fields, provided they are in sequence. -
Performance issue with Jdeveloper
Hi Guys,
I am experiencing a strange performance issue with JDeveloper 10.1.3.3.0.4157. There are many other threads regarding performance issues in this forum, but the problem I have is a little bit different.
I have two computers: one is Athlon 3200+ with Vista and another one is P4 dual core 6400 with XP (service pack 2). Both of them have 2GB memory.
I am running the same simple project on both computer, but only the one with Vista has the problem. The problem is very similar to the problem mentioned in the thread:
Re: IDE has become extremely slow?
But it's much worse, and it happens only on JSF pages. Basically, any operation on a JSF page is very slow. Loading the page, changing the attributes of a button in the source editor, or even clicking items in the design view takes forever.
The first weird thing is that it may use 100% CPU but never recovers: either the 100% CPU usage never stops, or when it does stop, JDeveloper stops responding.
The second weird thing is that the project is not big. Actually, it's very small. The problem started last week, and there were no big changes during that period. The only thing I can say is that we created two more JSF pages.
The third weird thing is that the problem has never happened on the P4+XP box. When I open the project on the P4+XP box, it's always fast, with no CPU spikes.
Any advice is welcome!
Thanks,
Steven
Hi Guys,
I re-made a simple test project for this problem, and now I can always reproduce the problem in JDeveloper on both systems (XP & Vista). Every time I open this .jspx file in the source editor and try to scroll up/down the source file, or manually delete an attribute, JDeveloper hangs and the CPU usage is 0%.
Here is the content of the test file:
<?xml version='1.0' encoding='windows-1252'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
xmlns:h="http://java.sun.com/jsf/html"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:af="http://xmlns.oracle.com/adf/faces"
xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
<jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
doctype-system="http://www.w3.org/TR/html4/loose.dtd"
doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
<jsp:directive.page contentType="text/html;charset=windows-1252"/>
<f:view>
<afh:html binding="#{backing_streettypedetail.html1}" id="html1">
<afh:head title="streettypedetail"
binding="#{backing_streettypedetail.head1}" id="head1">
<meta http-equiv="Content-Type"
content="text/html; charset=windows-1252"/>
</afh:head>
<afh:body binding="#{backing_streettypedetail.body1}" id="body1">
<af:messages binding="#{backing_streettypedetail.messages1}"
id="messages1"/>
<h:form binding="#{backing_streettypedetail.form1}" id="form1">
<af:panelForm binding="#{backing_streettypedetail.panelForm1}"
id="panelForm1">
<af:inputText value="#{bindings.streetTypeID.inputValue}"
label="#{bindings.streetTypeID.label}"
required="#{bindings.streetTypeID.mandatory}"
columns="#{bindings.streetTypeID.displayWidth}"
binding="#{backing_streettypedetail.inputText1}"
id="inputText1">
<af:validator binding="#{bindings.streetTypeID.validator}"/>
</af:inputText>
<af:inputText value="#{bindings.description.inputValue}"
label="#{bindings.description.label}"
required="#{bindings.description.mandatory}"
columns="#{bindings.description.displayWidth}"
binding="#{backing_streettypedetail.inputText2}"
id="inputText2">
<af:validator binding="#{bindings.description.validator}"/>
</af:inputText>
<af:inputText value="#{bindings.abbr.inputValue}"
label="#{bindings.abbr.label}"
required="#{bindings.abbr.mandatory}"
columns="#{bindings.abbr.displayWidth}"
binding="#{backing_streettypedetail.inputText3}"
id="inputText3">
<af:validator binding="#{bindings.abbr.validator}"/>
</af:inputText>
<f:facet name="footer">
<h:panelGroup binding="#{backing_streettypedetail.panelGroup1}"
id="panelGroup1">
<af:commandButton text="Save"
binding="#{backing_streettypedetail.saveButton}"
id="saveButton"
actionListener="#{bindings.mergeEntity.execute}"
action="#{userState.retrieveReturnNavigationRule}"
disabled="#{!bindings.mergeEntity.enabled}"
partialSubmit="false">
<af:setActionListener from="#{true}"
to="#{userState.refresh}"/>
</af:commandButton>
<af:commandButton text="Cancel"
binding="#{backing_streettypedetail.cancelButton}"
action="#{userState.retrieveReturnNavigationRule}"
id="cancelButton">
<af:setActionListener from="#{false}"
to="#{userState.refresh}"/>
</af:commandButton>
</h:panelGroup>
</f:facet>
</af:panelForm>
</h:form>
</afh:body>
</afh:html>
</f:view>
<!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_streettypedetail-->
</jsp:root>
Can anybody take a look at the file and let me know what's wrong with it?
Thanks in advance.
Steven -
Performance Issue With Displaying Candidate Details for iRecruitment
We are on EBS R12.1.3. When we try to display the Candidate Details page in iRecruitment (iRecruitment > Vacancies > Applicants > click on an applicant), the page spins for quite a while before showing results, and sometimes we see 500 errors.
We have also applied patch 10427777:R12.IRC.B. Are there any tuning steps for the iRecruitment page /oracle/apps/irc/candidateSearch/webui/CmAplSrchPG?
You have already applied the patch mentioned in note: Performance Issue With Displaying Candidate Details Page in 12.1.3 (Doc ID 1293164.1).
Check this note also: Performance Issue when Clicking on Candidate Name (Doc ID 1575164.1)
thanks -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions.
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB), 100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on, then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
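Once a raw trace file exists, tkprof turns it into a readable report (file names below are examples):

```shell
# Format the raw 10046 trace; skip recursive SYS SQL and sort the
# statements by elapsed execution time
tkprof orcl_ora_12345.trc pipeline_report.txt sys=no sort=exeela
```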
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
As part of the upgrade, the client wants Oracle SES to be deployed for some modules, including Purchasing and Payables (Vouchers).
We are facing severe performance issues with the Vouchers index build (volume of data: approx. 8.5 million rows).
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project, and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES.
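The heap bump is typically applied through the WebLogic memory arguments before starting the search server. Example values only — the variable is read by the WebLogic start scripts, and the right sizes depend on your host:

```shell
# Example only: raise the JVM heap for the search server's WebLogic
# instance; set this in the environment the start script runs under
export USER_MEM_ARGS="-Xms2048m -Xmx4096m"
```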
Performance Issues with large XML (1-1.5MB) files
Hi,
I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
When I do an XPath query against an element of SQL type VARCHAR2, I get good performance. But when I do a similar XPath query against an element of SQL type collection (a VARRAY of VARCHAR2), the performance is very poor.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I see no performance gain. I have also tried all the storage options available for collections, i.e., VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
I even tried creating XMLType views based on XPath queries, but the performance didn't improve much.
I guess I'm running out of options, and patience as well. ;)
I would appreciate any ideas/suggestions, please help.....
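For reference, the extract()-based indexes mentioned above were of the usual function-based form — a sketch with made-up table and XPath names (and note extractValue applies to older XML DB releases):

```sql
-- Made-up names: function-based index on a scalar XPath of an
-- XMLType table, so queries on that path can avoid a full scan
CREATE INDEX doc_ref_ix
  ON xml_docs (extractValue(OBJECT_VALUE, '/Document/Reference'));
```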
Thanks;
Ramakrishna Chinta
Are you having similar symptoms to mine? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
-
Performance Issues with Folio format in Ipad.
Hello everyone! My FIRST post here!!!
I work with educational games, and I'm facing performance issues with games I've made in HTML5 for the iPad. I tried to import them into the Folio format in DPS (Adobe Digital Publishing Suite). However, when I import an HTML game into InDesign and try to preview it in Adobe Content Viewer, the game either doesn't open or doesn't perform properly (there is a lag that keeps the game from being any fun).
The games I've created use at most about 35 MB of memory and weigh 30 MB max.
Does anyone know what's happening and what I can do to fix this performance issue?
Thanks a lot!
Moved to DPS
-
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the OCB optimizer is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))
Hello Vitor,
There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about it.
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
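For the statistics suggestion, a typical call would look like the following (a sketch only; the sampling and cascade settings are defaults to adapt to your environment):

```sql
-- Gather fresh optimizer statistics on the versioned table,
-- including its indexes (cascade => TRUE).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SIG',
    tabname          => 'SIG_QUA_IMG_LT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/
```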
Thank You,
Ben
-
Performance issues with the Tuxedo MQ Adapter
We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms to execute, so there is a considerable amount of time spent in the MQ adapter that we cannot explain.
Also, we have seen a lot of rolled-back transactions on the MQ adapter; for example, we got 980 rollbacks for 15,736 transactions sent, and only the MQ adapter is involved in the rollbacks. However, the operations are executed properly. The error we get is:
135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
I have looked for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?
Hi Todd,
We have 6 MQI adapters reading from 5 different queues, but in this case we are writing to only one queue.
Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues: when one adapter finds a message and tries to get it, another adapter may get it first, in which case the first adapter rolls back its transaction. Even when we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually performs the rollback at that point, so when the MQ adapter later receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
However, I am more worried about the performance. We are putting a request message on an MQ queue and waiting for the reply. In some cases it takes 150 ms, and in other cases much longer (more than 400 ms); the average is 300 ms. The MQ adapter calls a service (txgralms0) which takes 110 ms on average.
This is our configuration:
"MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
SYSTEM_ACCESS=FASTPATH
MAXGEN=1 GRACE=86400 RESTART=N
MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
SICACHEENTRIESMAX="500"
/tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
*SERVER
MINMSGLEVEL=0
MAXMSGLEVEL=0
DEFMAXMSGLEN=4096
TPESVCFAILDATA=Y
*QUEUE_MANAGER
LQMID=QMTESX01
NAME=QMTESX01
*SERVICE
NAME=txgralms0
FORMAT=MQSTR
TRAN=N
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
Thanks in advance,
Marling -
Performance issues with data warehouse loads
We are having performance issues with our data warehouse ETL load process. I have run ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
Scott
Hi,
You should analyze the database after you have loaded the tables.
Do you use sequences to generate primary keys? Do you have a lot of indexes and/or triggers on the tables?
If yes:
Make sure your sequence caches values (ALTER SEQUENCE s CACHE 10000).
Drop all unneeded indexes while loading, and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data, it may be worth enlarging this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct-path load? Or are you already doing a direct load?
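Put together, those suggestions might look like the following sketch. All object names are placeholders, and NOLOGGING plus direct-path inserts affect recoverability, so check against your backup strategy first:

```sql
-- Placeholders: seq_pk, fact_sales, fact_sales_ix1, trg_fact_sales, stg_sales.
ALTER SEQUENCE seq_pk CACHE 10000;        -- reduce sequence-fetch overhead
ALTER TABLE fact_sales NOLOGGING;         -- minimize redo for the direct load
DROP INDEX fact_sales_ix1;                -- drop unneeded indexes before loading
ALTER TRIGGER trg_fact_sales DISABLE;     -- disable triggers during the load

INSERT /*+ APPEND */ INTO fact_sales      -- direct-path insert
SELECT * FROM stg_sales;
COMMIT;

-- Afterwards: recreate the indexes, re-enable the triggers, then analyze.
```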
Dim
Maybe you are looking for
-
How do I create a new user account?
I have a new iPod with a different iTunes account than my other devices. How do I sync the iPod on my iMac with the songs/movies from its own iTunes account? Do I need to create a new user profile on the computer for that?
-
Cannot convert PDF to WORD document
Why doesn't my PDF file convert? The message states that there is an error.
-
My email address has come online as a Skype accoun...
I just declined a couple of contact requests from people I have never heard of, and straight afterwards I received a notification that '(my email address) is online'. My Skype name is NOT the same as my email address and I have not set up a Skype ac
-
Hi all, I am calling an ALV report from my BAPI, but it is not getting called: it does not show the parameter selection screen and directly selects all parameters. The coding is as below; points assured. Data: MTAB_REPORT_HTML type standard table of W3HTML WITH
-
I have an iPhone 5s & Bose Bluetooth headset. Before upgrading to iOS 8 & 8.0.2, I could answer a call on my headset by turning it on and the call would be routed to the headset. Now, in iOS 8.0.2, when I turn my headset on to answer, the call is ans