Cost center query takes a long time when executed with user's ID
Hi Experts,
We have a cost-center query which takes a long time to display its output when run with the user's ID.
I tried running the report with the same selections and was able to get the values within seconds.
We have also maintained aggregates on the cube.
When the user tries it for a single cost center, the performance is OK.
Any help on this will be highly appreciated.
Thanks,
Amit
Hi,
While the user runs the query, capture a trace in ST05: activate the trace for the user's ID before running the query in RSRT, and deactivate it after the report has been displayed.
Go through the trace logs to find which object is taking the longest, then create aggregates on the cube accordingly.
While creating the aggregates, give fixed values.
Please also see the document "How to find the SQL traces in SAP BI".
Thanks,
Phani.
Similar Messages
-
Query is taking long time to execute after migrating to 10g r2
Hi
We recently migrated the database from 9i to 10gR2 (10.2.0.2.0). This query ran in acceptable time before the upgrade on 9i; now it takes a very long time to execute. Can you please let me know what I should do to improve the performance? We are running stats every day.
Thanks for your help,
Shree
======================================================================================
SELECT cr.cash_receipt_id
,cr.pay_from_customer
,cr.receipt_number
,cr.receipt_date
,cr.amount
,cust.account_number
,crh.gl_date
,cr.set_of_books_id
,sum(ra.amount_applied) amount_applied
FROM AR_CASH_RECEIPTS_ALL cr
,AR_RECEIVABLE_APPLICATIONS_ALL ra
,hz_cust_accounts cust
,AR_CASH_RECEIPT_HISTORY_ALL crh
,GL_PERIOD_STATUSES gps
,FND_APPLICATION app
WHERE cr.cash_receipt_id = ra.cash_receipt_id
AND ra.status = 'UNAPP'
AND cr.status <> 'REV'
AND cust.cust_account_id = cr.pay_from_customer
AND substr(cust.account_number,1,2) <> 'SI' -- Don't allocate Unapplied receipts FOR SI customers
AND crh.cash_receipt_id = cr.cash_receipt_id
AND app.application_id = gps.application_id
AND app.application_short_name = 'AR'
AND gps.period_name = 'May-07'
AND crh.gl_date <= gps.end_date
AND cr.receipt_number not like 'WH%'
-- AND cust.customer_number = '0000079260001'
GROUP BY cr.cash_receipt_id
,cr.pay_from_customer
,cr.receipt_number
,cr.receipt_date
,cr.amount
,cust.account_number
,crh.gl_date
,cr.set_of_books_id
HAVING sum(ra.amount_applied) > 0;
=========================================================================================
Here is the explain plan in 10g r2 (10.2.0.2.0)
PLAN_TABLE_OUTPUT
Plan hash value: 2617075047
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 92340 | 10M| | 513K (1)|
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 92340 | 10M| 35M| 513K (1)|
| 3 | TABLE ACCESS BY INDEX ROWID | AR_RECEIVABLE_APPLICATIONS_ALL | 2 | 34 |
| 4 | NESTED LOOPS | | 184K| 21M| | 510K (1)|
|* 5 | HASH JOIN | | 99281 | 9M| 3296K| 176K (1)|
|* 6 | TABLE ACCESS FULL | HZ_CUST_ACCOUNTS | 112K| 1976K| | 22563 (1)|
|* 7 | HASH JOIN | | 412K| 33M| 25M| 151K (1)|
| 8 | TABLE ACCESS BY INDEX ROWID | AR_CASH_RECEIPT_HISTORY_ALL | 332K| 4546K|
| 9 | NESTED LOOPS | | 498K| 19M| | 26891 (1)|
| 10 | NESTED LOOPS | | 2 | 54 | | 4 (0)|
| 11 | TABLE ACCESS BY INDEX ROWID| FND_APPLICATION | 1 | 8 | | 1 (0)|
|* 12 | INDEX UNIQUE SCAN | FND_APPLICATION_U3 | 1 | | | 0 (0)|
| 13 | TABLE ACCESS BY INDEX ROWID| GL_PERIOD_STATUSES | 2 | 38 | | 3 (0)
|* 14 | INDEX RANGE SCAN | GL_PERIOD_STATUSES_U1 | 1 | | | 2 (0)|
|* 15 | INDEX RANGE SCAN | AR_CASH_RECEIPT_HISTORY_N2 | 332K| | | 1011 (1)
PLAN_TABLE_OUTPUT
|* 16 | TABLE ACCESS FULL | AR_CASH_RECEIPTS_ALL | 5492K| 235M| | 108K
|* 17 | INDEX RANGE SCAN | AR_RECEIVABLE_APPLICATIONS_N1 | 4 | | | 2
Predicate Information (identified by operation id):
1 - filter(SUM("RA"."AMOUNT_APPLIED")>0)
5 - access("CUST"."CUST_ACCOUNT_ID"="CR"."PAY_FROM_CUSTOMER")
6 - filter(SUBSTR("CUST"."ACCOUNT_NUMBER",1,2)<>'SI')
7 - access("CRH"."CASH_RECEIPT_ID"="CR"."CASH_RECEIPT_ID")
12 - access("APP"."APPLICATION_SHORT_NAME"='AR')
14 - access("APP"."APPLICATION_ID"="GPS"."APPLICATION_ID" AND "GPS"."PERIOD_NAME"='May-07')
filter("GPS"."PERIOD_NAME"='May-07')
15 - access("CRH"."GL_DATE"<="GPS"."END_DATE")
16 - filter("CR"."STATUS"<>'REV' AND "CR"."RECEIPT_NUMBER" NOT LIKE 'WH%')
17 - access("CR"."CASH_RECEIPT_ID"="RA"."CASH_RECEIPT_ID" AND "RA"."STATUS"='UNAPP')
filter("RA"."CASH_RECEIPT_ID" IS NOT NULL)
Here is the explain plan in 9i
Execution Plan
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=445977 Card=78530 Bytes=9423600)
1    0   FILTER
2    1     SORT (GROUP BY) (Cost=445977 Card=78530 Bytes=9423600)
3    2       HASH JOIN (Cost=443717 Card=157060 Bytes=18847200)
4    3         HASH JOIN (Cost=99563 Card=94747 Bytes=9758941)
5    4           TABLE ACCESS (FULL) OF 'HZ_CUST_ACCOUNTS' (Cost=12286 Card=110061 Bytes=1981098)
6    4           HASH JOIN (Cost=86232 Card=674761 Bytes=57354685)
7    6             TABLE ACCESS (BY INDEX ROWID) OF 'AR_CASH_RECEIPT_HISTORY_ALL' (Cost=17532 Card=542304 Bytes=7592256)
8    7               NESTED LOOPS (Cost=17536 Card=809791 Bytes=33201431)
9    8                 NESTED LOOPS (Cost=4 Card=1 Bytes=27)
10   9                   TABLE ACCESS (BY INDEX ROWID) OF 'FND_APPLICATION' (Cost=1 Card=1 Bytes=8)
11  10                     INDEX (UNIQUE SCAN) OF 'FND_APPLICATION_U3' (UNIQUE)
12   9                   TABLE ACCESS (BY INDEX ROWID) OF 'GL_PERIOD_STATUSES' (Cost=3 Card=1 Bytes=19)
13  12                     INDEX (RANGE SCAN) OF 'GL_PERIOD_STATUSES_U1' (UNIQUE) (Cost=2 Card=1)
14   8                 INDEX (RANGE SCAN) OF 'AR_CASH_RECEIPT_HISTORY_N2' (NON-UNIQUE) (Cost=1740 Card=542304)
15   6             TABLE ACCESS (FULL) OF 'AR_CASH_RECEIPTS_ALL' (Cost=60412 Card=8969141 Bytes=394642204)
16   3         TABLE ACCESS (FULL) OF 'AR_RECEIVABLE_APPLICATIONS_ALL' (Cost=337109 Card=15613237 Bytes=265425029)
Hi,
The plan between 9i and 10g is pretty much the same, but the amount of data fetched has increased considerably. I suspect the query was performing slowly even in 9i.
The AR_CASH_RECEIPT_HISTORY_ALL step now shows 332,000 rows in 10g, whereas it showed 17,532 in 9i.
AR_CASH_RECEIPT_HISTORY_N2 now shows 332,000 rows in 10g, whereas in 9i it showed 1,740.
Try creating some indexes on
AR_CASH_RECEIPTS_ALL
HZ_CUST_ACCOUNTS -
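A sketch of what that might look like; the index names are made up and the column lists are guesses based on the predicates in the plan above, so verify against the indexes that already exist before creating anything:

```sql
-- Hypothetical names; check DBA_INDEXES first so you don't
-- duplicate an existing index on these columns.
CREATE INDEX ar_cash_receipts_fix1
    ON ar_cash_receipts_all (cash_receipt_id, status, receipt_number);

CREATE INDEX hz_cust_accounts_fix1
    ON hz_cust_accounts (cust_account_id, account_number);
```

Whether the optimizer will actually prefer these over the full scans depends on the selectivity of the predicates, so compare the plans before and after.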
Query takes a long time on EBAN table
Hi,
I am trying to execute a simple SELECT statement on the EBAN table. This query takes an unexpectedly long time to execute.
Query is :
SELECT banfn bnfpo ernam badat ebeln ebelp
INTO TABLE gt_eban
FROM eban FOR ALL ENTRIES IN gt_ekko_ekpo
WHERE
banfn IN s_banfn AND
ernam IN s_ernam
and ebeln = gt_ekko_ekpo-ebeln AND
ebelp = gt_ekko_ekpo-ebelp.
Structure of gt_ekko_ekpo
TYPES : BEGIN OF ty_ekko_ekpo,
ebeln TYPE ekko-ebeln,
ebelp TYPE ekpo-ebelp,
bukrs TYPE ekko-bukrs,
aedat TYPE ekko-aedat,
lifnr TYPE ekko-lifnr,
ekorg TYPE ekko-ekorg,
ekgrp TYPE ekko-ekgrp,
waers TYPE ekko-waers,
bedat TYPE ekko-bedat,
otb_value TYPE ekko-otb_value,
otb_res_value TYPE ekko-otb_res_value,
matnr TYPE ekpo-matnr,
werks TYPE ekpo-werks,
matkl TYPE ekpo-matkl,
elikz TYPE ekpo-elikz,
wepos TYPE ekpo-wepos,
emlif TYPE ekpo-emlif,
END OF ty_ekko_ekpo.
Structure of GT_EBAN
TYPES : BEGIN OF ty_eban,
banfn TYPE eban-banfn,
bnfpo TYPE eban-bnfpo,
ernam TYPE eban-ernam,
badat TYPE eban-badat,
ebeln TYPE eban-ebeln,
ebelp TYPE eban-ebelp,
END OF ty_eban.
The query seems OK to me, but I am still not able to figure out the reason for this performance issue.
Please provide your inputs.
Thanks.
Richa
Hi Richa,
Maybe you are executing the query with S_BANFN empty. Still, based on note 191492, you should change your query to something like the following:
1st Suggestion:
if gt_ekko_ekpo[] is not initial.
SELECT banfn bnfpo INTO TABLE gt_eket
FROM eket FOR ALL ENTRIES IN gt_ekko_ekpo
WHERE
ebeln = gt_ekko_ekpo-ebeln AND
ebelp = gt_ekko_ekpo-ebelp.
if sy-subrc = 0.
delete gt_eket where banfn not in s_banfn.
if gt_eket[] is not initial.
SELECT banfn bnfpo ernam badat ebeln ebelp
INTO TABLE gt_eban
FROM eban FOR ALL ENTRIES IN gt_eket
WHERE
banfn = gt_eket-banfn
and bnfpo = gt_eket-bnfpo.
if sy-subrc = 0.
delete gt_eban where ernam not in s_ernam.
endif.
endif.
endif.
endif.
2nd Suggestion:
if gt_ekko_ekpo[] is not initial.
SELECT banfn bnfpo INTO TABLE gt_eket
FROM eket FOR ALL ENTRIES IN gt_ekko_ekpo
WHERE
ebeln = gt_ekko_ekpo-ebeln AND
ebelp = gt_ekko_ekpo-ebelp.
if sy-subrc = 0.
delete gt_eket where banfn not in s_banfn.
if gt_eket[] is not initial.
SELECT banfn bnfpo ernam badat ebeln ebelp
INTO TABLE gt_eban
FROM eban FOR ALL ENTRIES IN gt_eket
WHERE
banfn = gt_eket-banfn
and bnfpo = gt_eket-bnfpo
and ernam in s_ernam.
endif.
endif.
endif.
Hope this helps.
Regards,
R -
Taking long time to execute views
Hi All,
My query is taking a long time to execute (I am using standard views in my query).
XLA_INV_AEL_GL_V and XLA_WIP_AEL_GL_V: these standard views themselves take a long time to execute, but I need the info from these views.
WHERE gjh.je_batch_id = gjb.je_batch_id AND
gjh.je_header_id = gjl.je_header_id AND
gjh.je_header_id = xlawip.je_header_id AND
gjl.je_header_id = xlawip.je_header_id AND
gjl.je_line_num = xlawip.je_line_num AND
gcc.code_combination_id = gjl.code_combination_id AND
gjl.code_combination_id = xlawip.code_combination_id AND
gjb.set_of_books_id = xlawip.set_of_books_id AND
gjh.je_source = 'Inventory' AND
gjh.je_category = 'WIP' AND
gp.period_set_name = 'Accounting' AND
gp.period_name = gjl.period_name AND
gp.period_name = gjh.period_name AND
gp.start_date +1 between to_date(startdate,'DD-MON-YY') AND
to_date(enddate,'DD-MON-YY') AND
gjh.status =nvl(lstatus,gjh.status)
Could anyone help me make it execute faster?
Thanks
Madhu
[url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]When your query takes too long...
-
Simple query is taking long time
Hi Experts,
The query below is taking a long time.
[code]SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) ='COMPLETED';[/code]
This query returns 1,645,457 records.
[code]PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 625K| 341M| 45113 (1)|
| 1 | HASH JOIN | | 625K| 341M| 45113 (1)|
| 2 | NESTED LOOPS | | 611 | 14664 | 22 (0)|
| 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 48090 | 22 (0)|
| 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 3 | 0 (0)|
| 5 | TABLE ACCESS FULL | FAX_STAGE | 2324K| 1214M| 45076 (1)|
PLAN_TABLE_OUTPUT
Note
- 'PLAN_TABLE' is old version
15 rows selected.[/code]
The distinct number of records in each table.
[code]SELECT FAX_STATUS,count(*)
FROM fax_STAGE
GROUP BY FAX_STATUS;
FAX_STATUS COUNT(*)
BROKEN 10
Broken - New 9
Completed 2324493
New 20
SELECT is_deleted,COUNT(*)
FROM FAX_SOURCE
GROUP BY IS_DELETED;
IS_DELETED COUNT(*)
N 2290
Y 78[/code]
Total number of records in each table.
[code]SELECT COUNT(*) FROM ORL.FAX_SOURCE FSRC-- 2368
SELECT COUNT(*) FROM ORL.FAX_STAGE--2324532
SELECT COUNT(*) FROM APPS_GLOBAL.GLOBAL_BU_MAPPING--9
[/code]
To improve the performance of this query I have created the following indexes.
[code]Functional based index on UPPER (FSRC.FAX_NUMBER) ,UPPER (FS.DESTINATION) and UPPER (FS.FAX_STATUS).
Bitmap index on FSRC.IS_DELETED.
Normal Index on GBM.BU_ID and FSRC.BUID.
[/code]
But the performance of this query is still bad.
What can I do, apart from this, to improve the performance of this query?
Please help me.
Thanks in advance.
I have created the following indexes:
CREATE INDEX ORL.IDX_DESTINATION_RAM ON ORL.FAX_STAGE(UPPER("DESTINATION"))
CREATE INDEX ORL.IDX_FAX_STATUS_RAM ON ORL.FAX_STAGE(LOWER("FAX_STATUS"))
CREATE INDEX ORL.IDX_UPPER_FAX_STATUS_RAM ON ORL.FAX_STAGE(UPPER("FAX_STATUS"))
CREATE INDEX ORL.IDX_BUID_RAM ON ORL.FAX_SOURCE(BUID)
CREATE INDEX ORL.IDX_FAX_NUMBER_RAM ON ORL.FAX_SOURCE(UPPER("FAX_NUMBER"))
CREATE BITMAP INDEX ORL.IDX_IS_DELETED_RAM ON ORL.FAX_SOURCE(IS_DELETED)
After creating these indexes, performance improved.
But our DBA said that the new bitmap index on the FAX_SOURCE table (ORL.IDX_IS_DELETED_RAM) can cause locks on multiple rows if the IS_DELETED column is in use, and asked us to proceed with detailed tests.
I am sending the explain plans from before and after the indexes were created.
SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) =:B1;
--OLD without indexes
PLAN_TABLE_OUTPUT
Plan hash value: 3076973749
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 141K| 85M| 45130 (1)| 00:09:02 |
|* 1 | HASH JOIN | | 141K| 85M| 45130 (1)| 00:09:02 |
| 2 | NESTED LOOPS | | 611 | 18330 | 22 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 59540 | 22 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | FAX_STAGE | 23245 | 13M| 45106 (1)| 00:09:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - access(UPPER("FSRC"."FAX_NUMBER")=UPPER("FS"."DESTINATION"))
3 - filter("FSRC"."IS_DELETED"='N')
4 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
5 - filter(UPPER("FS"."FAX_STATUS")=SYS_OP_C2C(:B1))
21 rows selected.
--NEW with indexes.
PLAN_TABLE_OUTPUT
Plan hash value: 665032407
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5995 | 3986K| 3117 (1)| 00:00:38 |
|* 1 | HASH JOIN | | 5995 | 3986K| 3117 (1)| 00:00:38 |
| 2 | NESTED LOOPS | | 611 | 47658 | 20 (5)| 00:00:01 |
|* 3 | VIEW | index$_join$_002 | 2290 | 165K| 20 (5)| 00:00:01 |
|* 4 | HASH JOIN | | | | | |
|* 5 | HASH JOIN | | | | | |
PLAN_TABLE_OUTPUT
| 6 | BITMAP CONVERSION TO ROWIDS| | 2290 | 165K| 1 (0)| 00:00:01 |
|* 7 | BITMAP INDEX SINGLE VALUE | IDX_IS_DELETED_RAM | | | | |
| 8 | INDEX FAST FULL SCAN | IDX_BUID_RAM | 2290 | 165K| 8 (0)| 00:00:01 |
| 9 | INDEX FAST FULL SCAN | IDX_FAX_NUMBER_RAM | 2290 | 165K| 14 (0)| 00:00:01 |
|* 10 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | FAX_STAGE | 23245 | 13M| 3096 (1)| 00:00:38 |
|* 12 | INDEX RANGE SCAN | IDX_UPPER_FAX_STATUS_RAM | 9298 | | 2434 (1)| 00:00:30 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - access(UPPER("DESTINATION")="FSRC"."SYS_NC00035$")
3 - filter("FSRC"."IS_DELETED"='N')
4 - access(ROWID=ROWID)
5 - access(ROWID=ROWID)
7 - access("FSRC"."IS_DELETED"='N')
10 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
12 - access(UPPER("FAX_STATUS")=SYS_OP_C2C(:B1))
31 rows selected
Please confirm the DBA's comment. Will this bitmap index lock rows in my case?
Thanks. -
Query Prediction takes long time - After upgrade DB 9i to 10g
Hi all, Thanks for all your help.
We've got an issue in Discoverer. We are using Discoverer 10g (10.1.2.2) with APPS, and recently we upgraded the Oracle database from 9i to 10g.
After the database upgrade, when we try to run reports in Discoverer Plus, query prediction takes far longer than it used to (double or triple). Only the query prediction takes a long time; after that the query itself runs.
Has anyone seen this kind of issue before? Could you share your ideas/thoughts so that I can ask the DBA or sysadmin to change any settings on the Discoverer server side?
Thanks in advance
skat
Hi skat
Did you also upgrade your Discoverer from 9i to 10g or did you always have 10g?
If you weren't always on 10g, take a look inside the EUL5_QPP_STATS table by running SELECT COUNT(*) FROM EUL5_QPP_STATS on both the old and new systems.
I suspect you may well find that there are far more records in the old system than the new one. What this table stores is the statistics for the queries that have been run before. Using those statistics is how Discoverer can estimate how long queries will take to run. If you have few statistics then for some time Discoverer will not know how long previous queries will take. Also, the statistics table used by 9i is incompatible with the one used by 10g so you can't just copy them over, just in case you were thinking about it.
Personally, unless you absolutely rely on it, I would turn the query predictor off. You do this by editing your PREF.TXT (located on the middle-tier server at $ORACLE_HOME\Discoverer\util) and changing the value of QPPEnable to 0. After you have done this you need to run the Applypreferences script located in the same folder and then stop and start your Discoverer service. From that point on, queries will no longer try to predict how long they will take; they will just start running.
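For reference, the relevant entry in PREF.TXT looks something like this (a sketch; the exact surrounding entries vary by install, so locate the existing QPPEnable line rather than adding a duplicate):

```
# $ORACLE_HOME/Discoverer/util/pref.txt on the middle-tier server
QPPEnable = 0
```

Remember that the change only takes effect after running the Applypreferences script and restarting the Discoverer service.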
There is something else to check. Please run a query and look at the SQL. Do you by change see a database hint called NOREWRITE? If you do then this will also cause poor performance. Should you see this let me know and I will let you know how to override it.
If you have always been on 10g and you have only upgraded your database it could be that you have not generated your database statistics for the tables that Discoverer is using. You will need to speak with your DBA to see about having the statistics generated. Without statistics, the query predictor will be very, very slow.
Best wishes
Michael -
Add "cost center" query to a start condition?
Hi there,
we got a new requirement for one of our plants.
We're on SRM 5.0 classic scenario.
Is it possible to add a "cost center" query to a specific start condition (SWB_PROCUREMENT) of a workflow?
E.g. if a user uses cost center 4711 for a shopping cart item a specific cost center responsible xyz should approve this item.
If the user uses another cost center 4712 for a second item in this shopping cart this item should be approved by another cost center responsible abc.
Is that somehow possible ?
So far I did not find a suitable expression for cost center.
Thanks in advance for your answers.
Best regards,
Henning
Hi Masa,
thanks for your answer. Perhaps you also have a hint for the following:
I can't really find, in the mentioned thread or in note 731637, what happens when an SC with several items is partially approved.
Example:
SC with 3 items:
item 1 cc 1000
item 2 cc 2000
item 3 cc 1000
Let's say items 1 and 3 have been approved by the approver found via the BAdI and WS14500015. Is a PO or a purchase requisition created in the backend? Or is it only created after the whole SC has been approved (i.e. also item 2)?
Thanks for a hint and best regards,
Henning -
TIME_OUT dump: this query takes too long
hi experts,
In my report, a query is taking too long.
Please provide performance tips or suggestions.
select mkpf~mblnr mkpf~mjahr mkpf~usnam mkpf~vgart
mkpf~xabln mkpf~xblnr mkpf~zshift mkpf~frbnr
mkpf~bktxt mkpf~bldat mkpf~budat mkpf~cpudt
mkpf~cputm mseg~anln1 mseg~anln2 mseg~aplzl
mseg~aufnr mseg~aufpl mseg~bpmng mseg~bprme
mseg~bstme mseg~bstmg mseg~bukrs mseg~bwart
mseg~bwtar mseg~charg mseg~dmbtr mseg~ebeln
mseg~ebelp mseg~erfme mseg~erfmg mseg~exbwr
mseg~exvkw mseg~grund mseg~kdauf mseg~kdein
mseg~kdpos mseg~kostl mseg~kunnr mseg~kzbew
mseg~kzvbr mseg~kzzug mseg~lgort mseg~lifnr
mseg~matnr mseg~meins mseg~menge mseg~lsmng
mseg~nplnr mseg~ps_psp_pnr mseg~rsnum mseg~rspos
mseg~shkzg mseg~sobkz mseg~vkwrt mseg~waers
mseg~werks mseg~xauto mseg~zeile mseg~SGTXT
into table itab
from mkpf as mkpf
inner join mseg as mseg
on mkpf~MBLNR = mseg~mblnr
and mkpf~mjahr = mseg~mjahr.
No, the original query is the following; I use a WHERE clause with conditions:
select mkpf~mblnr mkpf~mjahr mkpf~usnam mkpf~vgart
mkpf~xabln mkpf~xblnr mkpf~zshift mkpf~frbnr
mkpf~bktxt mkpf~bldat mkpf~budat mkpf~cpudt
mkpf~cputm mseg~anln1 mseg~anln2 mseg~aplzl
mseg~aufnr mseg~aufpl mseg~bpmng mseg~bprme
mseg~bstme mseg~bstmg mseg~bukrs mseg~bwart
mseg~bwtar mseg~charg mseg~dmbtr mseg~ebeln
mseg~ebelp mseg~erfme mseg~erfmg mseg~exbwr
mseg~exvkw mseg~grund mseg~kdauf mseg~kdein
mseg~kdpos mseg~kostl mseg~kunnr mseg~kzbew
mseg~kzvbr mseg~kzzug mseg~lgort mseg~lifnr
mseg~matnr mseg~meins mseg~menge mseg~lsmng
mseg~nplnr mseg~ps_psp_pnr mseg~rsnum mseg~rspos
mseg~shkzg mseg~sobkz mseg~vkwrt mseg~waers
mseg~werks mseg~xauto mseg~zeile mseg~SGTXT
into table itab
from mkpf as mkpf
inner join mseg as mseg
on mkpf~MBLNR = mseg~mblnr
and mkpf~mjahr = mseg~mjahr
WHERE mkpf~budat IN budat
AND mkpf~usnam IN usnam
AND mkpf~vgart IN vgart
AND mkpf~xblnr IN xblnr
AND mkpf~zshift IN p_shift
AND mseg~bwart IN bwart
AND mseg~matnr IN matnr
AND mseg~werks IN werks
AND mseg~lgort IN lgort
AND mseg~charg IN charg
AND mseg~sobkz IN sobkz
AND mseg~lifnr IN lifnr
AND mseg~kunnr IN kunnr. -
Query takes very long time and analyze table hangs
Hi
One of our Oracle queries is taking a very long time (i.e. more than a day) and is affecting the business requirement of getting the report out in time.
I tried to analyze the table with the COMPUTE STATISTICS option; however, it hangs/runs forever on one of the huge tables.
Please let me know how to troubleshoot this issue.
Hi,
What's your Oracle version?
You should use the DBMS_STATS package, not ANALYZE.
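A minimal sketch of the DBMS_STATS call; the schema and table names are placeholders, and the sampling/parallel settings are just reasonable starting points for a large table:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',  -- placeholder schema
    tabname          => 'BIG_TABLE',  -- the table that hangs under ANALYZE ... COMPUTE
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- sample instead of computing
    degree           => 4,            -- parallel gathering, if resources allow
    cascade          => TRUE);        -- gather index statistics too
END;
/
```

Unlike ANALYZE ... COMPUTE STATISTICS, a sampled DBMS_STATS gather does not have to read the whole table, which is usually why it finishes where ANALYZE appears to hang.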
Regards, -
Hi All,
Working in EBS Version 11.5.10.2
The query below takes a long time. Please, I need some help with this issue:
select ood.organization_name
,to_char(cd.transaction_date,'RRRR/MM/DD HH24:MI:SS') trx_date
,gcc.segment1||'.'||gcc.segment2||'.'||gcc.segment3||'.'||gcc.segment4||'.'||gcc.segment5||'.'||gcc.segment6||'.'||gcc.segment7 account
,cd.base_transaction_value
,decode(transaction_type_name,
'Resource transaction',resource_code,
'WIP Assy Completion', (select msi.segment1 from mtl_system_items_b msi where msi.inventory_item_id = cd.primary_item_id and msi.organization_id = cd.organization_id)
,(select msi.segment1 from mtl_system_items_b msi where msi.inventory_item_id = cd.inventory_item_id and msi.organization_id = cd.organization_id)
) item_sub_element
,cd.transaction_type_name
,cd.operation_seq_num
,cd.department_code
,cd.resource_seq_num
,cd.subinventory_code
,cd.line_type_name accounting_type
,cd.primary_uom
,cd.primary_quantity
,cd.wip_entity_name job
,cd.basis
,cd.line_id line
,(select wsg.schedule_group_name from wip_schedule_groups wsg
where wsg.schedule_group_id = wdj.schedule_group_id
and wsg.organization_id = wdj.organization_id
) schedule_group_name
,(select msib.segment1 from mtl_system_items_b msib
where msib.inventory_item_id = wdj.primary_item_id
and msib.organization_id = wdj.organization_id
) assembly
,decode(wdj.status_type,3,'Released',4,'Complete',6,'On-Hold',14,'Pending Close',15,'Failed Close',12,'Closed') job_status
,wdj.date_released
,wdj.date_completed job_completion_date
,wdj.date_closed job_closed_date
,decode(wdj.job_type,1,'Standard',3,'Non-Standard') job_type
,wdj.class_code job_class
,cd.reason_name
,cd.reference
from cst_distribution_v cd
,org_organization_definitions ood
,gl_code_combinations gcc
,wip_discrete_jobs wdj
where cd.organization_id = ood.organization_id
and cd.reference_account = gcc.code_combination_id
and cd.wip_entity_id = wdj.wip_entity_id
and cd.organization_id = wdj.organization_id
and cd.transaction_date between to_date(fdate, 'RRRR/MM/DD HH24:MI:SS') and to_date(tdate, 'RRRR/MM/DD HH24:MI:SS')
and cd.organization_id = nvl(p_org_id, cd.organization_id)
Regards
Vijay
Thanks Pravin,
You are right, but even after creating the function-based index it still goes for a full table scan (FTS).
For example, I created this sample index:
create index pp_idx1 on pp(substr(mobile_no,-10,4))
My DB Version :- 10.2
Optimizer_mode=FIRST_ROWS
If you can help me.
Thanks,
Chitrasen
Instead of:
select * from <table_name> where substr(called_calling_no,-10,4)=9904;
try to stay with the same datatype. Don't rely on implicit type conversions:
select * from <table_name> where substr(called_calling_no,-10,4)='9904'; -
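To see why the quoted literal matters, compare the two predicates against the function-based index from earlier in the thread (the thread mixes mobile_no and called_calling_no as the column name; mobile_no is used throughout here):

```sql
-- Function-based index on the SUBSTR expression:
CREATE INDEX pp_idx1 ON pp (SUBSTR(mobile_no, -10, 4));

-- Can use the index: the predicate matches the indexed
-- expression and compares it to a string literal.
SELECT * FROM pp WHERE SUBSTR(mobile_no, -10, 4) = '9904';

-- Defeats the index: comparing to the number 9904 makes Oracle
-- wrap the indexed expression in TO_NUMBER(), which no longer
-- matches the index, so it falls back to a full table scan.
SELECT * FROM pp WHERE SUBSTR(mobile_no, -10, 4) = 9904;
```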
Sql Query taking very long time to complete
Hi All,
DB:oracle 9i R2
OS:sun solaris 8
Below is an SQL query taking a very long time to complete.
Could anyone help me out with this?
SELECT MAX (md1.ID) ID, md1.request_id, md1.jlpp_transaction_id,
md1.transaction_version
FROM transaction_data_arc md1
WHERE md1.transaction_name = :b2
AND md1.transaction_type = 'REQUEST'
AND md1.message_type_code = :b1
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = md1.request_id
AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
AND tdar2.ID > md1.ID)
GROUP BY md1.request_id,
md1.jlpp_transaction_id,
md1.transaction_version
Any alternate query to get the same results?
kindly let me know if any one knows.
regards,
kk.
Edited by: kk001 on Apr 27, 2011 11:23 AM
Dear
/* Formatted on 2011/04/27 08:32 (Formatter Plus v4.8.8) */
SELECT MAX (md1.ID) ID, md1.request_id, md1.jlpp_transaction_id,
md1.transaction_version
FROM transaction_data_arc md1
WHERE md1.transaction_name = :b2
AND md1.transaction_type = 'REQUEST'
AND md1.message_type_code = :b1
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = md1.request_id
AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
AND tdar2.ID > md1.ID)
GROUP BY md1.request_id
,md1.jlpp_transaction_id
,md1.transaction_version
Could you please post here:
(a) the available indexes on transaction_data_arc table
(b) the description of transaction_data_arc table
(c) and the formatted explain plan you will get after executing the query and issuing:
select * from table (dbms_xplan.display_cursor);
Hope this helps
Mohamed Houri -
Analyze a Query which takes longer time in Production server with ST03 only
Hi,
I want to analyze a query which takes a long time on the production server, with the ST03 t-code only.
Please provide me with the detailed steps to perform this with ST03.
ST03 - Expert mode - then I need to know the steps after this. I have checked many threads, so please don't send me the links.
Write steps in detail please.
<REMOVED BY MODERATOR>
Regards,
Sameer
Edited by: Alvaro Tejada Galindo on Jun 12, 2008 12:14 PM
Then please close the thread.
Greetings,
Blag. -
Rank Function taking a long time to execute in SAP HANA
Hi All,
I have a couple of reports with the rank function which are timing out or taking a really long time to execute. Is there any way to get the result in less time when rank functions are involved?
the following is a sample of how the Query looks,
SQL 1:
select a.column1,
b.column1,
rank () over(partition by a.column1 order by sum(b.column2) asc)
from "_SYS_BIC"."Analyticview1" b
join "Table1" a
on (a.column2 = b.column3)
group by a.column1,
b.column1;
SQL 2:
select a.column1,
b.column1,
rank () over( order by min(b.column1) asc) WJXBFS1
from "_SYS_BIC"."Analytic view2" b
cross join "Table 2" a
where (a.column2 like '%a%'
and b.column1 between 100 and 200)
group by a.column1,
b.column1
When I visualize the execution plan, the rank function is the one taking up the longest time frame. So I executed the same SQL without the rank()/partition/order by (only with SUM() in SQL 1 and MIN() in SQL 2); even that took around an hour to get the result.
1.Does anyone have an any idea to make these queries to execute faster?
2. Does the latency have anything to do with the rank function or could it be size of the result set?
3. is there any workaround to implement these rank function/partition inside the Analytic view itself? if yes, will this make it give the result faster?
Thank you for your help!!
-Gayathri
Krishna,
I tried both of them, Graphical and CE function,
It is also taking a long time to execute
Graphical view giving me the following error after 2 hr and 36 minutes
Could not execute 'SELECT ORDER_ID,ITEM_ID,RANK from "_SYS_BIC"."EMMAPERF/ORDER_FACT_HANA_CV" group by ...' in 2:36:23.411 hours .
SAP DBTech JDBC: [2048]: column store error: search table error: [2620] executor: plan operation failed
CE function - I aborted after 40 mins
Do you know the syntax to declare local variable to use in CE function? -
Taking a long time to execute a SELECT COUNT(*) statement
Hi all,
My table has 40 columns and no primary key column; it contains more than 5M records. It takes a long time to execute simple SQL statements.
For example, SELECT COUNT(*) takes 1 min 30 sec, while SELECT COUNT(index_column) finishes within 3 s. I did the following workarounds:
Analyzed the table.
Created the required indexes.
Yet I am still getting the same performance issues. Please help me solve this issue.
Thanks
BlueDiamond wrote:
COUNT(*) counts the number of rows produced by the query, whereas COUNT(1) counts the number of 1 values.
Would you care to show details that prove that?
In fact, if you use count(1) then the optimizer actually re-writes that internally as count(*).
COUNT(*) and COUNT(1) have identical executions.
Re: Count(*)/Count(1)
http://asktom.oracle.com/pls/asktom/f?p=100:11:6346014113972638::::P11_QUESTION_ID:1156159920245 -
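If you want to verify the rewrite on your own system, the plan's projection information shows it (any table works; DUAL is used here, and the exact output layout varies by version):

```sql
EXPLAIN PLAN FOR SELECT COUNT(1) FROM dual;

-- The 'ALL' format includes the Column Projection section,
-- which should show COUNT(*) even though the statement said
-- COUNT(1), i.e. the optimizer rewrote it internally.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, NULL, 'ALL'));
```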
Query is taking more time to execute
Hi,
The query is taking more time to execute.
But when I execute the same query on another server, it gives immediate output.
What is the reason for this?
Thanks in advance.
'My car doesn't start, please help me to start my car.'
Do you think we are clairvoyant?
Or is your salary subtracted for every letter you type here?
Please be aware this is not a chatroom, and we cannot see your webcam.
Sybrand Bakker
Senior Oracle DBA