Multi-view (table & graph) answer takes long time
Hi,
We are using OBIEE 10g, and a user has created a report that uses both a table and a graph to present the results. When I use either of them on its own, the response of the system is pretty good (about 30-40 seconds). The problem is that if I use both, the response time increases to 3.5 minutes. I would like to ask whether using both the table and the graph forces the engine to perform the fetches twice. If so, is there a way to change this behavior, maybe through a parameter option that I am not aware of? (Fetch the data once and then build all the presentations: table and graph.)
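For context on the question itself: in OBIEE 10g each view in a compound layout can issue its own physical request, so a table plus a graph over the same criteria may indeed hit the database twice unless the BI Server can serve the second request from its query cache. There is no per-report "fetch once" switch as far as I know, but enabling the query cache in NQSConfig.INI is the commonly suggested mitigation. A hedged sketch; the path and size values below are illustrative placeholders, not recommendations:

```ini
# NQSConfig.INI -- [CACHE] section (illustrative values only)
[CACHE]
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;
```

With the cache warm, a second view whose logical request matches a cached entry can be answered without re-running the physical SQL.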
Here are the queries and their respective execution plans, as used by the optimizer, in case they are of any help. My apologies if I am posting them in the wrong forum.
Thank you in advance
Theodore
1a. Simple query (using either a table or a graph)
/* Formatted on 6/12/2013 7:53:27 AM (QP5 v5.139.911.3011) */
SELECT DISTINCT D1.c5 AS c1,
D1.c6 AS c2,
D1.c4 AS c3,
D1.c1 AS c4,
D1.c3 AS c5,
D1.c3 / NULLIF (D1.c1, 0) AS c6,
D1.c2 AS c7,
D1.c2 / NULLIF (D1.c1, 0) AS c8
FROM ( SELECT /*+ PARALLEL(T16913,8) */
SUM (T16913.PHA_QUANTITY) AS c1,
SUM (T16913.EXEC_COST) AS c2,
SUM (T16913.EXEC_VALUE) AS c3,
COUNT (DISTINCT T366.BARCODE) AS c4,
T19403.YEAR_MONTH AS c5,
T7252.PERCENTAGE AS c6
FROM EXECALENDAR_DIM T19403,
DRUG_DIM T366,
PERCENTAGE_DIM T7252,
PRESCDRUG_FACT T16913
WHERE ( T366.DWHKEY = T16913.DRU_DWHKEY
AND T7252.DWHKEY = T16913.PER_DWHKEY
AND T16913.EXECUTION_DATE = T19403.CALENDAR_DATE
AND T19403.YEAR_MONTH = '201301')
GROUP BY T7252.PERCENTAGE, T19403.YEAR_MONTH) D1
ORDER BY c1, c2
1b. Execution plan for (1a)

ID | PID | Operation          | Name            | Rows  | Bytes | Cost | CPU Cost | IO Cost | IN-OUT | PQ Dist    | PStart | PStop
---|-----|--------------------|-----------------|-------|-------|------|----------|---------|--------|------------|--------|------
 0 |     | SELECT STATEMENT   |                 | 4     | 252   | 4279 | 3G       | 4176    |        |            |        |
 1 | 0   | PX COORDINATOR     |                 |       |       |      |          |         |        |            |        |
 2 | 1   | PX SEND QC (ORDER) | SYS.:TQ10006    | 4     | 252   | 4279 | 3G       | 4176    | P->S   | QC (ORDER) |        |
 3 | 2   | SORT ORDER BY      |                 | 4     | 252   | 4279 | 3G       | 4176    | PCWP   |            |        |
 4 | 3   | PX RECEIVE         |                 | 4     | 252   | 4279 | 3G       | 4176    | PCWP   |            |        |
 5 | 4   | PX SEND RANGE      | SYS.:TQ10005    | 4     | 252   | 4279 | 3G       | 4176    | P->P   | RANGE      |        |
 6 | 5   | SORT GROUP BY      |                 | 4     | 252   | 4279 | 3G       | 4176    | PCWP   |            |        |
 7 | 6   | PX RECEIVE         |                 | 4     | 252   | 4279 | 3G       | 4176    | PCWP   |            |        |
 8 | 7   | PX SEND HASH       | SYS.:TQ10004    | 4     | 252   | 4279 | 3G       | 4176    | P->P   | HASH       |        |
 9 | 8   | SORT GROUP BY      |                 | 4     | 252   | 4279 | 3G       | 4176    | PCWP   |            |        |
10 | 9   | PX RECEIVE         |                 | 4     | 252   | 4279 | 3G       | 4176    | PCWP   |            |        |
11 | 10  | PX SEND HASH       | SYS.:TQ10003    | 4     | 252   | 4279 | 3G       | 4176    | P->P   | HASH       |        |
12 | 11  | SORT GROUP BY      |                 | 4     | 252   | 4279 | 3G       | 4176    | PCWP   |            |        |
13 | 12  | HASH JOIN          |                 | 10M   | 601M  | 4199 | 745M     | 4176    | PCWP   |            |        |
14 | 13  | BUFFER SORT        |                 |       |       |      |          |         | PCWC   |            |        |
15 | 14  | PX RECEIVE         |                 | 31820 | 466K  | 199  | 11M      | 199     | PCWP   |            |        |
16 | 15  | PX SEND BROADCAST  | SYS.:TQ10000    | 31820 | 466K  | 199  | 11M      | 199     | S->P   | BROADCAST  |        |
17 | 16  | TABLE ACCESS FULL  | DRUG_DIM        | 31820 | 466K  | 199  | 11M      | 199     |        |            |        |
18 | 13  | HASH JOIN          |                 | 10M   | 457M  | 3995 | 592M     | 3977    | PCWP   |            |        |
19 | 18  | BUFFER SORT        |                 |       |       |      |          |         | PCWC   |            |        |
20 | 19  | PX RECEIVE         |                 | 5     | 30    | 2    | 8071     | 2       | PCWP   |            |        |
21 | 20  | PX SEND BROADCAST  | SYS.:TQ10001    | 5     | 30    | 2    | 8071     | 2       | S->P   | BROADCAST  |        |
22 | 21  | TABLE ACCESS FULL  | PERCENTAGE_DIM  | 5     | 30    | 2    | 8071     | 2       |        |            |        |
23 | 18  | NESTED LOOPS       |                 | 10M   | 400M  | 3989 | 450M     | 3975    | PCWP   |            |        |
24 | 23  | BUFFER SORT        |                 |       |       |      |          |         | PCWC   |            |        |
25 | 24  | PX RECEIVE         |                 |       |       |      |          |         | PCWP   |            |        |
26 | 25  | PX SEND BROADCAST  | SYS.:TQ10002    |       |       |      |          |         | S->P   | BROADCAST  |        |
27 | 26  | TABLE ACCESS FULL  | EXECALENDAR_DIM | 30    | 390   | 29   | 1953824  | 29      |        |            |        |
28 | 23  | PX BLOCK ITERATOR  |                 | 328K  | 9309K | 132  | 14M      | 132     | PCWC   |            | KEY    | KEY
29 | 28  | TABLE ACCESS FULL  | PRESCDRUG_FACT  | 328K  | 9309K | 132  | 14M      | 132     | PCWP   |            | KEY    | KEY
2a. Complicated query (using both a table and a graph)
/* Formatted on 6/12/2013 7:34:07 AM (QP5 v5.139.911.3011) */
SELECT DISTINCT D1.c4 AS c1,
D1.c5 AS c2,
D1.c6 AS c3,
D1.c7 AS c4,
D1.c8 AS c5,
D1.c9 AS c6,
D1.c10 AS c7,
D1.c11 AS c8,
D2.c12 AS c9,
D2.c13 AS c10,
D2.c14 AS c11,
D1.c1 AS c12,
D1.c2 AS c13,
D1.c3 AS c14
FROM ( SELECT MAX (D1.c4) AS c1,
MAX (D1.c3) / NULLIF (MAX (D1.c1), 0) AS c2,
MAX (D1.c2) / NULLIF (MAX (D1.c1), 0) AS c3,
D1.c10 AS c4,
D1.c5 AS c5,
D1.c9 AS c6,
D1.c6 AS c7,
D1.c8 AS c8,
D1.c8 / NULLIF (D1.c6, 0) AS c9,
D1.c7 AS c10,
D1.c7 / NULLIF (D1.c6, 0) AS c11
FROM (SELECT SUM (D1.c6) OVER (PARTITION BY D1.c5) AS c1,
SUM (D1.c7) OVER (PARTITION BY D1.c5) AS c2,
SUM (D1.c8) OVER (PARTITION BY D1.c5) AS c3,
SUM (D1.c4) OVER (PARTITION BY D1.c5) AS c4,
D1.c5 AS c5,
D1.c6 AS c6,
D1.c7 AS c7,
D1.c8 AS c8,
D1.c9 AS c9,
D1.c10 AS c10
FROM ( SELECT COUNT (
CASE D1.c12
WHEN 1 THEN D1.c11
ELSE NULL
END)
AS c4,
D1.c5 AS c5,
SUM (D1.c13) AS c6,
SUM (D1.c14) AS c7,
SUM (D1.c15) AS c8,
COUNT (DISTINCT D1.c11) AS c9,
D1.c10 AS c10
FROM (SELECT /*+ PARALLEL(T16913,8) */
             T7252.PERCENTAGE AS c5,
             T19403.YEAR_MONTH AS c10,
             T366.BARCODE AS c11,
             ROW_NUMBER ()
               OVER (PARTITION BY T7252.PERCENTAGE, T366.BARCODE
                     ORDER BY T7252.PERCENTAGE DESC, T366.BARCODE DESC)
               AS c12,
             T16913.PHA_QUANTITY AS c13,
             T16913.EXEC_COST AS c14,
             T16913.EXEC_VALUE AS c15
      FROM EXECALENDAR_DIM T19403,
           DRUG_DIM T366,
           PERCENTAGE_DIM T7252,
           PRESCDRUG_FACT T16913
      WHERE (T366.DWHKEY = T16913.DRU_DWHKEY
             AND T7252.DWHKEY = T16913.PER_DWHKEY
             AND T16913.EXECUTION_DATE = T19403.CALENDAR_DATE
             AND T19403.YEAR_MONTH = '201305')) D1
GROUP BY D1.c5, D1.c10) D1) D1,
(SELECT /*+ PARALLEL(T16913,8) */
        SUM (T16913.PHA_QUANTITY) AS c1,
SUM (T16913.EXEC_COST) AS c2,
SUM (T16913.EXEC_VALUE) AS c3,
COUNT (DISTINCT T366.BARCODE) AS c4
FROM EXECALENDAR_DIM T19403,
DRUG_DIM T366,
PRESCDRUG_FACT T16913
WHERE ( T366.DWHKEY = T16913.DRU_DWHKEY
AND T16913.EXECUTION_DATE = T19403.CALENDAR_DATE
AND T19403.YEAR_MONTH = '201305')) D2
GROUP BY D1.c7 / NULLIF (D1.c6, 0),
D1.c8 / NULLIF (D1.c6, 0),
D1.c5,
D1.c6,
D1.c7,
D1.c8,
D1.c9,
D1.c10) D1,
(SELECT MAX (D2.c4) AS c12,
MAX (D2.c3) / NULLIF (MAX (D2.c1), 0) AS c13,
MAX (D2.c2) / NULLIF (MAX (D2.c1), 0) AS c14
FROM (SELECT SUM (D1.c6) OVER (PARTITION BY D1.c5) AS c1,
SUM (D1.c7) OVER (PARTITION BY D1.c5) AS c2,
SUM (D1.c8) OVER (PARTITION BY D1.c5) AS c3,
SUM (D1.c4) OVER (PARTITION BY D1.c5) AS c4,
D1.c5 AS c5,
D1.c6 AS c6,
D1.c7 AS c7,
D1.c8 AS c8,
D1.c9 AS c9,
select count(table2_pk) from view where table2_pk=1234
returns in ~ 1 second.
The plan is
SELECT STATEMENT
NESTED LOOPS (OUTER)
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS (OUTER)
NESTED LOOPS
TABLE ACCESS (BY INDEX ROWID) OF TABLE2
INDEX (UNIQUE SCAN) OF PK_TABLE2 (UNIQUE)
INLIST ITERATOR
TABLE ACCESS (BY INDEX ROWID) OF TABLE1
INDEX RANGE SCAN OF PK_TABLE1
VIEW PUSHED PREDICATE OF TABLE3
TABLE ACCESS (BY INDEX ROWID) OF TABLE2
INDEX (UNIQUE SCAN) OF PK_TABLE3
TABLE ACCESS (BY INDEX ROWID) OF TABLE4
INDEX (UNIQUE SCAN) OF PK_TABLE4
INDEX (UNIQUE SCAN) OF PK_TABLE5
VIEW PUSHED PREDICATE OF TABLE3
TABLE ACCESS (BY INDEX ROWID) OF TABLE3
INDEX RANGE SCAN OF PK_TABLE3
The plans for both the unordered and ordered queries are the same!
Thanks
Similar Messages
-
INSERT INTO TABLE using SELECT takes long time
Hello Friends,
--- Oracle version 10.2.0.4.0
--- I am trying to insert around 2.5 lakh (250,000) records into a table using INSERT .. SELECT. The insert takes a long time and seems to be hung.
--- When I try the SELECT on its own, the query fetches the rows in 10 seconds.
--- Any clue why it is taking so much time?
vishalrs wrote:
Hello Friends,
hello
>
>
--- Oracle version 10.2.0.4.0
alright
--- I am trying to insert around 2.5 lakhs records in a table using INSERT ..SELECT. The insert takes long time and seems to be hung.
I don't know how much a lakh is, but it sounds like a lot...
--- When i try to SELECT the query fetches the rows in 10 seconds.
How did you test this? And did you fetch the last record, or just the first couple of hundred?
--- Any clue why it is taking so much time?
Without seeing anything, it's impossible to tell the reason.
Search the forum for "When your query takes too long" -
Table count itself takes long time
Hi,
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
I have a table which contains only a few records. INSERT/UPDATE (DML) operations happen very often on this table.
When I try to count the records of this table, it takes a long time.
Select count(*) from test;
But when I do a count on some of the big tables, it takes much less time compared to this.
Please guide me: what are all the steps I need to take to measure this?
Regards,
Faiz
mafaiz wrote:
Hi,
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
I have an table which contains only few records. INSERT,UPDATE(DML) operations happening more often on this table.
When i try to count records of this table itself taking long time.
Select count(*) from test;
But when i do count of table for some of the big tables takes very less time compare to this.
Guide me what are all the steps i need to measure?
This question is not APEX-specific and is more appropriate to the {forum:id=75} forum, making use of the Re: 3. How to improve the performance of my query? / My query is running slow.. thread. -
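On the slow COUNT(*) over a small, heavily modified table above: a classic cause is a high-water mark left inflated by past inserts and deletes, so the full scan still reads many now-empty blocks. A hedged sketch of one common remedy (table name taken from the post; SHRINK SPACE requires an ASSM tablespace and row movement, so check your environment first):

```sql
-- Compact the segment and lower the high-water mark so that a
-- full scan no longer reads the empty blocks left behind by DML.
ALTER TABLE test ENABLE ROW MOVEMENT;
ALTER TABLE test SHRINK SPACE;
```

Comparing the number of blocks in USER_SEGMENTS before and after would confirm whether bloat was the problem.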
SELECT DISTINCT xxxx
TO_CHAR(TRUNC(MTRAN.DAT_TRANSACTION_MTRAN), 'YYYY') DAT_TRANSACTION_MTRAN,
NVL((SELECT DISTINCT SUM(MTRAN1.QTY_PRIMARY_MTRAN)
FROM MAM_MATERIAL_TRANSACTIONS MTRAN1
WHERE TO_CHAR(TRUNC(MTRAN1.DAT_TRANSACTION_MTRAN),
'YYYY') =
TO_CHAR(TRUNC(MTRAN.DAT_TRANSACTION_MTRAN),
'YYYY')
AND TO_CHAR(TRUNC(MTRAN1.DAT_TRANSACTION_MTRAN),
'MM') = '01'
AND MTRAN1.MTYP_TRANSACTION_TYPE_ID IN
(41, 42, 43, 44, 45)
AND MTRAN1.ITEM_ITEM_ID_FOR = MTRAN.ITEM_ITEM_ID_FOR),
0) QTY_FARVARDIN,
NVL((SELECT DISTINCT SUM(MTRAN1.QTY_PRIMARY_MTRAN)
FROM M
I have a view like the above, with 10 subselects against one table.
This view takes a long time to execute; how could I decrease this time?
user498843 wrote:
SELECT DISTINCT xxxx
TO_CHAR(TRUNC(MTRAN.DAT_TRANSACTION_MTRAN), 'YYYY') DAT_TRANSACTION_MTRAN,
NVL((SELECT DISTINCT SUM(MTRAN1.QTY_PRIMARY_MTRAN)
FROM MAM_MATERIAL_TRANSACTIONS MTRAN1
WHERE TO_CHAR(TRUNC(MTRAN1.DAT_TRANSACTION_MTRAN),
'YYYY') =
TO_CHAR(TRUNC(MTRAN.DAT_TRANSACTION_MTRAN),
'YYYY')
AND TO_CHAR(TRUNC(MTRAN1.DAT_TRANSACTION_MTRAN),
'MM') = '01'
AND MTRAN1.MTYP_TRANSACTION_TYPE_ID IN
(41, 42, 43, 44, 45)
AND MTRAN1.ITEM_ITEM_ID_FOR = MTRAN.ITEM_ITEM_ID_FOR),
0) QTY_FARVARDIN,
NVL((SELECT DISTINCT SUM(MTRAN1.QTY_PRIMARY_MTRAN)
FROM M
i have a view like above with 10 select to select on one table.
this view take long time to execute ,how could i decrese this time!
Thread: HOW TO: Post a SQL statement tuning request - template posting
HOW TO: Post a SQL statement tuning request - template posting -
CV04N takes long time to process select query on DRAT table
Hello Team,
While using CV04N to display DIRs, it takes a long time to process the select query on the DRAT table. This query includes all the key fields. Any idea how to analyse this?
Thanks and best regards,
Bobby
Moderator message: please read the sticky threads of this forum, there is a lot of information on what you can do.
Edited by: Thomas Zloch on Feb 24, 2012
Be aware that XP takes approx. 1 GB of your RAM, leaving you with 1 GB for whatever else is running. MS Outlook is also a memory hog.
To check Virtual Memory Settings:
Control Panel -> System
System Properties -> Advanced Tab -> Performance Settings
Performance Options -> Advanced Tab -> Virtual Memory section
Virtual Memory -
what are
* Initial Size
* Maximum Size
In a presentation at one of the Hyperion conferences years ago, Mark Ostroff suggested that the initial be set to the same as Max. (Max is typically 2x physical RAM)
These changes may provide some improvement. -
Materialized view takes long time to refresh, but when I tried select & insert into a table it's very fast.
I executed the SQL and it takes just 1 min (total rows: 447),
but when I refresh the MVIEW it takes 1.5 hrs (total rows: 447).
MVIEW configuration:
CREATE MATERIALIZED VIEW EVAL.EVALSEARCH_PRV_LWC
TABLESPACE EVAL_T_S_01
NOCACHE
NOLOGGING
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
REFRESH FORCE ON DEMAND
WITH PRIMARY KEY
Not sure why there is so much difference.
infant_raj wrote:
Materialized view takes long time to refresh but when i tried select & insert into table it's very fast.
i executed SQL and it takes ust 1min ( total rows is 447 )
but while i refresh the MVIEW it takes 1.5 hrs ( total rows is 447 )
A SELECT does a consistent read.
A MV refresh does that and also writes database data.
These are not the same thing and cannot be directly compared.
So instead of pointing at the SELECT execution time and asking why the MV refresh is not as fast, look instead WHAT the refresh is doing and HOW it is doing that.
Is the execution plan sane? What events are the top ones for the MV refresh? What are the wait states that contributes most to the processing time of the refresh?
You cannot use the SELECT statement's execution time as a direct comparison metric. The work done by the refresh is more than the work done by the SELECT. You need to determine exactly what work is done by the refresh and whether that work is done in a reasonable time, and how other sessions are impacting the refresh (it could very well be blocked by another session). -
Takes Long time for Data Loading.
Hi All,
Good Morning.. I am new to SDN.
Currently I am using the datasource 0CRM_SRV_PROCESS_H, which contains 225 fields. Currently I am using around 40 fields in my report.
Can I hide the remaining fields at the datasource level itself (TCODE: RSA6)?
Currently, data loading takes a long time from PSA to the ODS (ODS 1).
Also, right now I am pulling some data from another ODS (ODS 2) (lookup). It takes a long time to update the data in the active data table of the ODS.
Can you please suggest how to improve the performance of data loading in this case?
Thanks & Regards,
Siva
Hi....
Yes...u can hide..........just Check the hide box for those fields.......R u in BI 7.0 or BW...........whatever ........is the no of records is huge?
If so u can split the records and execute............I mean use the same IP...........just execute it with different selections.........
Check in ST04............is there are any locks or lockwaits..........if so...........Go to SM37 >> Check whether any Long running job is there or not.........then check whether that job is progressing or not............double click on the Job >> From the Job details copy the PID..............go to ST04 .....expand the node............and check whether u r able to find that PID there or not.........
Also check System log in SM21............and shortdumps in ST04........
Now to improve performance...........u can try to increase the virtual memory or servers.........if possiblr........it will increase the number of work process..........since if many jobs run at a time .then there will be no free Work prrocesses to proceed........
Regards,
Debjani...... -
The 0co_om_opa_6 ip in the process chains takes long time to run
Hi experts,
The 0CO_OM_OPA_6 InfoPackage in the process chains takes a long time to run: around 5 hours in production.
I have checked the note 382329,
-> where the indexes 1 and 4 are active
-> index 4 showed "Index does not exist in database system ORACLE". I assigned it to "Indexes on all database systems" and ran the delta load in the development system, but I guess there is not much data in dev; it took 2-1/2 hrs to run, as it was taking earlier, so I didn't find much difference in performance.
As per the note Note 549552 - CO line item extractors: performance, i have checked in the table BWOM_SETTINGS these are the settings that are there in the ECC system.
-> OLTPSOURCE - is blank
PARAM_NAME - OBJSELSIZE
PARAM_VALUE- is blank
-> OLTPSOURCE - is blank
PARAM_NAME - NOTSSELECT
PARAM_VALUE- is blank
-> OLTPSOURCE- 0CO_OM_OPA_6
PARAM_NAME - NOBLOCKING
PARAM_VALUE- is blank.
Could you please check if any other settings needs to be done .
Also, for the IP there is a selection criterion for FISCALYEAR/PERIOD from 2004-2099, and an init has been done for the same period; as a result it is becoming difficult for me to load for a single year.
Please suggest.
The problem was that index 4 was not active at the database level. It was recommended by the SAP team to activate it in SE14; however, while doing so we faced a few issues. SE14 is a very sensitive transaction and should be handled carefully... the index should be activated, not created.
The OBJSELSIZE in the table BWOM_SETTINGS has to be marked 'X' to improve the quality, and index 4 should be activated at the ABAP level, i.e. in table COEP -> INDEXES -> INDEX 4 -> select "Index on all database systems" in place of "No database index". Once it is activated at the ABAP (table) level, you can activate the same index at the database level.
Be very careful when you execute it in SE14; it is best to use DB02 to do the same, as Basis tends to make fewer mistakes there.
Thanks. Hope this helps.
SELECT statement takes long time
Hi All,
In the following code, if T_QMIH-EQUNR contains blank or space values, the SELECT statement takes a long time to access the data from the OBJK table. If T_QMIH-EQUNR contains values other than blank, performance is good and it fetches the data very fast.
We already have an index on EQUNR in the OBJK table.
Only for blank entries does it take much time. Can anybody tell me why it behaves this way for blank entries?
if not T_QMIH[] IS INITIAL.
SORT T_QMIH BY EQUNR.
REFRESH T_OBJK.
SELECT EQUNR OBKNR
FROM OBJK INTO TABLE T_OBJK
FOR ALL ENTRIES IN T_QMIH
WHERE OBJK~TASER = 'SER01' AND
OBJK~EQUNR = T_QMIH-EQUNR.
Thanks
Ajay
Hi
You can use the field QMIH-QMNUM with OBJK-IHNUM
in QMIH table, EQUNR is not primary key, it will have multiple entries
so to improve the performance use one dummy internal table for QMIH and sort it on EQUNR
delete adjacent duplicates from d_qmih and use the same in for all entries
this will improve the performance.
Also use the fields in sequence of the index and primary keys also in select
if not T_QMIH[] IS INITIAL.
SORT T_QMIH BY EQUNR.
REFRESH T_OBJK.
SELECT EQUNR OBKNR
FROM OBJK INTO TABLE T_OBJK
FOR ALL ENTRIES IN T_QMIH
WHERE OBJK~IHNUM = T_QMIH-QMNUM AND
OBJK~TASER = 'SER01' AND
OBJK~EQUNR = T_QMIH-EQUNR.
try this and let me know
regards
Shiva -
Procedure takes long time to execute...
Hi all
I wrote the procedure below, but it takes a long time to execute.
The INTERDATA table contains 300 records.
Here is the procedure:
create or replace procedure inter_filter
is
/*v_sessionid interdata.sessionid%type;
v_clientip interdata.clientip%type;
v_userid interdata.userid%type;
v_logindate interdata%type;
v_createddate interdata%type;
v_sourceurl interdata%type;
v_destinationurl interdata%type;*/
v_sessionid filter.sessionid%type;
v_filterid filter.filterid%type;
cursor c1 is
select sessionid,clientip,browsertype,userid,logindate,createddate,sourceurl,destinationurl
from interdata;
cursor c2 is
select sessionid,filterid
from filter;
begin
open c2;
loop
fetch c2 into v_sessionid,v_filterid;
for i in c1 loop
if i.sessionid = v_sessionid then
insert into filterdetail(filterdetailid,filterid,sourceurl,destinationurl,createddate)
values (filterdetail_seq.nextval,v_filterid,i.sourceurl,i.destinationurl,i.createddate);
else
insert into filter (filterid,sessionid,clientip,browsertype,userid,logindate,createddate)
values (filter_seq.nextval,i.sessionid,i.clientip,i.browsertype,i.userid,i.logindate,i.createddate);
insert into filterdetail(filterdetailid,filterid,sourceurl,destinationurl,createddate)
values (filterdetail_seq.nextval,filter_seq.currval,i.sourceurl,i.destinationurl,i.createddate);
end if;
end loop;
end loop;
commit;
end;
/
Please Help!
Prathamesh
i wrote the proxcedure but it takes long time to execute.
Please define "long time". How long does it take? What are you expecting it to take?
The INterdata table contains 300 records.
But how many records are there in the FILTER table? As this is the one you are driving off, this is going to determine the length of time it takes to complete. Also, this solution inserts every row in the INTERDATA table for each row in the FILTER table - in other words, if the FILTER table has twenty rows to start with, you are going to end up with 6000 rows in FILTERDETAIL. No wonder it takes a long time. Is that what you want?
Also of course, you are using PL/SQL cursors when you ought to be using set operations. Did you try the solution I posted in Re: Confusion in this scenario>>>>>>> on this topic?
Cheers, APC -
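To make APC's point about set operations concrete: the nested cursor loops above can usually be collapsed into plain INSERT ... SELECT statements, one per branch of the IF. A rough sketch against the tables named in the post; the join condition and the NOT EXISTS test are assumptions about the intended logic and would need verifying:

```sql
-- Branch 1: sessions already present in FILTER get detail rows only.
INSERT INTO filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
SELECT filterdetail_seq.NEXTVAL, f.filterid, i.sourceurl, i.destinationurl, i.createddate
FROM   interdata i
       JOIN filter f ON f.sessionid = i.sessionid;

-- Branch 2: sessions not yet in FILTER get a new FILTER row first.
INSERT INTO filter (filterid, sessionid, clientip, browsertype, userid, logindate, createddate)
SELECT filter_seq.NEXTVAL, i.sessionid, i.clientip, i.browsertype, i.userid, i.logindate, i.createddate
FROM   interdata i
WHERE  NOT EXISTS (SELECT 1 FROM filter f WHERE f.sessionid = i.sessionid);
```

The matching FILTERDETAIL rows for the second branch can then be added with a third INSERT ... SELECT joining back to the newly created FILTER rows, which keeps the whole job set-based instead of row-by-row.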
It takes a long time to load the parameter's values when we run the report
Hi,
It takes a long time to load the parameter's values when we run the report. What could cause this? How can I troubleshoot the behavior of the report? Could I use Profiler, and which events should I select?
Thanks
Hi jori5,
Based on my understanding, after changing the parameter, the report render very slow, right?
In Reporting Services, the total time to generate a report includes TimeDataRetrieval, TimeProcessing and TimeRendering. To analyze which section takes the most time, we can check the ExecutionLog3 view in the ReportServer database. For more information, please refer to this article:
More tips to improve performance of SSRS reports.
In your scenario, since you mention the query itself takes little time, the delay might happen during the report processing and report rendering sections. So you should check ExecutionLog3 to see which section costs the most time, then you can refer to this article to optimize your report:
Troubleshooting Reports: Report Performance.
If you have any question, please feel free to ask.
Best regards,
Qiuyun Yu -
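To act on Qiuyun's suggestion, the ExecutionLog3 view in the ReportServer database exposes the three phases directly. A hedged T-SQL sketch (column names follow the standard ExecutionLog3 view; add a filter on your report's path as needed):

```sql
-- Recent report executions with time (ms) split into the three phases.
SELECT TOP 20
       ItemPath,
       TimeStart,
       TimeDataRetrieval,
       TimeProcessing,
       TimeRendering
FROM   ReportServer.dbo.ExecutionLog3
ORDER  BY TimeStart DESC;
```

Whichever of the three columns dominates tells you whether to tune the query, the report processing, or the rendering.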
Package compilation takes long time
Hi Guys,
When I try to compile a package on the dev server, it takes a long time. It is supposed to compile within seconds, and it used to, but suddenly it is taking a long time.
Please guide me on how I can start to resolve this issue.
Thanks
You are probably looking at a library cache pin/lock and need to find who is blocking.
There's a decent article on this below if you have access to the relevant views.
http://orainternals.wordpress.com/2009/06/02/library-cache-lock-and-library-cache-pin-waits/ -
Report execution takes long time
Dear all,
we have a report which takes a long time to execute due to a select statement. Here is the code:
SELECT vkorg vtweg spart kunnr kunn2 AS division FROM knvp
INTO CORRESPONDING FIELDS OF TABLE hier
WHERE kunn2 IN s_kunnr
AND vkorg EQ '0001'
AND parvw EQ 'ZV'.
l_parvw = 'WE'.
SORT hier.
* select all invoices within the specified invoice creation dates.
CHECK NOT hier[] IS INITIAL.
SELECT vbrk~vbeln vbrk~fkart vbrk~waerk vbrk~vkorg vbrk~vtweg vbrk~spart vbrk~knumv
vbrk~konda vbrk~bzirk vbrk~pltyp vbrk~kunag vbrp~vbeln vbrp~aubel vbrp~posnr
vbrp~fkimg vbrp~matnr vbrp~prctr vbpa~kunnr
vbrp~pstyv vbrp~uepos
vbrp~kvgr4 vbrp~ean11
INTO CORRESPONDING FIELDS OF TABLE it_bill
FROM vbrk INNER JOIN vbrp ON vbrp~vbeln = vbrk~vbeln
INNER JOIN vbpa ON vbpa~vbeln = vbrk~vbeln
FOR ALL entries IN hier
WHERE (lt_syntax)
AND vbrk~vbeln IN s_vbeln
* AND vbrk~erdat IN r_period
AND vbrk~fkdat IN r_period
AND vbrk~rfbsk EQ 'C'
AND vbrk~vkorg EQ hier-vkorg
AND vbrk~vtweg EQ hier-vtweg
AND vbrk~spart EQ hier-spart.
Can anyone suggest how to reduce the execution time?
Edited by: Thomas Zloch on Sep 22, 2010 2:46 PM - please use code tags
Hi
First of all, never use MOVE-CORRESPONDING.
Rather, you should declare a work area for table hier.
Select values into the work area and then append that work area to the table hier.
In the case of FOR ALL ENTRIES, include all the primary keys in the selection, and for the keys which are of no use declare constants with initial values, like:
'prmkey' is a primary field for table 'tab1' .
CONSTANTS: field1 TYPE tab1-prmkey VALUE IS INITIAL.
and then in your where condition write.
prmkey GE field1.
I hope it is clear to you now.
Thanks
lalit Gupta -
Gif files takes long time to load..!!
Hi,
I found that GIF/JPG files take a long time (more than 5 or 6 seconds, depending on size) to render in the JavaHelp viewer EditorPane. I am using JDK 1.3 / Win 2000 / 256 MB / 1 GHz Pentium. I also found that the JVM uses 90% of the system resources while loading the picture files. I have noticed it reported as a JDK 1.3 bug in another thread.
http://forum.java.sun.com/thread.jsp?forum=42&thread=230409
I would like to know is there any other solution for this, keeping my JDK Version as it is?
Thanks
Reji
I would convert your images to JPEGs if possible, since they are so much smaller in size and therefore load much quicker. I had exactly the same problem and I sorted it out this way. I know it's a pain and it will probably screw up your view, but it will speed up your load time ten-fold and I don't know of any other workaround. Sorry...
-
Statistics gathering takes long time
Hi,
Can anyone suggest how to improve the time for gathering statistics?
OS version AIX 5.3, Oracle 10.2.0.4, database in ASM. Daily restoration dump size: 110 GB.
We perform restoration and statistics gathering daily on the UAT server.
Sometimes statistics gathering completes early; sometimes it takes a long time.
The server has 11 GB RAM and 2 CPUs.
I am using the following commands:
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname=>'FFPREPRODBATCH',estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname=>'FFPREPRODMASTER',estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname=>'FFPREPRODMCT',estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname=>'FFPREPRODCMS',estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname=>'FFPREPRODUSER',estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
Please suggest if I need to add some more parameters in order to complete the stats gathering earlier.
Thanks
You have only 2 CPUs and 110 GB of data.
I suggest that you prioritise what statistics do need to be gathered every day. Identify only the key tables where statistics need to be updated.
Hemant K Chitale
UPDATE : Remember that DataPump will be copying in statistics from the source. So, essentially, all tables will have statistics. You have to decide which tables really need to have updated statistics and gather_stats on those tables only.
Edited by: Hemant K Chitale on Sep 14, 2011 2:41 PM
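Following Hemant's advice to prioritise, one common refinement of the commands in the post is to gather only stale statistics rather than the whole schema each day. A hedged sketch (schema name from the post; options => 'GATHER STALE' relies on table monitoring, which is enabled by default from 10g onward):

```sql
BEGIN
  -- Re-gather statistics only for objects Oracle has flagged as stale,
  -- instead of scanning the whole restored schema every day.
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'FFPREPRODBATCH',
    options          => 'GATHER STALE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => DBMS_STATS.AUTO_DEGREE);
END;
/
```

With DataPump already importing statistics from the source, this restricts the daily work to the tables whose data actually changed.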