Query execution slow
Hi Experts,
I have a problem with query execution; it is taking too long to execute.
Query is like this :
SELECT gcc_po.segment1 bc,
gcc_po.segment2 rc,
gcc_po.segment3 dept,
gcc_po.segment4 ACCOUNT,
gcc_po.segment5 product,
gcc_po.segment6 project,
gcc_po.segment7 tbd,
SUBSTR (pv.vendor_name, 1, 50) vendor_name,
pv.vendor_id,
NVL (ph.closed_code, 'OPEN') status,
ph.cancel_flag,
ph.vendor_site_id,
ph.segment1 po_number,
ph.creation_date po_creation_date,
pv.segment1 supplier_number,
pvsa.vendor_site_code,
ph.currency_code po_curr_code,
ph.blanket_total_amount,
NVL (ph.rate, 1) po_rate,
SUM (DECODE (:p_currency,
             'FUNCTIONAL', DECODE (:p_func_curr_code,
                                   ph.currency_code, NVL (pd.amount_billed, 0),
                                   NVL (pd.amount_billed, 0) * NVL (ph.rate, 1)),
             NVL (pd.amount_billed, 0))) amt_vouchered,
ph.po_header_id poheaderid,
INITCAP (ph.attribute1) po_type,
DECODE (ph.attribute8,
        'ARIBA', DECODE (ph.attribute4,
                         NULL, ph.attribute4,
                         ppf.full_name),
        ph.attribute4) originator,
ph.attribute8 phv_attribute8,
UPPER (ph.attribute4) phv_attribute4
FROM po_headers ph,
po_vendors pv,
po_vendor_sites pvsa,
po_distributions pd,
gl_code_combinations gcc_po,
per_all_people_f ppf
WHERE ph.segment1 BETWEEN '001002' AND 'IND900714'
AND ph.vendor_id = pv.vendor_id(+)
AND ph.vendor_site_id = pvsa.vendor_site_id
AND ph.po_header_id = pd.po_header_id
AND gcc_po.code_combination_id = pd.code_combination_id
AND pv.vendor_id = pvsa.vendor_id
AND UPPER (ph.attribute4) = ppf.attribute2(+) -- no index on attributes
AND ph.creation_date BETWEEN ppf.effective_start_date(+) AND ppf.effective_end_date(+)
GROUP BY gcc_po.segment1,-- no index on segments
gcc_po.segment2,
gcc_po.segment3,
gcc_po.segment4,
gcc_po.segment5,
gcc_po.segment6,
gcc_po.segment7,
SUBSTR (pv.vendor_name, 1, 50),
pv.vendor_id,
NVL (ph.closed_code, 'OPEN'),
ph.cancel_flag,
ph.vendor_site_id,
ph.segment1,
ph.creation_date,
pvsa.attribute7,
pv.segment1,
pvsa.vendor_site_code,
ph.currency_code,
ph.blanket_total_amount,
NVL (ph.rate, 1),
ph.po_header_id,
INITCAP (ph.attribute1),
DECODE (ph.attribute8,
        'ARIBA', DECODE (ph.attribute4,
                         NULL, ph.attribute4,
                         ppf.full_name),
        ph.attribute4),
ph.attribute8,
ph.attribute4

Without the SUM function and GROUP BY, its execution is fast; if I use the SUM function and GROUP BY, it takes nearly 45 minutes.
Explain plan for this:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=ALL_ROWS 1 6364
HASH GROUP BY 1 272 6364
NESTED LOOPS OUTER 1 272 6363
NESTED LOOPS 1 232 6360
NESTED LOOPS 1 192 6358
NESTED LOOPS 1 171 6341
HASH JOIN 1 K 100 K 2455
TABLE ACCESS FULL PO_VENDOR_SITES_ALL 1 K 36 K 1683
TABLE ACCESS FULL PO_VENDORS 56 K 3 M 770
TABLE ACCESS BY INDEX ROWID PO_HEADERS_ALL 1 82 53
INDEX RANGE SCAN PO_HEADERS_N1 69 2
TABLE ACCESS BY INDEX ROWID PO_DISTRIBUTIONS_ALL 1 21 17
INDEX RANGE SCAN PO_DISTRIBUTIONS_N3 76 2
TABLE ACCESS BY INDEX ROWID GL_CODE_COMBINATIONS 1 40 2
INDEX UNIQUE SCAN GL_CODE_COMBINATIONS_U1 1 1
TABLE ACCESS BY INDEX ROWID PER_ALL_PEOPLE_F 1 40 3
INDEX RANGE SCAN PER_PEOPLE_F_ATT2 2 1

Please give me a solution for this. Which hints should I use in this query?
Thanks in advance.
I have a feeling this will lead us nowhere, but let me try for the last time.
Tuning a query is not about trying out all available index hints, because there must be one that makes the query fly. It is about diagnosing the query. See what it does and see where time is being spent. Only after you know where time is being spent, then you can effectively do something about it (if it is not tuned already).
So please read about explain plan, SQL*Trace and tkprof, and start diagnosing where your problem is.
Regards,
Rob.
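A hedged sketch of the diagnosis Rob describes (the trace-file identifier and file names are illustrative, not from the thread): trace the session while it runs the slow query, then format the raw trace with tkprof to see where the time actually goes.

```sql
-- Tag the trace file so it is easy to find, then enable extended
-- SQL trace (event 10046 at level 8 also records wait events).
ALTER SESSION SET tracefile_identifier = 'slow_po_query';
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the slow query here ...

ALTER SESSION SET events '10046 trace name context off';
```

Then, on the database server (paths and instance name are assumptions):

tkprof <instance>_ora_<pid>_slow_po_query.trc slow_po_query.txt sys=no sort=prsela,exeela,fchela

The tkprof report shows per-statement parse/execute/fetch times, row counts per plan step, and wait events, which tells you where the 45 minutes are actually being spent.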
Similar Messages
-
Query execution slow on Production but fine on test DB
Oracle database : 11.1.0.7.0
Hi all ,
There is a query which takes more than one hour to execute on production but executes within 4 minutes on the test database.
I generate the statspack report on both and this is what it says:
On Production(taken at a duration of 13 min)
Instance CPU
~~~~~~~~~~~~ % Time (seconds)
Host: Total time (s): 12,980.
Host: Busy CPU time (s): 836.4
% of time Host is Busy: 6.4
Instance: Total CPU time (s): 820.0
% of Busy CPU used for Instance: 98.0
Instance: Total Database time (s): 823.3
%DB time waiting for CPU (Resource Mgr): 0.0
Virtual Memory Paging
~~~~~~~~~~~~~~~~~~~~~
KB paged out per sec: 952,086,497.1
KB paged in per sec: ##############
Instance Activity Stats DB/Inst: ABDCRS/abdcrs Snaps: 6-7
Statistic Total per Second per Trans
buffer is not pinned count 73,698,340 90,761.5 383,845.5
buffer is pinned count 2,115,542,366 2,605,347.7 ############
bytes received via SQL*Net from c 439,101 540.8 2,287.0
bytes sent via SQL*Net to client 223,265 275.0 1,162.8
calls to get snapshot scn: kcmgss 54,195 66.7 282.3
calls to kcmgas 1,316 1.6 6.9
calls to kcmgcs 129 0.2 0.7
cell physical IO interconnect byt 432,079,872 532,118.1 2,250,416.0
change write time 62 0.1 0.3
concurrency wait time 14 0.0 0.1
consistent changes 843 1.0 4.4
consistent gets 77,570,007 95,529.6 404,010.5
consistent gets - examination 40,685 50.1 211.9
consistent gets direct 0 0.0 0.0
consistent gets from cache 77,570,007 95,529.6 404,010.5
consistent gets from cache (fastp 77,523,523 95,472.3 403,768.4
cursor authentications 0 0.0 0.0
opened cursors cumulative 49,290 60.7 256.7
On test Database (taken at a interval of one and a half min)
Instance CPU
~~~~~~~~~~~~ % Time (seconds)
Host: Total time (s): 134.0
Host: Busy CPU time (s): 37.1
% of time Host is Busy: 27.7
Instance: Total CPU time (s): 26.5
% of Busy CPU used for Instance: 71.5
Instance: Total Database time (s): 100.5
%DB time waiting for CPU (Resource Mgr): 0.0
Virtual Memory Paging
~~~~~~~~~~~~~~~~~~~~~
KB paged out per sec: 6.8
KB paged in per sec: 26.1
Instance Activity Stats DB/Inst: ABDCRS/abdcrs Snaps: 2-3
Statistic Total per Second per Trans
buffer is not pinned count 799,850 11,762.5 49,990.6
buffer is pinned count 458,511 6,742.8 28,656.9
bytes received via SQL*Net from c 888,978 13,073.2 55,561.1
bytes sent via SQL*Net to client 5,980,608 87,950.1 373,788.0
calls to get snapshot scn: kcmgss 245,953 3,617.0 15,372.1
calls to kcmgas 818 12.0 51.1
concurrency wait time 2 0.0 0.1
consistent changes 7 0.1 0.4
consistent gets 1,037,292 15,254.3 64,830.8
consistent gets - examination 421,021 6,191.5 26,313.8
consistent gets direct 96,012 1,411.9 6,000.8
consistent gets from cache 941,280 13,842.4 58,830.0
consistent gets from cache (fastp 358,400 5,270.6 22,400.0
current blocks converted for CR 0 0.0 0.0
opened cursors cumulative 239,029 3,515.1 14,939.3
Now, as you can see, the value for "bytes sent via SQL*Net to client" on test is very high for a one-minute window, but on prod it is very low even for a 13-minute window. Also, the value for consistent gets on prod is extremely high compared to the test database, and the test DB is doing more executions in one minute than the prod database does in 15 minutes. These are the major differences between prod and test. Could this be the reason for slow execution of the query on prod, and how do I fix it?
Also, the value for opened cursors cumulative is high on test but low on prod. I don't have the AWR report as the database is Standard Edition. Looking forward to your reply.
Thanks
Saurav

Please use code tags to make your post more readable - http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ
When your query takes too long:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long ...
HTH
Srini -
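To pin down where the extra consistent gets come from, one hedged approach (the views are standard, but the text filter is a placeholder you would adapt) is to compare the statement's own statistics in V$SQL on both databases; a much higher gets-per-execution on production usually means a different execution plan there.

```sql
SELECT sql_id,
       plan_hash_value,                              -- differs when the plan differs
       executions,
       buffer_gets  / NULLIF(executions, 0)          AS gets_per_exec,
       elapsed_time / NULLIF(executions, 0) / 1000000 AS secs_per_exec
  FROM v$sql
 WHERE sql_text LIKE 'SELECT %distinctive fragment of the query%';
```

If plan_hash_value differs between the two systems, compare the plans and the optimizer statistics rather than the host metrics.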
Query execution slow for the first time...
I am new to Oracle sql.
I have a query whose performance is very slow the first time, but on subsequent executions it's fast. When executed for the first time it takes around 45 seconds; on subsequent executions, 600 milliseconds.
Is there a specific reason for this to happen? I am calling this query from my Java code using a prepared statement.

Are the differences in queries solely in the WHERE clause? If so, can you parameterize the query and use bind variables instead, so the only difference from one query to the next is the values of the bind variables? Using bind variables in your queries will enable the parser to reuse the already parsed queries even when the bound values differ.
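The bind-variable suggestion can be sketched in SQL*Plus (table and column names are taken from a later post in this digest and are illustrative): the statement text stays identical across executions, so the second run reuses the cursor parsed by the first.

```sql
VARIABLE p_pk NUMBER

EXEC :p_pk := 42
SELECT a.*
  FROM a, b, c
 WHERE a.x = b.x AND b.y = c.y AND c.pk = :p_pk;

-- New value, identical statement text: soft parse only, no re-optimization.
EXEC :p_pk := 99
SELECT a.*
  FROM a, b, c
 WHERE a.x = b.x AND b.y = c.y AND c.pk = :p_pk;
```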
Also there may be other optimizations that can be made to either your query or the tables that it is querying against to improve your performance. To be able to improve your queries performance you need to understand how it's accessing the database.
See Rob's thread on query optimization [When your query takes too long |http://forums.oracle.com/forums/thread.jspa?threadID=501834&start=0&tstart=0] for a primer on optimizing your query. -
Hi,
I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
The query executes within a second from RapidSQL. The problem I'm facing is it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions, it executes properly.
The query:
SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
FROM MyTable
WHERE SomeDate= date_entered_by_user AND SomeString IN ("aaa","bbb")
GROUP BY aaa, bbb

I have an existing clustered index on the SomeDate and SomeString fields.
To check I replaced the where clause with
WHERE SomeDate= date_entered_by_user AND SomeString = "aaa"

No improvements.
What could be the problem?
Thank you,
Lobo

It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created, it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT/INSERT/UPDATE/DELETE statements and create the plan over and over again.
The stored execution plan will enable the engine to execute the query faster.
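A minimal sketch of the stored-procedure idea, assuming the table and columns from the earlier post (the procedure name and parameter names are hypothetical): the SELECT inside is parsed and optimized once, and the cached cursor is reused on later calls because the parameter acts as a bind variable.

```sql
CREATE OR REPLACE PROCEDURE get_sums (
    p_date IN  DATE,
    p_ccc  OUT NUMBER,
    p_ddd  OUT NUMBER
) AS
BEGIN
    -- p_date is a bind variable here, so one shared cursor serves all calls
    SELECT SUM(ccc), SUM(ddd)
      INTO p_ccc, p_ddd
      FROM MyTable
     WHERE SomeDate = p_date;
END get_sums;
/
```

Note that plain SQL issued with bind variables gets much the same cursor reuse; the procedure mainly guarantees that every caller uses the identical statement text.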
-
Hi Friends ,
I am using an 11.2.0.3.0 Oracle DB. We have a query which runs smoothly on Live, but the same query runs slow on the staging environment. The data is pulled from Live to staging using Golden Gate, and not all columns are refreshed.
Can you please help me tune this query, or let me know what can best be done for this query to run as it does in the Live environment?
Regards,
DBApps

Hi,
This is a general type of question; please be specific. A golden rule of thumb is: don't use '*'; use the column names instead. Analyze the table, take an execution plan, and check for index usage.
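The "analyze the table" advice above might look like this, as a hedged sketch (owner and table names are placeholders); DBMS_STATS is the supported way to refresh optimizer statistics:

```sql
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'APP_OWNER',
        tabname    => 'SOME_TABLE',
        cascade    => TRUE,                          -- gather index stats too
        method_opt => 'FOR ALL COLUMNS SIZE AUTO');  -- histograms where useful
END;
/
```

On a Golden Gate target where not all columns are refreshed, stale or missing statistics on the copied tables are a common reason the same query picks a worse plan than on the source.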
Please give the problem statement also so that we can help you. -
Clustering of SQL query execution times
In doing some query execution experiments I have noted a curious (to me, anyhow) clustering of execution times around two distinct points. Across about 100 tests each running 1000 queries using (pseudo-)randomly generated IDs the following pattern emerges. The queries were run from Java using all combinations of pooled/non-pooled and thin/oci driver combinations:
100 *
90 *
R 80 *
u 70 *
n 60 *
s 50 *
40 * *
30 * *
20 * * * *
10 * * * * * *
0 100 200 300 400 500 600 700 800 900 1000 1100 1200
Time(ms)

About half of the total execution times cluster strongly around a given (short) time value, with a smaller but broader clustering at a significantly slower mark and zero intermediate values. The last point is the one I find most curious.
What I would have expected is something like this:
100
90
R 80
u 70
n 60
s 50
40 *
30 * * *
20 * * * * * *
10 * * * * * * * * * *
0 100 200 300 400 500 600 700 800 900 1000 1100 1200
Time(ms)

The variables I have tentatively discounted thus far:
-query differences (single query used)
-connection differences (using single pooled connection)
-garbage collection (collection spikes independent of query execution times)
-amount of data returned in bytes (single varchar2 returned and size is independent of execution time)
-driver differences (thin and oci compared, overall times differ but pattern of clustering remains)
-differences between Statement and PreparedStatement usage (both show same pattern)
I know this is a rather open-ended question, but does the described pattern seem familiar or spark any thoughts?
DB-side file I/O?
Thread time-slicing variations (client or DB-side)?
FWIW, the DB is 9.2.0.3 DB and the clients are running on WinXP with Java 5.0 and 9i drivers.
Thanks and regards,
M

Further context:

Are your queries only SELECT queries?
Yes, the same SELECT query is used for all tests. The only variable is the bind variable used to identify the primary key of the selection set (i.e. SELECT a.* FROM a, b, c WHERE a.x = b.x AND b.y = c.y AND c.pk = ?), where all PKs and FKs are indexed.

Do the queries always use the same tables, the same where clauses?
Yes, the same tables are always invoked. The where clauses invoked are identical with the exception of the single bind variable, as described above.

Do your queries always use bind variables?
A single bind variable is used in all invocations, as described above.

Are your queries running in single-user mode or multi-user mode (do you use SELECT FOR UPDATE)?
We are not using SELECT FOR UPDATE.

Did something else run on the database/on the server hosting the database at the same time?
I have not eliminated the idea, but the test has been repeated roughly 100 times over the course of a week and at different times of day with the same pattern emerging. I suppose it is not out of the question that a resource-hogging process is running consistently and constantly on the DB-side box.

Thanks for the input,
M -
Does anybody have an idea why this query sometimes runs very slow?
But only sometimes!
Thanks for any ideas on this.
SQL> SELECT a, b
2 FROM X_TAB
3 WHERE A_KL = '2ABRY'
4 AND A_JL = '2ABRY'
5 AND YEAR = 2006
6 AND ID_DB = 906858
7 ORDER BY a;
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=41)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'X_TAB' (TABLE) (Cost=1 Card=1 Bytes=41)
2 1 INDEX (RANGE SCAN) OF 'ACG_UK3' (INDEX (UNIQUE)) (Cost=1 Card=3)
Statistics
0 recursive calls
0 db block gets
13 consistent gets
0 physical reads
0 redo size
3555 bytes sent via SQL*Net to client
295 bytes received via SQL*Net from client
5 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
58 rows processed

Definition of index:
CREATE UNIQUE INDEX acg_uk3 ON x_tab (
ID_DB ASC,
A_KL ASC,
YEAR ASC,
A ASC
PCTFREE 10
INITRANS 2
MAXTRANS 255
TABLESPACE xx000_w
STORAGE (
INITIAL 131072
NEXT 131072
PCTINCREASE 0
MINEXTENTS 1
MAXEXTENTS 2147483645
)

Should we reconfigure this index, or does someone see a problem somewhere else?

"Have anybody idea why this query sometimes running very slow? But only sometimes!"

It seems to me there is some process that degrades your query's execution: a wait event occurs because something is being processed, and other processes that depend on the result of the event causing the wait must wait for its completion. When one process waits for another to complete, a wait event is recorded in the database statistics.
For detailed info:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#i18202

Khurram
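Khurram's point about wait events can be checked directly; a hedged sketch (standard dynamic performance views; the wait_class column assumes 10g or later) listing the non-idle events the instance has spent the most time on:

```sql
SELECT *
  FROM (SELECT event, total_waits, time_waited
          FROM v$system_event
         WHERE wait_class <> 'Idle'
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 10;
```

For a single intermittently slow execution, V$SESSION_EVENT (the same columns per session) narrows the waits down to the session actually running the query.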
SELECT SUM(A.NO_MONTH_CONSUMPTION),SUM(A.BASE_CONSUMPTION),SUM(A.CURRENT_DOC_AMT),SUM(A.CUR_TAX),SUM(B.CURRENT_DOC_AMT)
FROM VW_x A,(SELECT CURRENT_DOC_AMT,DOC_NO
FROM VW_y B
WHERE NVL(B.VOID_STATUS,0)=0 AND B.TR_TYPE_CODE='SW' AND B.BPREF_NO=:B4 AND B.SERVICE_CODE=:B3 AND B.BIZ_PART_CODE=:B2 AND B.CONS_CODE=:B1 ) B
WHERE A.BPREF_NO=:B4 AND A.SERVICE_CODE=:B3 AND A.BIZ_PART_CODE=:B2 AND A.CONS_CODE=:B1 AND A.BILL_MONTH >:B5 AND NVL(A.VOID_STATUS,0)=0 AND NVL(A.AVG_IND,0)= 2 AND A.DOC_NO=B.DOC_NO(+)
The above view "VW_x" has around 40 million records from two tables, and the avg_ind column has only the values 0 and 2. I created a function-based index on both tables, something like CREATE INDEX ... ON x1 (NVL(avg, 0)).
TRACE OUT PUT
STATISTICS
15 recursive calls
0 db block gets
18 consistent gets
4 physical reads
0 redo size
357 bytes sent via SQL*Net to client
252 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
but still the query is slow... Please suggest the best practice to make it fast.
Thanks

Hi, sorry, I was out of office for a while. Please check the execution plan for my query.
Below query i am calling in a procedure passing the parameters
While I execute the query separately it works fine, but when I call it in a procedure, and the procedure has a loop which checks around 400,000 records, that's where I get the problem.
select sum(a.no_month_consumption),sum(a.base_consumption),sum(a.current_doc_amt),sum(a.cur_tax),sum(b.current_doc_amt)
--into vnomonths,vcons,vconsamt,vtaxamt,vsewage
from bill_View a,(select current_doc_amt,doc_no from dbcr_View b where nvl(b.void_status,0)=0 and b.tr_type_code='SWGDBG' and b.bpref_no='Q12345' and b.service_code='E' and b.biz_part_code='MHEW') b
where a.bpref_no='Q12345' and a.service_code='E' and a.biz_part_code='MHEW'
and a.bill_month >'30-aPR-2011' and nvl(a.void_status,0)=0 and decode(a.avg_ind,null,0,a.avg_ind)= 2
and a.doc_no=b.doc_no(+);
I created a function-based index on the avg_ind column (nvl(avg_ind,0)).
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=77 Card=1 Bytes=93)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (OUTER) (Cost=77 Card=4 Bytes=372)
3 2 VIEW OF 'VW_IBS_BILL' (VIEW) (Cost=54 Card=3 Bytes=198)
4 3 UNION-ALL
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_BILL' (TABLE) (Cost=8 Card=1 Bytes=50)
6 5 INDEX (RANGE SCAN) OF 'STBILL_BPREF_NO' (INDEX) (Cost=3 Card=5)
7 4 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_BILL' (TABLE) (Cost=46 Card=2 Bytes=114)
8 7 INDEX (RANGE SCAN) OF 'XTBILL' (INDEX) (Cost=3 Card=43)
9 2 VIEW OF 'VW_IBS_DBCR' (VIEW) (Cost=22 Card=4 Bytes=108)
10 9 UNION-ALL
11 10 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_T_DBCR' (TABLE) (Cost=2 Card=1 Bytes=54)
12 11 INDEX (RANGE SCAN) OF 'TDBCR_BPREFNO' (INDEX) (Cost=1 Card=1)
13 10 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_DBCR' (TABLE) (Cost=7 Card=1 Bytes=43)
14 13 INDEX (RANGE SCAN) OF 'STDBCR_BPREFNO' (INDEX) (Cost=3 Card=4)
15 10 TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_DBCR' (TABLE) (Cost=13 Card=2 Bytes=88)
16 15 INDEX (RANGE SCAN) OF 'XTDBCR' (INDEX) (Cost=3 Card=11)
What are the Card and Cost attributes in the above output?
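Separately, a hedged observation on the function-based index above: the index was created on NVL(avg_ind, 0), but one form of the query filters with DECODE(a.avg_ind, NULL, 0, a.avg_ind) = 2. The optimizer matches function-based indexes by expression, so the DECODE form cannot use the NVL index even though the two expressions are logically equivalent. A sketch (the index name is illustrative; the table name is taken from the plan):

```sql
-- The index expression and the predicate expression must match.
CREATE INDEX bill_avg_ind_fbi ON ibs_s_t_bill (NVL(avg_ind, 0));

-- Usable:      WHERE NVL(avg_ind, 0) = 2
-- Not usable:  WHERE DECODE(avg_ind, NULL, 0, avg_ind) = 2
```

After creating a function-based index, gathering statistics on the table (so the hidden column behind the expression gets stats) helps the optimizer cost it correctly.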
Query of query - running slower on 64 bit CF than 32 bit CF
Greetings...
I am seeing behavior where pages that use query-of-query run slower on 64-bit Coldfusion 9.01 than on 32-bit Coldfusion 9.01.
My server specs are : dual processer virtual machine, 4 GIG ram, Windows 2008 Datacenter Server r2 64-bit, Coldfusion 9.01. Note that the coldfusion is literally "straight out of the box", and is using all default settings - the only thing I configured in CF is a single datasource.
The script I am using to benchmark this runs a query that returns 20,000 rows with fields id, firstname, lastname, email, city, datecreated. I then loop through all 20,000 records, and for each record, I do a query-of-query (on the same master query) to find any other record where the lastname matches that of the record I'm currently on. Note that I'm only interested in using this process for comparative benchmarking purposes, and I know that the process could be written more efficiently.
Here are my observed execution times for both 64-bit and 32-bit Coldfusion (in seconds) on the same machine.
64 bit CF 9.01: 63,49,52,52,52,48,50,49,54 (avg=52 seconds)
32 bit CF 9.01: 47,45,43,43,45,41,44,42,46 (avg=44 seconds)
It appears from this that 64-bit CF performs worse than 32-bit CF when doing query-of-query operations. Has anyone made similar observations, and is there any way I can tune the environment to improve 64 bit performance?
Thanks for any help you can provide!
By the way, here's the code that is generating these results:
<!--- Allrecs query returns 20000 rows --->
<CFQUERY NAME="ALLRECS" DATASOURCE="MyDsn">
SELECT * FROM MyTBL
</CFQUERY>
<CFLOOP QUERY="ALLRECS">
<CFQUERY NAME="SAMELASTNAME" DBTYPE="QUERY">
SELECT * FROM ALLRECS
WHERE LN=<CFQUERYPARAM VALUE="#ALLRECS.LN#" CFSQLTYPE="CF_SQL_VARCHAR">
AND ID<><CFQUERYPARAM VALUE="#AllRecs.ID#" CFSQLTYPE="CF_SQL_INTEGER">
</CFQUERY>
<CFIF SameLastName.RecordCount GT 20>
#AllRecs.LN#, #AllRecs.FN# : #SameLastName.RecordCount# other records with same lastname<BR>
</CFIF>
</CFLOOP>

BoBear2681 wrote:
..follow-up: ..

Thanks for the follow-up. I'll be interested to hear the progress (or otherwise, as the case may be).
As an aside. I got sick of trying to deal with Clip because it could only handle very small Clip sizes. AFAIR it was 1 second of 44.1 KHz stereo. From that point, I developed BigClip.
Unfortunately BigClip as it stands is even less able to fulfil your functional requirement than Clip, in that only one BigClip can be playing at a time. Further, it can be blocked by other sound applications (e.g. VLC Media Player, Flash in a web page..) or vice-versa. -
Issue while query execution on web analyser.
Hi,
I am getting an error message while executing a query on the web, i.e. "Record set too large, data retrieval restricted by configuration". I am able to run the same query in BEx Analyzer without any issue. Any idea what could be the reason and solution for this issue?
Regards,
Neetika.

Hi Neetika,
The query is exceeding the set limits. I suggest you reduce the timeline for the query, as it may have a large number of cells in terms of rows and columns.
Execute the query for fewer days; if you are executing it for one month, execute it for 10 days.
Rgds
SVU123 -
I am new to SSRS and I am trying to migrate reports from 2008 to 2012. As I have so many reports to migrate, I simply took a backup of ReportServer, ReportServerTempDB, and the Encryption Key and restored them to the test environment. I made the necessary configuration from the RS configuration tool. I am able to see the reports now when I browse //hostname/reports. But when I open any particular report I get an error:
· An error has occurred during report processing. (rsProcessingAborted)
· Query execution failed for dataset 'dataSet'. (rsErrorExecutingCommand)
· Semantic query execution failed. Invalid object name 'RPT. ******'. (rsSemanticQueryEngineError)
****** - I am assuming this is a custom data class.
Does anyone have insight on this? Or is there a better way that I can migrate the reports to the new server with less effort?
I don't have the reports solution file to deploy the reports, so I have followed the backup and restore process.

Hi Kishore237,
According to your description, you migrated some reports from Reporting Services (SSRS) 2008 to 2012. Now you get error when accessing the reports on SSRS 2012. Right?
In this scenario, did you modify the report data source in database after migration? You can try to open the report in Report Builder or Report designer and check the report dataset. If you can preview the report in Report builder or Report designer,
please try to redeploy the report to Report Server. If it is still not working, please try to restore the database from backup. And for migrating reports, please follow the "Content-Only Migration" in the link below:
http://msdn.microsoft.com/en-us/library/ms143724(v=sql.110).aspx
If you have any question, please feel free to ask.
Best Regards,
Simon Hou -
Unable to select the filter value after query execution
hi,
I am unable to drill down my key figures.
1. I have LC and GC values, of which GC is hidden. After the query execution I would like to filter my values between GC and LC, but I only get the LC value in the filter; I do not get the GC value to select.
Could anyone tell me how this can be done? This is very urgent.
Thx
Subha

Resolved on my own, so I am closing this.
Thx
Subha -
Asset query execution performance after upgrade from 4.6C to ECC 6.0+EHP4
Hi,guys
I have encountered a weird problem with asset query execution performance after upgrading to ECC 6.0.
Our client migrated their SAP system from 4.6C to ECC 6.0. We tested all transaction codes and related standard reports and queries.
Everything works normally except this asset depreciation query report. It is created based on the ANLP, ANLZ, ANLA, ANLB, and ANLC tables; there is also some ABAP code for additional fields.
This report execution took about 6 minutes in the 4.6C system; however, it takes 25 minutes in ECC 6.0 with the same selection parameters.
At first, I tried to find differences in table indexes and structure between 4.6C and ECC 6.0, but there is no difference.
I am wondering why the other query reports run normally but only this report runs with such a long execution time and dump messages, even though we did not make any changes to it.
your reply is very appreciated
Regards
Brian

Thanks for your replies.
I checked these notes; unfortunately, they describe a different situation from ours.
Our situation is that all standard asset reports and queries (SQ01) run normally except this query report.
I executed SE30 for this query (SQ01) on both 4.6C and ECC 6.0.
I found there is some difference in the select sequence logic, even though it is the same query without any changes.
I list them here for your reference.
4.6C
AQA0FI==========S2============
Open Cursor ANLP 38,702 39,329,356 = 39,329,356 34.6 AQA0FI==========S2============ DB Opens
Fetch ANLP 292,177 30,378,351 = 30,378,351 26.7 26.7 AQA0FI==========S2============ DB OpenS
Select Single ANLC 15,012 19,965,172 = 19,965,172 17.5 17.5 AQA0FI==========S2============ DB OpenS
Select Single ANLA 13,721 11,754,305 = 11,754,305 10.3 10.3 AQA0FI==========S2============ DB OpenS
Select Single ANLZ 3,753 3,259,308 = 3,259,308 2.9 2.9 AQA0FI==========S2============ DB OpenS
Select Single ANLB 3,753 3,069,119 = 3,069,119 2.7 2.7 AQA0FI==========S2============ DB OpenS
ECC 6.0
Perform FUNKTION_AUSFUEHREN 2 358,620,931 355
Perform COMMAND_QSUB 1 358,620,062 68
Call Func. RSAQ_SUBMIT_QUERY_REPORT 1 358,569,656 88
Program AQIWFI==========S2============ 2 358,558,488 1,350
Select Single ANLA 160,306 75,576,052 = 75,576,052
Open Cursor ANLP 71,136 42,096,314 = 42,096,314
Select Single ANLC 71,134 38,799,393 = 38,799,393
Select Single ANLB 61,888 26,007,721 = 26,007,721
Select Single ANLZ 61,888 24,072,111 = 24,072,111
Fetch ANLP 234,524 13,510,646 = 13,510,646
Close Cursor ANLP 71,136 2,017,654 = 2,017,654
We can see it first opens a cursor on ANLP and fetches ANLP, then selects from ANLC, ANLA, ANLZ, ANLB in 4.6C.
But in ECC 6.0 it changed to first select ANLA, then open the cursor on ANLP, then select ANLC, ANLB, ANLZ, and at last fetch ANLP.
Probably this is the real reason why it runs for a long time in ECC 6.0.
Were there any changes to the query selection logic (table join handling) in ECC 6.0?
Hi,
I'm having issues with the report created using SSAS cube.
An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for dataset 'DimUserWorkCentre'. (rsErrorExecutingCommand)
The Operator_Performance cube either does not exist or has not been processed.
I have searched through internet and tried all the solutions, but didn't worked for me.
SSRS services are running as the NETWORK SERVICE user.
SSRS execution is running as a different user, which is the login used to log on to that server. I have also verified this user has access to the database. I'm using a shared data source (SSAS source) for this report.
Can anyone please help me?
Thank You,
Praveen.
Praveen

Hello,
Have you tried to execute it in Report Manager? Is your data source properly configured in Report Manager, and is your report mapped to the dataset correctly?
Have you executed the dataset query in the MDX editor?
What is the volume of data you are fetching in the report? Try executing it in a browser other than IE. I don't know the exact reason, but some of our reports with a large volume of data fail in IE, while the same reports run fine in Google Chrome.
blog:My Blog/
Hope this will help you !!!
Sanjeewan -
Dear SCN,
I am new to the BOBJ environment. I have created a WebI report on top of a BEx query using a BICS connection. The BEx query is built for Vendor Ageing Analysis. The BEx query takes very little time to execute the report (max 1 min), but the WebI report takes around 5 min when I click refresh. I have not used any conditions, filters, or restrictions at the WebI level; all are done at the BEx level only.
Please let me know techniques to optimize the query execution time in webi. Currently we are in BO 4.0.
Regards,
PRK

Hi Praveen,
Go through this document for performance optimization using BICS connection
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&…