Stored Procedure Performance Issue in SharePoint Report
We have a report stored procedure that runs in about 1 minute in production; however, when executed from the SharePoint report it populates, it runs for over 20 minutes with exactly the same parameters. We've tried everything to avoid parameter-sniffing problems:
used WITH RECOMPILE on the procedure declaration; added OPTION (OPTIMIZE FOR UNKNOWN) to the queries; and even created local variables, assigned the parameter values to them, and used those throughout the procedure. None of these has had any effect.
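For reference, the three mitigations described above look roughly like this in one procedure (a minimal sketch; the procedure, table, column, and parameter names are hypothetical, not taken from the actual report):

```sql
-- Hypothetical procedure illustrating the three parameter-sniffing
-- mitigations mentioned above (all names are made up for illustration).
CREATE PROCEDURE dbo.usp_MonthEndReport
    @StartDate date,
    @EndDate   date
WITH RECOMPILE            -- 1) recompile the whole procedure on every call
AS
BEGIN
    -- 3) copy parameters into local variables so the optimizer
    --    cannot sniff the caller's specific values
    DECLARE @LocalStart date = @StartDate,
            @LocalEnd   date = @EndDate;

    SELECT SUM(Amount)
    FROM dbo.Transactions
    WHERE PostedDate BETWEEN @LocalStart AND @LocalEnd
    OPTION (OPTIMIZE FOR UNKNOWN);   -- 2) plan for average density
END;
```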
I know that the 1-minute baseline run time is itself a big red flag, and we're working on pre-aggregating this data into our BI platform, but that's months down the line, and this report is needed for month-end close every month.
Other relevant information:
Sharepoint 2013 on SQL Server 2012
Data Source is a SQL Server 2008 R2 database
Report definition created in Visual Studio 2008
I'm trying to figure out why it's taking so long. When I query ExecutionLog3, I see the following (converted to seconds):
Total Seconds       653.966
Retrieval Seconds   653.933
Processing Seconds  0.018
Rendering Seconds   0.015
ByteCount           15037
RowCount            1
Yet running in SSMS, it completes (consistently) in 62 seconds.
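For anyone reproducing this, the figures above can be pulled with a query along these lines against the report server catalog (a sketch; the ItemPath filter is a placeholder, not the actual report name):

```sql
-- Sketch: per-execution timings from the SSRS ExecutionLog3 view,
-- converted from milliseconds to seconds. The report path is hypothetical.
SELECT TOP (10)
    TimeStart,
    TimeDataRetrieval / 1000.0 AS RetrievalSeconds,
    TimeProcessing    / 1000.0 AS ProcessingSeconds,
    TimeRendering     / 1000.0 AS RenderingSeconds,
    (TimeDataRetrieval + TimeProcessing + TimeRendering) / 1000.0
                               AS TotalSeconds,
    ByteCount,
    [RowCount]
FROM ReportServer.dbo.ExecutionLog3
WHERE ItemPath LIKE '%MyMonthEndReport%'   -- placeholder name
ORDER BY TimeStart DESC;
```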
Similar Messages
-
Can someone help me diagnose a strange stored procedure performance issue please?
I have a stored procedure (posted below) that returns message recommendations based upon the Yammer networks you have selected. If I choose one network, this query takes less than one second; if I choose another, it takes 9-12 seconds.
/****** Object: StoredProcedure [dbo].[MessageView_GetOutOfContextRecommendations_LargeSet] Script Date: 2/18/2015 3:10:35 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[MessageView_GetOutOfContextRecommendations_LargeSet]
-- Parameters
@UserID int,
@SourceMessageID int = 0
AS
BEGIN
-- variable for @HomeNeworkUserID
Declare @HomeNeworkUserID int
-- Set the HomeNetworkID
Set @HomeNeworkUserID = (Select HomeNetworkUserID From NetworkUser Where UserID = @UserID)
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON
-- Begin Select Statement
Select Top 40 [CreatedDate],[FileDownloadUrl],[HasLinkOrAttachment],[ImagePreviewUrl],[LikesCount],[LinkFileName],[LinkType],[MessageID],[MessageSource],[MessageText],[MessageWebUrl],[NetworkID],[NetworkName],[PosterEmailAddress],[PosterFirstName],[PosterImageUrl],[PosterName],[PosterUserName],[PosterWebUrl],[RepliesCount],[Score],[SmallIconUrl],[Subjects],[SubjectsCount],[UserID]
-- From View
From [MessageView]
-- Do Not Return Any Messages That Have Been Recommended To This User Already
Where [MessageID] Not In (Select MessageID From MessageRecommendationHistory Where UserID = @UserID)
-- Do Not Return Any Messages Created By This User
And [UserID] != @UserID
-- Do Not Return The MessageID
And [MessageID] != @SourceMessageID
-- Only return messages for the Networks the user has selected
And [NetworkID] In (Select NetworkID From NetworkUser Where [HomeNetworkUserID] = @HomeNeworkUserID And [AllowRecommendations] = 1)
-- Order By [MessageScore] and [MessageCreatedDate] in reverse order
Order By [Score] desc, [CreatedDate] desc
END

The actual execution plan shows up the same in both cases. There are more messages on the slow network (2,800 versus 1,500), but the run time is ten times longer on it. Is the fact that I am doing a TOP 40 what makes it slow? My first guess was to take the ORDER BY off, but that didn't seem to make any difference.

In the execution plan, 62% of the query goes to the lookup on IX_Message.Score, which is the clustered index, so I thought this would be fast. Also, the clustered index seek for User.UserID takes 26%, which seems high for what it is doing.
I have indexes on every field that is queried on, so I am kind of at a loss as to where to go next.
It just seems strange because it is the same view being queried in both cases.
I tried to run the SQL Server Tuning Wizard but it doesn't run on Azure SQL, and my problem doesn't occur on the data in my local database.
Thanks for any guidance. I know a lot of the slowness is due to the lower-tier Azure SQL we are using; many of the performance issues weren't noticed when we were on the full SQL Server. But the other networks work extremely fast, so it has to be something to do with having more rows.
In case you need the SQL for the View that I am querying it is:
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [dbo].[MessageView]
AS
SELECT M.UserID, M.MessageID, M.NetworkID, N.Name AS NetworkName, M.Subjects, M.SubjectsCount, M.RepliesCount, M.LikesCount, M.CreatedDate, M.MessageText, M.HasLinkOrAttachment, M.Score, M.WebUrl AS MessageWebUrl, U.UserName AS PosterUserName,
U.Name AS PosterName, U.FirstName AS PosterFirstName, U.ImageUrl AS PosterImageUrl, U.EmailAddress AS PosterEmailAddress, U.WebUrl AS PosterWebUrl, M.MessageSource, M.ImagePreviewUrl, M.LinkFileName, M.FileDownloadUrl, M.LinkType, M.SmallIconUrl
FROM dbo.Message AS M INNER JOIN
dbo.Network AS N ON M.NetworkID = N.NetworkID INNER JOIN
dbo.[User] AS U ON M.UserID = U.UserID
GO
The Network table has an index on NetworkID, but it is non-clustered; I don't think that is the culprit.
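For readers hitting the same pattern, one possible supporting index for this query shape (purely an assumption added for illustration; the column choices are not from the thread) would cover the network filter and the sort:

```sql
-- Hypothetical covering index for the pattern above: filter on NetworkID,
-- order by Score DESC / CreatedDate DESC, with the other predicate
-- columns included to avoid lookups.
CREATE NONCLUSTERED INDEX IX_Message_Network_Score
ON dbo.Message (NetworkID, Score DESC, CreatedDate DESC)
INCLUDE (UserID, MessageID);
```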
Corby

I marked your response as the answer because you gave me information I didn't have about the sort. I ended up rewriting the query to use a join instead of the INs, and it improved dramatically: about one second on a very minimal Azure SQL database, where before it was 12 seconds on one network. We didn't notice the problem at all before we moved to Azure SQL; it was about one to three seconds at most.
Here is the updated way that was much more efficient:
CREATE PROCEDURE [dbo].[Procedure Name]
-- Parameters
@UserID int,
@SourceMessageID int = 0
AS
BEGIN
-- variable for @HomeNeworkUserID
Declare @HomeNeworkUserID int
-- Set the HomeNetworkID
Set @HomeNeworkUserID = (Select HomeNetworkUserID From NetworkUser Where UserID = @UserID)
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON
;With cteMessages As
(
-- Begin Select Statement
Select (Fields List)
-- Join to Network Table
From MessageView mv Inner Join NetworkUser nu on mv.NetworkID = nu.NetworkID
-- Only Return Networks This User Has Selected
Where nu.HomeNetworkUserID = @HomeNeworkUserID And AllowRecommendations = 1
-- Do Not Return Any Messages Created By This User
And mv.[UserID] != @UserID
-- Do Not Return The MessageID
And mv.[MessageID] != @SourceMessageID
), cteHistoryForThisUser As
(
Select MessageID From MessageRecommendationHistory Where UserID = @UserID
)
-- Begin Select Statement
Select Top 40 (Fields List)
From cteMessages m Left Outer Join cteHistoryForThisUser h on m.MessageID = h.MessageID
-- Do Not Return Any Items Where User Has Already Been Shown This Message
Where h.MessageID Is Null
-- An Order By Is Needed To Get The Best Content First
Order By Score Desc
END
GO
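As a footnote to the rewrite: on SQL Server, the same history exclusion can also be written with NOT EXISTS, which typically compiles to an anti-semi-join plan similar to the LEFT JOIN / IS NULL form (a sketch reusing the thread's table and variable names; the select list is elided as in the original):

```sql
-- Sketch: NOT EXISTS alternative to the LEFT JOIN / IS NULL pattern,
-- reusing the table, column, and variable names from the thread.
SELECT TOP 40 mv.*            -- stand-in for the real field list
FROM MessageView mv
INNER JOIN NetworkUser nu
        ON mv.NetworkID = nu.NetworkID
WHERE nu.HomeNetworkUserID = @HomeNeworkUserID
  AND nu.AllowRecommendations = 1
  AND mv.UserID    != @UserID
  AND mv.MessageID != @SourceMessageID
  AND NOT EXISTS (SELECT 1
                  FROM MessageRecommendationHistory h
                  WHERE h.UserID    = @UserID
                    AND h.MessageID = mv.MessageID)
ORDER BY mv.Score DESC;
```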
The Left Outer Join to test for null was the biggest improvement, but it also helped to join to the NetworkUser table instead of doing the IN subquery. -
Stored procedure performance issue in SQL Server 2005
Hi All,
I am inserting data into the database using a stored procedure in the target DB.
My source and target structures are shown below.
The source structure has a lot of rows and looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<ns0:POCA0013_KANLOG_REQUEST_MT_response xmlns:ns0="urn:com:POCA0013:sample">
<SCMDB_response>
<row>
<PROJK>O-USA</PROJK>
<KOLLO>123</KOLLO>
</row>
<row>
<PROJK>O-Denmark</PROJK>
<KOLLO>256</KOLLO>
</row>
n number of rows
</SCMDB_response>
</ns0:POCA0013_KANLOG_REQUEST_MT_response>
and after mapping my target structure is coming to like this.
<?xml version="1.0" encoding="UTF-8" ?>
<ns0:POCA0013_DB_MT xmlns:ns0="urn:pg-com POCA0013:sample">
<StatmentName>
<XI_SP_DATA action="EXECUTE">
<PROJEK isInput="TRUE" type="CHAR">O-USA</PROJEK>
<KOLLO isInput="TRUE" type="CHAR" >123</KOLLO>
</XI_SP_DATA>
</StatmentName>
<StatmentName>
<XI_SP_DATA action="EXECUTE">
<PROJEK isInput="TRUE" type="CHAR">O-Denmark</PROJEK>
<KOLLO isInput="TRUE" type="CHAR">256</KOLLO>
</XI_SP_DATA>
</StatmentName>
N number of times
</ns0:POCA0013_DB_MT>
This works perfectly for inserting the records into the database using the stored procedure: it calls the stored procedure once per record, so for example with 100 records it makes 100 calls.
But with huge data, for example 10,000 records, it calls the stored procedure 10,000 times, and that causes a problem on the database side.
We have one reason to use the stored procedure here: once the data is inserted into the table, a log entry is written with a success status if the insert succeeded, or with an error status if it failed. That is why I am using a stored procedure.
Our customer wants the stored procedure to be called just once for all records. How can I manage this situation?
Can you give me your valuable ideas about this problem?
Thank you very much.
Sateesh
Edited by: sateesh kumar .N on Apr 23, 2010 6:53 AM
Edited by: sateesh kumar .N on Apr 23, 2010 6:54 AM
Edited by: sateesh kumar .N on Apr 23, 2010 7:54 AM

Hi Sateesh,
How about a different approach?
Add two more tables to your solution. The first table is used as a staging table, where PI inserts all the data without making any checks whatsoever. The second table is used as a control table: when the insertion is finished, a log entry is inserted into it, containing the information about success or failure and how many rows were inserted. Put an insert trigger on this control table, which in turn starts a stored procedure. This stored procedure can read all the data from the staging table and put it into the desired target tables. Additionally, you can perform plausibility checks inside this SP.
Okay, I know this is a completely new solution compared to what you did before. But in my experience, this will be much more performant than 10,000 calls to a stored procedure that only does inserts as you described.
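Sven's staging-plus-control-table idea could be sketched roughly like this (all table and column names are hypothetical; the syntax shown is SQL Server 2005 T-SQL, matching the thread's title):

```sql
-- Hypothetical sketch of the staging + control-table approach:
-- PI bulk-inserts into the staging table, then writes one row to the
-- control table; a trigger on the control table processes the batch.
CREATE TABLE dbo.StagingKanlog (PROJK varchar(50), KOLLO int);
CREATE TABLE dbo.LoadControl  (LoadID int IDENTITY PRIMARY KEY,
                               Status varchar(20));
GO
CREATE TRIGGER trg_LoadControl_Insert
ON dbo.LoadControl
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based insert instead of one procedure call per row
    INSERT INTO dbo.TargetTable (PROJK, KOLLO)
    SELECT PROJK, KOLLO FROM dbo.StagingKanlog;

    -- Plausibility checks and success/error logging would go here
    TRUNCATE TABLE dbo.StagingKanlog;
END;
```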
Regards
Sven -
Bulk Insert Through Stored Procedure performance issue
Hello,
I am new to Oracle. I am writing a stored procedure through which I want to insert 1 billion records into a table, but it takes days to run. Please tell me how I can improve the performance of my stored procedure, because the same stored procedure, converted to SQL Server, takes 24-30 minutes to insert 1 billion records.
The code of my stored procedure is as follows:
create or replace PROCEDURE bspGenerateHSCode(
mLoc_id IN INT,
HSCodeStart IN VARCHAR2,
HSCodeEnd IN VARCHAR2,
mRqstId IN INT,
total_count IN INT,
Status OUT INT)
AS
ExitFlag INT;
row_count INT;
mBatchStart NUMBER;
mBatchEnd NUMBER;
mStartSqnc NUMBER;
mEndSqnc NUMBER;
mHSCode VARCHAR2(500);
HSStartStr VARCHAR2(500);
BEGIN
SELECT COUNT(*) INTO row_count FROM goap_eal_allocation
WHERE hs_code_start = HSCodeStart
AND hs_code_end = HSCodeEnd
AND loc_id = mLoc_id
AND processed = 0;
IF row_count > 0 THEN
SELECT CAST ( REVERSE(substr(REVERSE(HSCodeStart), 1, instr(REVERSE(HSCodeStart), ',') -1)) AS NUMBER) INTO mStartSqnc FROM DUAL;
SELECT CAST ( REVERSE(substr(REVERSE(HSCodeEnd), 1, instr(REVERSE(HSCodeEnd), ',') -1)) AS NUMBER) INTO mEndSqnc FROM DUAL;
SELECT CAST( REVERSE(substr( REVERSE(HSCodeStart), instr(REVERSE(HSCodeStart), ','))) AS VARCHAR2(500) ) INTO HSStartStr FROM DUAL;
mBatchStart := mStartSqnc;
DBMS_OUTPUT.PUT_LINE('start batch ' || mBatchStart);
LOOP
mBatchEnd := mBatchStart + 5000;
IF mBatchEnd > mEndSqnc THEN
mBatchEnd := mEndSqnc + 1;
END IF;
DBMS_OUTPUT.PUT_LINE('End batch ' || mBatchEnd);
LOOP
mHSCode := HSStartStr || mBatchStart;
mBatchStart := mBatchStart + 1;
INSERT INTO goap_eal_register(id, hs_code, loc_id, status_id, synced)
SELECT CASE WHEN MAX(id) > 0 THEN (MAX(id) + 1) ELSE 1 END AS id ,
mHSCode, mLoc_id, 6, 1 FROM goap_eal_register;
EXIT WHEN mBatchStart = mBatchEnd;
END LOOP;
COMMIT;
EXIT WHEN mBatchStart = mEndSqnc +1;
END LOOP;
UPDATE goap_eal_allocation SET processed = 1
WHERE hs_code_start = HSCodeStart
AND hs_code_end = HSCodeEnd
AND loc_id = mLoc_id;
COMMIT;
Status := 1;
ELSE
Status := 0;
END IF;
END;
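For what it's worth, the inner loop above inserts one row at a time and recomputes MAX(id) on every iteration, which is usually the dominant cost. A set-based sketch of the same batch (an assumption: an Oracle sequence, here called seq_goap_eal_register, replaces the MAX(id)+1 logic) might look like this inside the procedure:

```sql
-- Sketch: replace the row-at-a-time inner loop with one set-based
-- INSERT per batch. Assumes a sequence supplies ids instead of
-- SELECT MAX(id)+1, which both serializes and slows every insert.
INSERT INTO goap_eal_register (id, hs_code, loc_id, status_id, synced)
SELECT seq_goap_eal_register.NEXTVAL,
       HSStartStr || n.sqnc,
       mLoc_id, 6, 1
FROM  (SELECT mStartSqnc + LEVEL - 1 AS sqnc
       FROM dual
       CONNECT BY LEVEL <= mEndSqnc - mStartSqnc + 1) n;
```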
Thanks

Please edit your post and add \ on the line before and the line after the code to preserve formatting. See how this looks?
Also, when you basically just want to RETURN without doing any work, your first test should simply RETURN.
Instead of what your code does:

IF row_count > 0 THEN
    . . . a whole lot of code that is hard to read or understand
    COMMIT;
    Status := 1;
ELSE
    Status := 0;
END IF;
Test the condition to determine when you do NOT want to proceed, and just return:

IF row_count = 0 THEN
    Status := 0;
    RETURN;
END IF;
-- now NONE of the following code needs to be indented - you won't get here unless you really want to execute it.
. . . break the code into separate steps and add a one line comment before each step that says what that step does.
COMMIT;
Status := 1; -
Performance issue in Portal Reports
Hi
We are experiencing a serious performance issue in a report and need an urgent fix.
The report is a "Report From SQL Query" report. I need to find a way to dynamically create the WHERE clause; otherwise I have to write the statement in a way that excludes the use of indexes.
A full table scan is not a valid option here; the number of records is simply too high (several million; it's a data warehouse solution). In the Developer package we can build the WHERE clause dynamically, and this basic yet extremely important feature is essential to any database application.
We need to know how to do it, and if this functionality is not natively supported, it should be one of the priority-one features to implement in future releases.
However, what do I do for now?
Thanks in advance

I have found a temporary workaround by editing the WHERE clause in the stored procedure manually. However, this fix has to be redone every time a change is committed in the wizard, so it is still not a solution to use indefinitely, but it's OK for now.
-
Performance Issue with VL06O report
Hi,
We are having a performance issue with the VL06O report when it is run with a forwarding agent: it takes about an hour. The issue is with the VBPA table, and we found one OSS note, but it is for old versions; ours is ECC 5.0. Does anybody know the solution? If you need more information, please ask.
Thanks,
Surya

Sreedhar,

Thanks for your quick response. Indexes were not created for the VBPA table. The Basis people tested by creating indexes and reported that the query takes more time with the indexes than with the regular query optimizer. This is happening in the function forward_ag_selection.
select vbeln lifnr from vbpa
appending corresponding fields of table lt_select
where vbeln in ct_vbeln
and posnr eq posnr_initial
and parvw eq 'SP'
and lifnr in it_spdnr.
I don't see any issue with this query. I'll give more info later. -
Calling stored procedure from a column in report
Hi All,
I am new to APEX. I searched this forum but couldn't find an exact answer.
I know how to call or execute a stored procedure from a button: create a PL/SQL anonymous block and associate it with the button.
My issue is a bit different. I have a report with 11 columns. The last column just says 'delete' for every row (I did this in my report query, like SELECT x, y, z, ..., 'delete' AS delete). When I click this cell for any row (this column can be a hyperlink or any clickable element), I want to call a stored procedure that takes as parameters the values in column 2 and column 4 for that particular row.
for example, my report looks somethnig like below,
FILE_ID MEMBER_CD ACTION
112 ABCD delete
113 WXYYZ delete
114 PQRS delete
i want to click the 'delete' in first row that should call a stored procedure and pass 112 and ABCD as parameters to the procedure.
I have the procedure as a process now, but I'm struggling to bind this column to the procedure, with no success. :(
Hope I am clear. Any help is appreciated.
Thanks

Thanks man! That was a great help. It looks like I am almost there. I created those items to be hidden.
now i am passing three parameters to the procedure. my url for that column value looks like this,
javascript:P65_PARTITION_ID=#PARTITION_ID#;P65_DBC=#DBC#;P65_FILE_ID=#FILE_ID#;doSubmit('Sku_Save');
The #DBC# parameter is the name of a person and contains a space (firstname lastname). I am getting a JavaScript error saying:
Line: 1
Char: 37
Error:Expected ';'
I see that char 37 is the space after the first name.
Any idea how I should get rid of this error?
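The error is consistent with the substituted value being pasted into the JavaScript unquoted, so the space ends the statement early. A minimal illustration of the problem and one possible fix (quoting the substituted value) is below; the item name is taken from the URL above, but the helper function itself is hypothetical:

```javascript
// Hypothetical helper: build one item assignment with the value quoted,
// so spaces inside the substituted value cannot break the statement.
function buildAssignment(item, value) {
  // Escape any embedded single quotes, then wrap the value in quotes.
  var safe = String(value).replace(/'/g, "\\'");
  return item + " = '" + safe + "';";
}

// Unquoted:  P65_DBC = firstname lastname;   -> "Expected ';'" error
// Quoted:    P65_DBC = 'firstname lastname'; -> valid JavaScript
var stmt = buildAssignment("P65_DBC", "firstname lastname");
console.log(stmt);
```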
Also, as you have been very helpful, a question beyond that :). The above procedure will return an OUT varchar parameter. I guess I have to create another item for that. How do I read it and display it just below my report as text?
Thanks Again! -
I am using a Data Flow and an OLE DB Source to read my staged 3rd party external data. I need to do various Lookups to try and determine if I can find the external person in our database...by SSN...By Name and DOB...etc...
Now I need to do some more data verification based on whichever Lookup succeeds. Can I do those data edits against our SQL Server application database by utilizing an OLE DB Command, using a stored procedure, or can I use straight SQL to perform my edits against every staging row with a parameter-driven query? I'm thinking a stored procedure is the way to go here since I have multiple edits against the database. Can I pass back the result of those edits via a variable and then continue my SSIS Data Flow by analyzing the result of my stored procedure? And how would I do that?
I am new to the SSIS game here so please be kind and as explicit as possible. If you know of any good web sites that walk through how to perform SQL server database edits against external data in SSIS or even a YouTube, please let me know.
Thanks!

Thanks for that...but can I do multiple edits in my stored procedure, Vaibhav, and pass back something that I can then utilize in my SSIS? For example...
One and Only one Member Span...so I'd be doing a SELECT COUNT(*) based on my match criteria or handle the count accordingly in my Stored Procedure and passing something back via the OLE DB Command and handling it appropriately in SSIS
Are there "Diabetes" claims...again probably by analyzing a SELECT COUNT(*)
Am I expecting too much from SSIS...should I be doing all of this in a stored procedure? I was hoping to use the SSIS GUI for everything, but maybe that's just not possible. Rather, use the stored procedure to analyze my staged data, do the edits and data stores accordingly...especially the data anomalies...and then use SSIS to control navigation.
Your thoughts........
Could you maybe clarify the difference between an OLE DB Command on the Data Flow and the Execute SQL Task on the Control Flow...
You can get return values from an OLE DB Command if you want to keep it in the pipeline.
see this link for more details
http://josef-richberg.squarespace.com/journal/2011/6/30/ssis-oledb-command-and-procedure-output-params.html
The procedure should have an output parameter defined for that.
I believe if you have the flexibility of using a stored procedure, you may be better off doing this in an Execute SQL Task in the Control Flow. Calling the SP in a Data Flow will execute it once for each row in the dataset, whereas in the Control Flow it will be set-based processing.
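The output-parameter approach from the linked article can be sketched like this on the T-SQL side (all names are hypothetical; in SSIS, the OLE DB Command or Execute SQL Task would map `?` placeholders to these parameters):

```sql
-- Hypothetical validation procedure: returns a span count and a claim
-- count through OUTPUT parameters for SSIS to consume downstream.
CREATE PROCEDURE dbo.usp_ValidateMember
    @SSN            varchar(11),
    @SpanCount      int OUTPUT,
    @DiabetesClaims int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- "One and only one member span" check
    SELECT @SpanCount = COUNT(*)
    FROM dbo.MemberSpan
    WHERE SSN = @SSN;

    -- "Are there Diabetes claims" check
    SELECT @DiabetesClaims = COUNT(*)
    FROM dbo.Claim
    WHERE SSN = @SSN AND DiagnosisGroup = 'Diabetes';
END;
```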
Visakh
-
How to improve stored procedure performance?
hi,
Suppose I have a stored procedure that contains 30 insert/update statements. How do I know whether the stored procedure is running slowly or has no performance issue? And how do I improve its performance?
Thanks in advance.
Anujit Karmakar, Sr. Software Engineer

Stored Procedures Optimization Tips
Use stored procedures instead of heavy-duty queries.
This can reduce network traffic, because your client sends the server only the stored procedure name (perhaps with some parameters) instead of large, heavy-duty query text. Stored procedures can also be used to enhance security and conceal underlying data objects. For example, you can give users permission to execute the stored procedure to work with a restricted set of columns and data.
Include the SET NOCOUNT ON statement into your stored procedures to stop the message indicating the number of rows affected by a Transact-SQL statement.
This can reduce network traffic, because your client will not receive the message indicating the number of rows affected by a Transact-SQL statement.
Call stored procedure using its fully qualified name.
The complete name of an object consists of four identifiers: the server name, database name, owner name, and object name. An object name that specifies all four parts is known as a fully qualified name. Using fully qualified names eliminates any confusion about which stored procedure you want to run, and can boost performance because SQL Server has a better chance of reusing a stored procedure's execution plan if it was executed using its fully qualified name.
Consider returning an integer value via the RETURN statement instead of as part of a recordset.
The RETURN statement exits unconditionally from a stored procedure, so the statements following RETURN are not executed. Though the RETURN statement is generally used for error checking, you can use this statement to return an integer value for any other reason.
Using RETURN statement can boost performance because SQL Server will not create a recordset.
Don't use the prefix "sp_" in the stored procedure name if you need to create a stored procedure to run in a database other than the master database.
The prefix "sp_" is used in system stored procedure names. Microsoft does not recommend using the prefix "sp_" in user-created stored procedure names, because SQL Server always looks for a stored procedure beginning with "sp_" in the following order: first in the master database, then based on the fully qualified name provided, then using dbo as the owner if one is not specified. So, when you have a stored procedure with the prefix "sp_" in a database other than master, the master database is always checked first, and if the user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.
Use the sp_executesql stored procedure instead of the EXECUTE statement.
The sp_executesql stored procedure supports parameters, so using it instead of the EXECUTE statement improves the readability of your code when many parameters are used. When you use sp_executesql to execute a Transact-SQL statement that will be reused many times, the SQL Server query optimizer reuses the execution plan it generated for the first execution whenever the only variation is the parameter values.
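A minimal sketch of the sp_executesql pattern described above (the table, column, and parameter names are made up for illustration):

```sql
-- Sketch: parameterized dynamic SQL via sp_executesql, so the plan is
-- compiled once and reused across different parameter values.
EXEC sp_executesql
    N'SELECT OrderID, Total FROM dbo.Orders WHERE CustomerID = @CustID',
    N'@CustID int',
    @CustID = 42;
```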
Use sp_executesql stored procedure instead of temporary stored procedures.
Microsoft recommends using temporary stored procedures only when connecting to earlier versions of SQL Server that do not support the reuse of execution plans. Applications connecting to SQL Server 7.0 or SQL Server 2000 should use the sp_executesql system stored procedure instead of temporary stored procedures to have a better chance of reusing execution plans.
If you have a very large stored procedure, try to break it down into several sub-procedures and call them from a controlling stored procedure.
A stored procedure is recompiled when any structural change is made to a table or view it references (for example, an ALTER TABLE statement), or when a large number of INSERTs, UPDATEs, or DELETEs are made to a table it references. So, if you break down a very large stored procedure into several sub-procedures, there is a chance that only a single sub-procedure will be recompiled while the others will not.
Try to avoid using temporary tables inside your stored procedure.
Using temporary tables inside a stored procedure reduces the chance of reusing the execution plan.
Try to avoid using DDL (Data Definition Language) statements inside your stored procedure.
Using DDL statements inside a stored procedure reduces the chance of reusing the execution plan.
Add the WITH RECOMPILE option to the CREATE PROCEDURE statement if you know that your query will vary each time it is run from the stored procedure.
The WITH RECOMPILE option prevents reuse of the stored procedure's execution plan: SQL Server does not cache a plan for the procedure, and the procedure is recompiled at run time. Using WITH RECOMPILE can boost performance if your query varies each time it is run from the stored procedure, because in that case the wrong execution plan will not be used.
Use SQL Server Profiler to determine which stored procedures have been recompiled too often.
To check whether a stored procedure has been recompiled, run SQL Server Profiler and trace the event in the "Stored Procedures" category called "SP:Recompile". You can also trace the event "SP:StmtStarting" to see at what point in the procedure it is being recompiled. When you identify these stored procedures, you can take corrective action to reduce or eliminate the excessive recompilations.
http://www.mssqlcity.com/tips/stored_procedures_optimization.htm
Ahsan Kabir -
Performance Issues with crystal reports 11 - Critical
Post Author: DJ Gaba
CA Forum: Exporting
I have migrated from Crystal Reports version 8 to version 11.
I am experiencing some performance issues with reports when they are displayed in version 11.
Reports that were taking 2 seconds in version 8 are now taking 4-5 seconds in version 11.
I am using VB6 to export my report file into PDF.
Thanks

Post Author: synapsevampire
CA Forum: Exporting
Please don't post to multiple forums on the site with the same question.
I responded to your other post.
-k -
Performance issue of BI Reports in SAP Enterprise portal -in SAPNW2004s
Dear friends,
We are integrating BI reports into SAP Enterprise Portal 7.0 (SAP NW 2004s). The reports run properly, but they take a long time to open, which hurts performance.
Reports in BEx (BI side) work a little better than on the EP platform.
Even the BI team is looking for ways to improve the performance.
Could you please share your ideas to implement in portal side to increase the performance.
Thanks and Regards
Ratnakar Reddy

Hello Mr. Reddy,

There are two possibilities for slow performance in BW reports, so we need to look into whether it is slow in the BW system or at the frontend.
If the problem resides in the BW system, we should be able to trace the reports, and you can go for an SAP EW or GV session; SAP will provide recommendations.
If the problem resides at the frontend
Update the frontend servers to the latest frontend release.
Recommended Frontend Release 700
Recommended Frontend Patch 18
Update the frontend servers to the latest SAP GUI release as soon as possible.
Minimum Recommended SAP GUI Release 6.40
Your frontend PCs should fulfill the following requirements:
Each frontend PC should have 500 MHz and 128 MB main memory.
Because of a limit in the addressable memory, Windows 95 is not supported as Frontend OS for 3.X BW Systems. Please refer to SAP Notes 161993, 321973 and 366626 for details.
Please also check SAP Note 147519 "Maintenance strategy/ deadlines 'SAPGUI'".
If you still require any assistance from SAP support then raise a message under component ( probably BW-BEX-ET-WEB).
Provide your input, if you have any.
Thank you,
Tilak -
Performance issue of BI reports in SAP Enterprise portal
Dear Friends,
We have integrated BI reports with SAP Enterprise Portal 7.0. The reports run properly, but they take a long time to display their content, which hurts performance.
In BEx (BI side), report performance is a little better than on the SAP EP platform. The BI team is also looking for ways to improve performance on the BI side.
Could you please share your valuable ideas to improve performance on the SAP EP side as well?
Thanks and Regards
Ratnakar Reddy

Hi Ratnakar,

The first step is to identify which component is causing the performance problem. Run your report in the portal, but try appending the string &PROFILING=X to the end of the URL. This will generate BI statistics you can use to see which component (Java stack, ABAP stack, database) is causing the performance issue.
Hope this helps. -
Performance issue with HRALXSYNC report..
HI,
I'm facing a performance issue with the HRALXSYNC report. As this is a standard report, can anybody suggest how to optimize it?
Thanks in advance.
Saleem Javed
Moderator message: Please Read before Posting in the Performance and Tuning Forum, also look for existing SAP notes and/or send a support message to SAP.
Edited by: Thomas Zloch on Aug 23, 2011 4:17 PM -
Strange performance issue in bex report
Hello Experts,
I have a performance issue on my bex report.
I'm running the report with the selection criteria below and getting a 'too much data' error.
Country : equals EMEA
Category: does not equal 13
Date : 02/2010 to 12/2010.
But when I run the report for smaller date ranges, the number of records does not exceed 13,000:
02.2010 - 06.2010 - 6,555 rows
07.2010 - 09.2010 - 3,671 rows
10.2010 - 12.2010 - 2,780 rows
I know Excel can't fit more than 65,000 records, but I'm expecting 13,000 records for my wide date range, which Excel can easily fit.
Any ideas on this one will be appreciated.
Regards,
Brahma Reddy

Hi,
For Question 1:
In the Query Designer, go to the query properties and select the tab "Variable Sequence"; here you can set the order of the variables as per your requirement.
For Question 2:
There is an option "Hide Repeated Key Values"; if you uncheck it, you will have the values on each row even when the material values are the same.
Note: if you are viewing the report in the web or a WAD report, you need to make the same changes in the web template as well, because the settings in the Query Designer are overridden when you run the query on the web.
Hope this helps.
Regards,
Rk. -
Stored Procedure parameter (@Carrier) used in report and can't be set via code
I have a report that has regular Crystal parameters that I am setting correctly via code. However, one report actually uses one of the database parameters from the stored procedure the report was created from. It is the only report like this, and I'm using the same code in an attempt to set its value. Although no error is thrown, and the parameter value looks correct when I inspect it in the IDE, the field on the report just shows up blank.

I have changed the way the report is created. Previously, I used the report.Export method to create the actual file via a stored procedure. Now, I'm running the stored procedure first and creating a DataTable. This allows me to execute the SP only once to see if it has data, because only then do I want to create the actual report. I'm using the SetDataSource method to pass the DataTable into Crystal and then using the report.Export method to create the report with data. Everything seems to work, except this stored procedure parameter (@Carrier) is not actually being populated to display on the report.
Not sure what to look at. Any suggestions?
Thanks.

crpe32.dll is version 13.0.5.891. This was developed in VS2012 and VB.NET. I'm using ADO.NET to connect to a Microsoft SQL Server database.
MainReport.SetDataSource(DTbl)
bRC = PopulateAllSubReports(MainReport)
If Not bRC Then Throw New Exception("Received an error in PopulateAllSubReports.")
bRC = PopulateCrystalParameters(MainReport, SP)
If Not bRC Then Throw New Exception("Received an error in PopulateCrystalParameters.")
'Actually create the output file.
bRC = ExportData(MainReport)
Private Function PopulateCrystalParameters(myReportDocument As ReportDocument, SP As ReportStoredProcedureCrystal) As Boolean
Dim myParameterFieldDefinitions As ParameterFieldDefinitions = Nothing
Dim myParameterFieldDefinition As ParameterFieldDefinition = Nothing, ParamValue As String = ""
Dim bRC As Boolean, Param As SqlParameter = Nothing
Try
myParameterFieldDefinitions = myReportDocument.DataDefinition.ParameterFields
'*********************Report Parameters***************************
For Each myParameterFieldDefinition In myParameterFieldDefinitions
myParameterFieldDefinition.CurrentValues.Clear()
Select Case myParameterFieldDefinition.ParameterFieldName.Trim.ToUpper
Case "@CARRIER"
If SP.DBParameters.ContainsKey("@CARRIER") Then
Param = SP.DBParameters("@CARRIER")
ParamValue = NullS(Param.Value).Trim
bRC = SetCurrentValueForParameterField(myParameterFieldDefinition, ParamValue)
If Not bRC Then Return False
End If
End Select
Next
Return True
Catch ex As Exception
GmcLog.Error("ReportKey = " & Me.ReportKey.ToString & ", " & ex.Message, ex)
Return False
End Try
End Function
Private Function SetCurrentValueForParameterField(myParameterFieldDefinition As ParameterFieldDefinition, submittedValue As Object) As Boolean
Dim currentParameterValues As ParameterValues = Nothing
Dim myParameterDiscreteValue As ParameterDiscreteValue = Nothing
Try
myParameterDiscreteValue = New ParameterDiscreteValue
myParameterDiscreteValue.Value = NullS(submittedValue).Trim
currentParameterValues = New ParameterValues
currentParameterValues.Add(myParameterDiscreteValue)
myParameterFieldDefinition.ApplyCurrentValues(currentParameterValues)
Return True
Catch ex As Exception
GmcLog.Error("ReportKey = " & Me.ReportKey.ToString & ", " & ex.Message, ex)
Return False
Finally
myParameterDiscreteValue = Nothing
currentParameterValues = Nothing
End Try
End Function
Private Function SetDBSourceForSubReport(mySubReport As ReportDocument) As Boolean
Dim myTables As Tables = Nothing, myTable As Table = Nothing, DTbl As DataTable, SP As StoredProcedure = Nothing
Try
myTables = mySubReport.Database.Tables
For Each myTable In myTables
'Guard against a Location without a ";" suffix, which would make Substring throw.
Dim delim As Integer = myTable.Location.IndexOf(";"c)
Dim SPName As String = If(delim >= 0, myTable.Location.Substring(0, delim), myTable.Location)
SP = New StoredProcedure(ConnectionString, SPName, CommandType.StoredProcedure)
DTbl = SP.FillTable
mySubReport.SetDataSource(DTbl)
SP = Nothing
Next
Return True
Catch ex As Exception
GmcLog.Error("ReportKey = " & Me.ReportKey.ToString & ", " & ex.Message, ex)
Return False
Finally
If Not SP Is Nothing Then SP = Nothing
If Not myTable Is Nothing Then myTable = Nothing
If Not myTables Is Nothing Then myTables = Nothing
End Try
End Function
Private Function PopulateAllSubReports(myReportDocument As ReportDocument) As Boolean
Try
Dim mySections As Sections = myReportDocument.ReportDefinition.Sections
For Each mySection As Section In mySections
Dim myReportObjects As ReportObjects = mySection.ReportObjects
For Each myReportObject As ReportObject In myReportObjects
If myReportObject.Kind = ReportObjectKind.SubreportObject Then
Dim mySubreportObject As SubreportObject = CType(myReportObject, SubreportObject)
Dim subReportDocument As ReportDocument = mySubreportObject.OpenSubreport(mySubreportObject.SubreportName)
Dim bRC = SetDBSourceForSubReport(subReportDocument)
If Not bRC Then Return False
End If
Next
Next
Return True
Catch ex As Exception
GmcLog.Error("ReportKey = " & Me.ReportKey.ToString & ", " & ex.Message, ex)
Return False
End Try
End Function
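One thing worth trying as a sketch (not a confirmed fix): instead of walking ParameterFieldDefinitions and calling ApplyCurrentValues manually, the Crystal Reports .NET SDK exposes ReportDocument.SetParameterValue, which locates the parameter by name and applies a discrete value in one call. The helper below is hypothetical (the names SP, ReportStoredProcedureCrystal, NullS, and GmcLog come from the code above); the key assumption is that it must run AFTER SetDataSource, since changing the data source can reset current parameter values.

```vbnet
Private Function TrySetCarrierParameter(myReportDocument As ReportDocument,
                                        SP As ReportStoredProcedureCrystal) As Boolean
    Try
        If SP.DBParameters.ContainsKey("@CARRIER") Then
            Dim carrierValue As String = NullS(SP.DBParameters("@CARRIER").Value).Trim
            'SetParameterValue creates the ParameterDiscreteValue and applies it in one call.
            'Call this only after MainReport.SetDataSource(DTbl).
            myReportDocument.SetParameterValue("@Carrier", carrierValue)
        End If
        Return True
    Catch ex As Exception
        GmcLog.Error("ReportKey = " & Me.ReportKey.ToString & ", " & ex.Message, ex)
        Return False
    End Try
End Function
```

If the field still renders blank after this, it would suggest the report is reading the value from the original SP binding rather than the report parameter, which points back at the SetDataSource/parameter ordering rather than the parameter-setting code.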