Does platform affect report performance?

Hi
We have a report that currently runs on a Unix reports server. It completes within 3 minutes, sometimes 5. We moved it to a Windows-based reports server, where it takes more than an hour.
I tried to run it in my local Report Builder on Windows and it takes extremely long. The same report with the same parameters runs fast on the Unix server.
I created a trace file for this report, and it seems like most of the time is spent executing the queries. Why should a change in platform affect query performance? Any ideas, please. This is a 6i report.
thanks

Usually no, but there are a lot of parameters to take into consideration:
RAM, free space available, number and speed of processors, latest operating system version/patches
Rajesh
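
One way to test this is to compare the session environment each report server actually gets from the database, since differing NLS or optimizer settings can lead to different execution plans. A minimal sketch (assumes you can open a SQL session from each server and have SELECT access to v$parameter; the parameter list is illustrative):

    -- Run from a session opened on each report server and compare the output.
    SELECT parameter, value FROM nls_session_parameters;

    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('optimizer_mode', 'cursor_sharing', 'optimizer_index_cost_adj');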

Similar Messages

  • Does TotalPageCount affect report execution on production as well?

    Hi... As part of a mass report generation process, an employee report needs to be run on production every month. The process runs for almost 2000-3000 customers and generates a report for each of them. A static copy of this report is then mailed to each of these customers. The report design uses TotalPageCount in a formula in the page header. I know use of this formula affects performance during report viewing. But does the use of TotalPageCount also affect report execution time when the report is generated in bulk on production and the report-viewing scenario does not exist?

    Thanks for your answers.
    In my case, yes, the PDF reports are generated and then separately mailed/emailed to the customers. What are the ways you are referring to by which the report performance can be improved?
    The Java application calls the RAS with the required input parameters. The RAS execution code is abstracted and is not accessible to us. The PDF generated by RAS is stored at some physical location.
    For your reference, my main report uses many subreports. I have done a detailed analysis of the report and there is no way I can get rid of these subreports.
    There are some things I have noticed and would be glad if you could confirm the same:
    1. I notice that there are many print-time and on-demand formulas present in the subreports that are not used. Should I remove them? Does their presence affect report performance?
    2. Is there any way that the performance of charts can be improved?
    3. Does using a lot of shared variables affect report performance?
    4. I have identified a few stored procs where the sorting can be transferred from the report to the stored procedure (see the sketch after this post).
    5. Is there an alternative way I can create the same effect as Page(TotalPageCount) in the page header without affecting the look of the report, while also improving its performance?
    Thanks in Advance !
    Edited by: Malvika_Sinha on Jan 25, 2012 5:31 PM
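
    On point 4, moving the sort out of the report and into the stored procedure is usually a safe win, since the database can sort efficiently (possibly using an index) and the report engine then consumes the rows as-is. A minimal sketch, assuming SQL Server syntax and hypothetical procedure, table, and column names:

        -- Hypothetical: return rows pre-sorted so the report does not re-sort them.
        CREATE PROCEDURE dbo.GetCustomerStatement
            @CustomerId int
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT TransactionId, PostingDate, Amount
            FROM   dbo.CustomerTransactions
            WHERE  CustomerId = @CustomerId
            ORDER  BY PostingDate;   -- sort here instead of in the report
        END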

  • What happens to unused common table expressions? Does this affect performance?

    If I write a query with one or more common table expressions to which I
    don't actually refer in the query, do they just get pruned off, or do
    they get executed regardless? How does this affect performance?
    Prem Shah

    Try the script below.
    It seems that when a CTE is not referenced in the query, the statement inside the CTE is not executed at all, even for nested CTEs. See for yourself:
    Create table UserInfo
    (
        UserId int primary key,
        UserName varchar(30)
    )
    GO
    Create table UserInfo1
    (
        UserId int primary key,
        UserName varchar(30)
    )
    GO
    -- sample rows
    insert into UserInfo
    select 1001,'X1' union all
    select 1002,'X2' union all
    select 1009 ,'X9'
    GO
    insert into UserInfo1
    select 1001,'X1' union all
    select 1002,'X2' union all
    select 1009 ,'X9'
    GO
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    GO
    Begin tran
    select * from UserInfo1 where UserId between 1001 and 1009
    and UserName = 'XXXX'
    --Commit
    PRINT 'WITH out CTE access in select'
    SET STATISTICS IO ON
    ;WITH CTE1 AS
    (Select * From UserInfo1)
    select * From UserInfo
    PRINT 'WITH CTE access in select'
    ;WITH CTE1 AS
    (Select * From UserInfo1)
    select * From UserInfo a inner join CTE1 b on a.UserId=b.UserId
    Stats IO output (note that UserInfo1 shows reads only when the CTE is actually referenced):
        WITH out CTE access in select
        (3 row(s) affected)
        Table 'UserInfo'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        (1 row(s) affected)
        WITH CTE access in select
        (3 row(s) affected)
        Table 'UserInfo1'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'UserInfo'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        (1 row(s) affected)
    Thanks
    Saravana Kumar C

  • EBS: fnd_conc_date is affecting report performance

    We have an XML Publisher report that uses a data template. We have set up a concurrent program with a Date parameter that is defined with the FND_STANDARD_DATE value set, and in the data template we have defined it as varchar2 in order to overcome the "literal does not match format string" error. But this causes a big performance impact: when we use this function, the report takes 3 minutes to run, and without it, it completes in 20 seconds.
    Is there any other workaround to overcome this issue?
    - Vasu -

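    A hedged workaround sketch: the date parameter from a concurrent program typically arrives in the canonical FND format ('YYYY/MM/DD HH24:MI:SS'), so the varchar2 parameter can be converted once with to_date in the data template query instead of calling a conversion function row by row (the table and column names below are hypothetical):

        -- Assumes :p_trx_date is the varchar2 parameter in canonical format.
        SELECT t.trx_number, t.trx_date
        FROM   some_transactions t
        WHERE  t.trx_date >= to_date(:p_trx_date, 'YYYY/MM/DD HH24:MI:SS')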

  • Does an InfoSet in a MultiProvider affect system performance?

    Dear Ones,
    I have an InfoObject 0COORDER which is used in the InfoSet and the MultiProvider. If the same InfoSet which holds this InfoObject is used in the MultiProvider, does it affect performance?
    Thanks for your concern,
    Raj

    Hi Raja,
                 Using an InfoSet in a MultiProvider does not in itself make any difference to performance. It all depends on the join condition you are using in the InfoSet, which is what fetches the data.
    Regards
    Karthik

  • Does WEP Affect Performance?

    Hey guys. I'm going to implement WEP keys on my Linksys router because some unknown person is taking up my bandwidth. I was just wondering if using a WEP key affects wireless performance. What I mean is: after I put the key on my wireless computer, will performance be downgraded because it has more security?
    Message Edited by LastXdeth on 08-16-2008 06:10 PM

    castor wrote:
    No it increases the wireless performance ..... not decreases .... as the unsecure network provides less strength & connection speed .......
    Yo Castor,
    I don't actually agree with the claim that an unsecured network provides "less strength and connection speed"; if anything it's the opposite, as data packets under high-security encryption might get lost somewhere along the way (quite philosophically!).
    And if you look, you will see free-flowing packets and stronger connection signals on unsecured networks (until some moron jacks it and uses it as a free hotspot internet service, sighs!).
    Well, that's how I think about it.
    Pe@c3
    Message Edited by Sean-Knight on 08-18-2008 05:06 PM
    "What u Give , is wht u better start expecting to take back".. - http://Forsakenbliss.wordpress.com

  • Bad reporting performance after compressing infocubes

    Hi,
    as I learned, we should compress requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far, all theory...
    After getting complaints about worse reporting performance we tested this theory. I created four InfoCubes (same datamodel):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query on each cube and with this I tested the performance (transaction rsrt, without aggregates and cache, comparing the database time QTIMEDB and DMTDBBASIC). In the query I selected always one month, some hierarchy nodes and one branch.
    With this selection on each cube, I expected that cube D would be fastest, since we only have one (small) partition with relevant data. But reality shows some different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some db parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. after compressing the cubes I refreshed the statistics in the InfoCube administration.
    2. cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. loaded data is joined from 10 months. The records are nearly equally distributed over these 10 months.
    5. partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years - the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. no. of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months.
    6. since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that: it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the mentioned times are average times over all runs - and the average shows the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut
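
    One way to see whether the optimizer is actually pruning partitions is to check the Pstart/Pstop columns in the execution plan of the generated query. A minimal sketch for Oracle 9.2 (the fact table name and SID value are hypothetical; take the real statement from an SQL trace or RSRT):

        -- Hypothetical E fact table of cube D, filtered on the partitioning column.
        EXPLAIN PLAN FOR
          SELECT COUNT(*)
          FROM   "/BIC/ED"
          WHERE  sid_0fiscper = 1234;
        SELECT * FROM TABLE(dbms_xplan.display);
        -- In the plan output, Pstart/Pstop show the partitions touched; a range
        -- like "1 - 62" for a single-month selection means no pruning happened.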

  • BW Report Performance, Accuracy of Data

    Hi,
    Can someone help give explanations to following questions :
    1.) Does my BW Report show how current my data is?
    2.) What are the reasons why the performance of my BW Report is slow?
    3.) What are the reasons why my BW Report has missing data?
    4.) Why does my BW Report have incorrect data?
    5.) Why doesn't my BW Report data match SAP R/3 data?
    Thanks,
    Milind
    Locked - duplicate post and very generic questions
    Report performance and data quality
    Edited by: Arun Varadarajan on Apr 9, 2010 2:07 AM

    Hi,
    1) Does my BW Report show how current my data is?
    Yes. The last refresh of your data is shown in the query properties. Run the report and check the details of the last refresh.
    2.) What are the reason why the performance of my BW Report is slow?
    Reason could be:
    Poor Design
    Business Logic (Transformations)
    Nav attributes used in the reports
    Time dependent MD
    Aggregates missing
    Data Vol in the Cubes or DSO's
    http://wiki.sdn.sap.com/wiki/display/BI/SomeusefulT-CodeforBIperformancetuning
    3.) What are the reasons why my BW Report has missing data?
      Check the source system data, and check the mapping in the transformation along with all the business logic.
    4.) Why does my BW Report have incorrect data?
    That depends on whether you are loading from flat files or R/3, and on whether you are cleansing the data once it enters BW.
    5.) Why doesn't my BW Report data match SAP R/3 data?
    Check the source system data in RSA3, pick one document, and run the same document in BI.
    Thanks!
    @AK

  • FRS report performance issue

    Hello,
    We have a report developed in FRS in the below style.
    http://postimg.org/image/bn9dt630h/b9c2053d/
    Basically, all the dimensions are asked for in the POV. In the rows of the report, we have two sparse dimensions that are drilled down to level 0, as shown in the above report. The report works fine when run in local currency (local currency is a stored member). When the report runs in a different currency (a dynamic member), it keeps on running for ages. We waited for 45 minutes and then had to cancel the report; when the same report was run in local currency, it gave us our results in 30 seconds.
    My thinking is that there should be a better way of showing level 0 members than using "Descendants of Current Point of View for Total_Entity AND System-defined member list Lev0,Entity", as I presume what it does is get the descendants as well as the level 0 members and then intersect them. I have alternate hierarchies, hence I am using this. Isn't there a simpler way of saying: just give me the level 0 members of the member selected in the POV?
    I have used the below parameters:
    Connection - Essbase
    Suppress rows on Database connection server
    Regards,

    Hello,
    >> The report works fine when run in local currency (Local currency is a stored member). When the report runs in a different currency (dynamic member) then it keeps on running for ages.
    You are focusing on the report, but the most likely reason lies in the performance of the database. Of course, you can reduce the query size and get your report performing again, but the root cause is likely the database design.
    I do not know a function to drill down to the level0 members of the selected POV member.
    If this is something different per user, then you might think about meta-read filters. They would remove all that is not granted.
    Regards,
    Philip Hulsebosch

  • How can we improve report performance?

    Hi experts,
    I am learning Business Objects XIR2. Please let me know: how can we improve report performance?
    Please give the answer in a detailed way.

    First find out why your report is performing slowly. Then fix it.
    That sounds silly, but there's really no single-path process for improving report performance. You might find issues with the report. With the network. With the universe. With the database. With the database design. With the query definition. With report variables. With the ETL. Once you figure out where the problem is, then you start fixing it. Fixing one problem may very well reveal another. I spent two years working on a project where we touched every single aspect of reporting (from data collection through ETL and all the way to report delivery) at some point or another.
    I feel like your question is a bit broad (meaning too generic) to address as you have phrased it. Even some of the suggestions already given...
    Array fetch size - this determines the number of rows fetched at a single pass. You really don't need to modify this unless your network is giving issues. I have seen folks suggest setting this to one (which results in a lot of network requests) or 500 (which results in fewer requests but they're much MUCH larger). Does either improve performance? They might, or they might make it worse. Without understanding how your network traffic is managed it's hard to say.
    Shortcut joins? Sure, they can help, as long as they are appropriate. Many times they are not: http://www.dagira.com/2010/05/27/everything-about-shortcut-joins/
    And I could go on and on. The bottom line is that performance tuning doesn't typically fall into a "cookie cutter" approach. It would be better to have a specific question.

  • Multi-value attribute in IB - how does it affect the system? (CRM 4.0)

    Hi,
    Is there any limitation when using a multi-value attribute in a set type which is used for an Individual Object in the IB?
    I have succeeded in implementing the multi-value attribute feature in CRM 4.0. This change is irreversible, so I wish to know the risks of moving it to production.
    (We use the IB VERY heavily with single-value attributes.)
    Does it affect performance? other features?
    Thanks
    Rachel

    hi there
    given the pain of setting up single-value attributes in production, we usually try to create multi-value attributes so that we get their multidimensional usefulness.
    I really don't see any limitation or risk in sending those to production; in fact it will serve you a good cause there.
    just proceed with that; so far we haven't observed any defect doing that in our projects
    best regards
    ashish

  • Hyperion Interactive reporting performance issue.

    Hi,
    We created a report in Hyperion Interactive Reporting using a Hyperion Essbase database connection file.
    Report performance was good in Interactive Reporting Studio; we don't have any problem in Studio.
    When we open the report in Hyperion Workspace we face a performance issue, and when I hit the refresh button to refresh the data in the Workspace, I get the following error message:
    *"An Interactive Reporting Service error has occurred - Failed to acquire requested service. Error Code : 2001"*
    Any suggestions to resolve this will be really helpful.
    Thanks in advance
    Thanks
    Vamsi
    Edited by: user9363364 on Aug 24, 2010 7:49 AM
    Edited by: user9363364 on Sep 1, 2010 7:59 AM

    Hi
    I also faced such an issue, and then I found the answer on Metalink:
    Error: "An Interactive Reporting Service Error has Occurred. Failed to Acquire Requested Service. Error Code: 2001" when Processing a bqy Report in Workspace. [ID 1117395.1]     
    Applies to:
    Hyperion BI+ - Version: 11.1.1.2.00 and later [Release: 11.1 and later ]
    Information in this document applies to any platform.
    Symptoms
    Obtaining the following error when trying to process a BQY that uses an Essbase data source in Workspace:
    "An Interactive Reporting Service error has occurred. Failed to acquire requested service. Error Code: 2001".
    Cause
    The name of the data source in the CMC contained the machine name in fully qualified name format whereas the OCE contained the machine name only. This mismatch in machine names caused the problem. Making the machine name identical in both cases resolved the problem.
    Solution
    Ensure that the name of the data source as specified in the OCE in Interactive Reporting Studio matches the name specified in the CMC tool in the field "Enter the name of the data source".
    In fact, all fields need to match between the OCE and the CMC Data Source.
    regards
    alex

  • Will making relationships between multiple Table Storage tables affect performance?

    Hi,
    I'm going to develop a business application. The product ID needs to be a generic one, and it should be generated automatically as a unique ID (like an identity in SQL), but it has to be generated in a formatted way.
    For example, the ID would be "cityCode+areaCode+uniqueNumber". Here, cityCode and areaCode are maintained in separate tables. While generating the product ID, we look up the cityCode table and the areaCode table and generate the unique number by merging all the respective information.
    1) Will doing all this affect Azure Table Storage performance and the web application?
    2) Will making multiple relationships among multiple Table Storage tables decrease performance?

    Hello,
    When you say tables, are you referring to Azure Storage Tables or relational databases?
    Please note Windows Azure tables do not function in the same manner as tables in a relational database since they do not make use of relationships or have schemas.
    And if you are referring to relational databases, the latency in performance would depend on the logic used to generate the unique ID.
    You should be able to use the logic in an On-Prem SQL database and check for the latency.
    Regards,
    Malar.
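
    Following Malar's suggestion to prototype the logic in an on-prem SQL database first, here is a minimal sketch of the formatted ID (SQL Server 2012+ syntax; the sequence name and the hard-coded city/area codes are hypothetical stand-ins for the lookup-table values):

        -- A sequence supplies the unique number, so concurrent callers don't collide.
        CREATE SEQUENCE dbo.ProductNumber AS int START WITH 1 INCREMENT BY 1;
        GO
        DECLARE @cityCode varchar(4) = 'HYD',  -- would come from the city lookup table
                @areaCode varchar(4) = 'JNT';  -- would come from the area lookup table
        SELECT @cityCode + @areaCode
             + RIGHT('000000' + CAST(NEXT VALUE FOR dbo.ProductNumber AS varchar(6)), 6)
               AS ProductId;  -- e.g. HYDJNT000001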

  • 2014 SSRS Reports Performance issues

    Hi All,
    After upgrading SQL 2008 reports to SQL 2014, I observed a performance lag in the 2014 SSRS reports:
    2008 reports which used to render in <2 secs are now taking >50 secs.
    After doing some checks on why this lag occurred, I found that it is caused by the expressions in the reports. If I remove all the expressions then the report renders in <2 secs; otherwise it takes >50 secs.
    My question here is: we used the same expressions in the 2008 version, where the report displays in <2 secs, so why does the same thing take more time in the 2014 version?
    Is expression handling different between 2008 and 2014?
    Below are the expressions used in both versions:
    IIF(ISNOTHING(Fields!Comp.Value),"-",Fields!Comp.Value) 
    IIF(ISNOTHING(Fields!Base.Value),"-",Fields!Base.Value)
    IIF(ISNOTHING(Fields!Var.Value),"-",Fields!Var.Value)
    iif(Fields!check.Value=true,"yellow","Transparent")
    Thanks in advance 
    Chandra.

    Hi Chandra,
    According to your description, the same report renders more slowly in SQL Server 2014 than in SQL Server 2008.
    In both SSRS 2008 and SSRS 2014, expressions are processed in the same way. In Reporting Services, the total time to generate a report comprises TimeDataRetrieval, TimeProcessing and TimeRendering. To analyze which section takes the most time, we can check the ExecutionLog3 view in the ReportServer database. For more information, please refer to this article:
    More tips to improve performance of SSRS reports.
    After checking which section costs the most time, you can refer to this article to optimize your report:
    Troubleshooting Reports: Report Performance.
    If possible, please share some information about your report migration.
    Best regards,
    Qiuyun Yu
    TechNet Community Support
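
    A quick way to see where the time goes (a minimal sketch; assumes the default ReportServer catalog database name and permission to read it):

        -- Recent executions, split into data retrieval / processing / rendering (ms).
        SELECT TOP (20)
               ItemPath, TimeStart,
               TimeDataRetrieval, TimeProcessing, TimeRendering, [Status]
        FROM   ReportServer.dbo.ExecutionLog3
        ORDER  BY TimeStart DESC;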

  • Will BW indexing help Webi report performance?

    Hi Experts,
    Environment
    BO Version:4.1 SP3
    Reporting Database:BW 7.4 sp3
    Recently we applied OSS note 0001898395 so that we could enable an index on the master data.
    We applied the note and created the index on 0profit_ctr in the InfoObject 0mat_plant.
    Reports are fast on the BEx side, but there is no difference in Webi report performance.
    Our opinion is that if BEx performance increases, Webi performance should obviously also increase.
    Please post your experiences.
    Thank you,

    Several things will affect the report speed. Note that there are two halves to it - the data source fetch time and the report render time.
    Data source fetch time is the time that it takes for all the data to be returned. BO will always look slower than the native tool because the native tool starts spooling out results as soon as it retrieves the first one - the more data returned, the slower BO will look by comparison.
    Report render time will depend upon the number and complexity of variables as well as the number of rows returned.
    Reducing the volume of data with correct aggregations and filters rather than summing and filtering at report level are the two best ways to improve performance without database/universe access.
