Report Performance / Grouptree - Show GroupTree Expanded

Hi there....
I'm using Visual Studio / C# for an ASP.NET app.
The requirement is to display customer-created reports that an admin places in a virtual IIS directory. This is working fine.
However, we are seeing performance problems with one report written against a stored procedure. Run interactively, the stored procedure has subsecond response, and refreshes in the Crystal 10 report designer are just as quick. The web page, however, is slow: every click on the group tree causes the report to regenerate.
Here are my questions:
1. For performance, should I simply store the data with the report and refresh it when the ReportDocument loads?
2. Are there other options I can set so that Crystal does not refresh on every keypress, search, etc.?
3. Can I make crViewer display the group tree fully expanded (similar to asp:TreeView's ExpandAll())?
Thanks!

Thanks for your response, Adam. In the time since I posted this thread, I have been able to exploit some features of the ReportDocument class, which is very handy for getting report parameters, setting authentication, etc.
But the best I could do for my performance/postback issue was to use ExportOptions to export to a file (PDF, Excel, ...):
    DiskFileDestinationOptions diskOpts = new DiskFileDestinationOptions();
    diskOpts.DiskFileName = Server.MapPath(_rptPath);
    reportDoc.ExportOptions.ExportDestinationType = ExportDestinationType.DiskFile;
    reportDoc.ExportOptions.ExportFormatType = ExportFormatType.PortableDocFormat;
    reportDoc.ExportOptions.DestinationOptions = diskOpts;
    reportDoc.Export();
Then I simply redirect to the newly created PDF.
Can I use crViewer in conjunction with the ReportDocument class?
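For reference, a common pattern to avoid re-running the report on every group-tree click is to cache the loaded ReportDocument in session state and rebind it to the viewer on every postback. Below is a minimal sketch, not a definitive implementation: the session key is arbitrary, logon/parameter setup is elided, and _rptPath and crViewer are the names used earlier in this thread.

    // In the code-behind of the viewer page. Binding in Page_Init on every
    // request (including postbacks) lets group-tree clicks reuse the
    // already-processed report instead of regenerating it.
    protected void Page_Init(object sender, EventArgs e)
    {
        CrystalDecisions.CrystalReports.Engine.ReportDocument reportDoc =
            Session["ReportDoc"] as CrystalDecisions.CrystalReports.Engine.ReportDocument;
        if (reportDoc == null)
        {
            reportDoc = new CrystalDecisions.CrystalReports.Engine.ReportDocument();
            reportDoc.Load(Server.MapPath(_rptPath)); // _rptPath as used above
            // Set logon info and parameters here, as elsewhere in this thread.
            Session["ReportDoc"] = reportDoc;
        }
        // The viewer and the ReportDocument class do work in conjunction:
        // assign the document, not a file path, as the viewer's report source.
        crViewer.ReportSource = reportDoc;
    }

Note this keeps one processed report per session, so memory use is worth watching if many users run large reports.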

Similar Messages

  • Report Performance Issue and Strange Execution Log Data

    Today one of our reports suddenly started taking a long time to execute.
    Looking at the Report Server ExecutionLog3 table/view, we have the following information for the query in question:
    <Connection>
      <ConnectionOpenTime>1</ConnectionOpenTime>
      <DataSets>
        <DataSet>
          <Name>ReportDataset</Name>
          <RowsRead>7</RowsRead>
          <TotalTimeDataRetrieval>150013</TotalTimeDataRetrieval>
          <ExecuteReaderTime>3</ExecuteReaderTime>
        </DataSet>
      </DataSets>
    </Connection>
    Supposedly the time taken to retrieve the data is around 150 seconds. However, a profiler trace run while the report executes in SSRS shows the query completing in under 1 second.
    Indeed, a profiler trace for anything on the server with a duration greater than 60 seconds returns nothing. I can only assume the logged figure of 150 seconds to retrieve the data is wrong. The report IS taking that long to run,
    though - so the question is: where is the time going?
    Why can't I find a slow query on the server when SSRS thinks there is one?
    LucasF
    EDIT: This was fixed by restarting the report server.  Any ideas on why this might occur? 

    Hi Lucas,
    According to your description, the <TotalTimeDataRetrieval> in ExecutionLog3 is much larger than the time shown in the profiler trace.
    In Reporting Services, to analyze report performance we usually check TimeDataRetrieval, the time spent retrieving the data: it is the time SQL Server needs to retrieve the data for all the datasets in your report. So in your
    scenario, please check whether TimeDataRetrieval matches the time in the profiler trace.
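    In case it is useful, the per-phase timings can also be pulled directly from the ExecutionLog3 view in the ReportServer catalog, which splits every execution into data retrieval, processing, and rendering time. Here is a minimal C# sketch; the catalog name "ReportServer" and the connection details are assumptions to adjust for your environment:
        using System;
        using System.Data.SqlClient;

        class ExecutionLogCheck
        {
            static void Main()
            {
                // Assumption: default ReportServer catalog with integrated security.
                const string cs = "Server=localhost;Database=ReportServer;Integrated Security=true";
                const string sql =
                    "SELECT TOP 20 ItemPath, TimeDataRetrieval, TimeProcessing, TimeRendering " +
                    "FROM ExecutionLog3 ORDER BY TimeStart DESC";

                using (var cn = new SqlConnection(cs))
                using (var cmd = new SqlCommand(sql, cn))
                {
                    cn.Open();
                    using (var rdr = cmd.ExecuteReader())
                    {
                        while (rdr.Read())
                        {
                            // All three time columns are reported in milliseconds.
                            Console.WriteLine("{0}: retrieval={1} ms, processing={2} ms, rendering={3} ms",
                                rdr["ItemPath"], rdr["TimeDataRetrieval"],
                                rdr["TimeProcessing"], rdr["TimeRendering"]);
                        }
                    }
                }
            }
        }
    If TimeDataRetrieval is large here while the profiler shows the query itself completing quickly, the time is being lost between the report server and the database rather than in query execution, which would be consistent with a report-server restart clearing the problem.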
    Reference:
    More tips to improve performance of SSRS reports
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • Report Performance - timeout short dump

    Hello experts,
    I am trying to improve the performance of a report that was developed a long time ago.
    Issues I found:
    1. The report has many SELECT...ENDSELECT combinations, and SELECTs inside LOOP statements.
    2. Most of the SELECTs use the addition 'INTO CORRESPONDING FIELDS OF' to select a few fields, without the TABLE addition.
    3. A few SELECTs use the 'SELECT * FROM' syntax.
    data: begin of itab occurs 0,
            f1,
            f2,
            f3,
            " ... further fields ...
            fn,
          end of itab.
    Ex:
    loop at itab.
      select f1 f2 f3 from table1
             into corresponding fields of itab1.
        collect itab1.
      endselect.
      select f4 f5 from table2
             into corresponding fields of itab2.
      endselect.
    endloop.
    All this leads to performance issues.
    I have checked ST05 and have the trace details.
    My question is: which of the reasons I mentioned above is the major factor in delaying the report?
    Which one should I concentrate on first to get the long runtime down? My goal is to keep my changes to a minimum while improving the performance. Please advise.

    > My question is which of the reasons I mentioned above is the major factor in delaying the report
    > performance?
    Don't ask people for guesses, if you can see the facts!
    Run the SQL trace several times, then go to 'Trace List' -> 'Summarize Trace by SQL Statement'.
    => This shows you the total DB time and the time per statement (all executions); the problems are at the top of the list.
    Check ABAP, detail, and explain!
    Read more here:
    /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
    Siegfried

  • Bad reporting performance after compressing infocubes

    Hi,
    as I have learned, we should compress requests in our InfoCubes. And since we are using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far the theory...
    After getting complaints about worsened reporting performance, we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query onto each cube and tested the performance with it (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes, and one branch.
    With this selection on each cube, I expected cube D to be fastest, since we only have one (small) partition with relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some DB parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes, I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and retest at the end of this week.
    4. The loaded data spans 10 months, and the records are nearly equally distributed over those 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years - the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. number of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C contains not one full year but about 8 months.
    6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that, it was a Friday afternoon...). Our BI system is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the times mentioned are averages over all runs - and the averages show the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut

  • BW Report Performance, Accuracy of Data

    Hi,
    Can someone help explain the following questions:
    1.) Does my BW report show how current my data is?
    2.) What are the reasons why the performance of my BW report is slow?
    3.) What are the reasons why my BW report has missing data?
    4.) Why does my BW report have incorrect data?
    5.) Why doesn't my BW report data match SAP R/3 data?
    Thanks,
    Milind
    Locked - duplicate post and very generic questions
    Report performance and data quality
    Edited by: Arun Varadarajan on Apr 9, 2010 2:07 AM

    Hi,
    1.) Does my BW report show how current my data is?
    Yes. The last refresh of your data is shown in the query properties: run the report and check the details for the last refresh.
    2.) What are the reasons why the performance of my BW report is slow?
    Reasons could be:
    Poor design
    Business logic (transformations)
    Navigational attributes used in the reports
    Time-dependent master data
    Missing aggregates
    Data volume in the cubes or DSOs
    http://wiki.sdn.sap.com/wiki/display/BI/SomeusefulT-CodeforBIperformancetuning
    3.) What are the reasons why my BW report has missing data?
    Check the source system data, and check the mapping in the transformation along with all the business logic.
    4.) Why does my BW report have incorrect data?
    It depends on whether you are loading from flat files or from R/3, and on whether you cleanse the data once it enters BW.
    5.) Why doesn't my BW report data match SAP R/3 data?
    Check the source system data in RSA3, pick one document, and check the same document in BI.
    Thanks!
    @AK

  • FRS report performance issue

    Hello,
    We have a report developed in FRS in the below style.
    http://postimg.org/image/bn9dt630h/b9c2053d/
    Basically, all the dimensions are asked for in the POV. In the rows of the report we have two sparse dimensions that are drilled down to level 0, as shown in the report above. The report works fine when run in local currency (local currency is a stored member). When the report is run in a different currency (a dynamic member), it keeps running for ages: we waited 45 minutes and then had to cancel it, whereas the same report run in local currency gave us our results in 30 seconds.
    My thinking is that there should be a better way of showing level 0 members than using "Descendants of Current Point of View for Total_Entity AND System-defined member list Lev0,Entity", as I presume what it does is get the descendants as well as the level 0 members and then compare them. I have alternate hierarchies, hence I am using this. Isn't there a simple way of saying: just give me the level 0 members of the member selected in the POV?
    I have used the below parameters:
    Connection - Essbase
    Suppress rows on the database connection server
    Regards,

    Hello,
    >> The report works fine when run in local currency (local currency is a stored member). When the report runs in a different currency (a dynamic member), it keeps on running for ages.
    You are focusing on the report, but the most likely reason is the performance of the database. Of course, you can reduce the query size and get your report performing again, but the root cause is likely the database design.
    I do not know of a function to drill down to the level 0 members of the selected POV member.
    If this is something that differs per user, you might think about meta-read filters. They would remove everything that is not granted.
    Regards,
    Philip Hulsebosch

  • Apex report performance is very poor with apex_item.checkbox row selector.

    Hi,
    I'm working on a report that includes some functionality to be able to select multiple records for further processing.
    The report is based on a view that contains a couple of hundred thousand records.
    When I make a selection from this view in SQL*Plus, the performance is acceptable, but the APEX report based on the same view performs very poorly.
    I've noticed that when I omit apex_item.checkbox from my report query, performance is on par with SQL*Plus (a factor of 10 or so quicker).
    The explain plan appears to be the same with or without the checkbox function in the select.
    My query is:
    select apex_item.checkbox(1, tan_id) "Select",
           brt_id,
           tan_id,
           message_id,
           conversation_id,
           action,
           to_acn_code,
           information,
           brt_created,
           tan_created
      from (select brt.id brt_id, -- view query
                   max(tan.id) tan_id,
                   brt.message_id,
                   brt.conversation_id,
                   brt.action,
                   tan.to_acn_code,
                   tan.information,
                   brt.created brt_created,
                   tan.created tan_created
              from (select brt_id, id, to_acn_code, information, created
                      from xxcjib_transactions
                     where tan_type = 'DELIVER' and status = 'FINISHED') tan,
                   xxcjib_berichten brt
             where brt.id = tan.brt_id
             group by brt.id,
                      brt.message_id,
                      brt.conversation_id,
                      brt.action,
                      tan.to_acn_code,
                      tan.information,
                      brt.created,
                      tan.created)
    What could be the reason for the poor performance of the apex report?
    And is there another way to select multiple report records without the apex_item.checkbox function?
    I'm using apex 3.2 on oracle 10g database.
    Thanks,
    Niels Ingen Housz
    Edited by: user11986529 on 19-mrt-2010 4:06

    Thanks for your reply.
    Unfortunately, changing the pagination doesn't make much of a difference in this case.
    Without the checkbox the query takes 2 seconds.
    With the checkbox it takes well over 30 seconds.
    The second report region on this page, based on another view, seems to perform reasonably well with or without the checkbox. It has about the same number of records, but a different view query.
    There are also a couple of filter items in the where clause of the report queries (the same for both reports), based on date and acn_code, and both report regions have a select-list item based on a simple LOV. These filter items don't seem to influence the performance.
    I have also recreated the report on a separate page, without any other page items or where clause, and the same thing occurs: with the checkbox it is very, very slow (more like 20 times slower); without it, the report performs well.
    And another thing: when I run the page with debug on, I don't see the actual report query:
    0.08: show report
    0.08: determine column headings
    0.08: activate sort
    0.08: parse query as: APEX_CMA_ONT
    0.09: print column headings
    0.09: rows loop: 30 row(s)
    and then the region is displayed.
    I am using database links in the views, b.t.w.
    Edited by: user11986529 on 19-mrt-2010 7:11

  • Regarding Reports performance

    Hi All,
    I am now working in a production system.
    How can I check the performance of reports?
    How can I tell whether report performance is low or high?
    Please let me know the details.
    Thanks
    Vasu.

    Hi Vasu,
    Use the search on the word "performance"; you will get a lot of answers related to queries.
    There are a lot of factors involved in query performance:
    1. Install BW statistics and analyze the queries
    2. Using aggregates is a good option to consider
    3. You can use MultiProviders for reporting, for better query performance
    4. Query read mode - read when navigating and expanding hierarchies
    5. Check whether indexes under Oracle are degenerated: OSS Note 323090
    OLAP cache: OSS Note 456068
    Also take a look at OSS Note 567746, 'Composite note BW 3.x performance: Query & Web', in which you can find all the info required...
    Assign points if helpful
    Bye
    Dinesh

  • Report fetched by windows OS report "Performance by System"

    When we fetch the report from the reporting pane in "Windows Server Operating System Reporting" (Performance by System), sometimes the data shown in the columns comes in yellow or red (which indicate warning and critical, respectively).
    My question is whether the thresholds for these performance rules are defined in the perfmon collection rules or in the rules from which the alerts are generated.
    Appreciate your help.

    To see the threshold of one of these performance rules, click on the alert and display its details; this shows the threshold for the alert.
    For more details, you can refer to the links below:
    http://technet.microsoft.com/en-us/library/hh457556.aspx
    http://technet.microsoft.com/en-us/library/cc180267.aspx
    http://download.doubletake.com/_download/dt53/docs/RecoverNow/User%27s%20Guide/Content/SCOM.htm
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question, please click "Mark As Answer"
    Mai Ali | My blog: Technical | Twitter:
    Mai Ali

  • What reporting performance improvements can we do?

    What reporting performance improvements can we do?

    Hi,
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes
    2. Avoid RKFs and CKFs (restricted and calculated key figures)
    3. Avoid many characteristics in the rows
    Use T-code ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    Try table RSDDSTATS to get the statistics.
    Using the OLAP cache will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions retrieve the result faster from the OLAP cache.
    Also try:
    1. Use the different parameters in ST03 to see the two important figures: the aggregation ratio and the number of records transferred from the DB to the front end.
    2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It shows the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the cube's performance metric, measure the query runtime.
    3. To check the performance of the aggregates, see the VALUATION and USAGE columns in the aggregate maintenance.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The valuation is shown as a string of plus or minus signs: the more plus signs, the better the aggregate (good compression ratio, frequent access, many queries satisfied); the more minus signs, the worse. "-----" means the aggregate is just overhead and can potentially be deleted, while "+++++" means it is potentially very useful.
    The USAGE column shows how often the aggregate has actually been used by queries.
    Thus we can check the performance of the aggregates.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This tells you whether the query hit any aggregates while running; if it shows no aggregates, you might want to redesign your aggregates for the query.
    Query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    Generate Report in RSRT  
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Hope this helps.
    Thanks,
    JituK

  • Improving Report Performance

    Hi HTMLDB Team,
    I need some of the undocumented HTMLDB information to improve the performance of reports, which will be useful to everyone.
    I have a Customer table, which has 1 Million records in it. It has the following columns:
    First Name, Last Name, Customer ID, State.
    I have a Report Region (a PL/SQL block returning the query) that picks up records directly from this table.
    The user can filter customer records by any of the above-mentioned columns.
    Without any sorting I get the response in 2 seconds.
    But if I put a sort on any of these columns, my response time goes up to 1 minute.
    I have indexes on all the columns, too.
    If the user applies a filter like "State=NY" or "First Name=Balaji", I get the response in 2-3 seconds.
    Now I want to keep the report response time within 2-3 seconds, without any filter condition and with sorting on some column, say "First Name".
    To achieve this I want to dynamically add some filter conditions to the original query when the user has not specified any.
    My approach is based on the assumption that the user will be shown only the first 500 records of the query results (this is the normal HTML DB report behavior, and I don't want to increase the report region max row count beyond 500).
    My Original Report region query is
    SELECT * FROM CUSTOMER ORDER BY FIRST_NAME; -- takes 1 minute: it sorts all 1 million records, of which only the first 500 are shown.
    If I rewrite this query as
    SELECT * FROM CUSTOMER WHERE FIRST_NAME < 'B' ORDER BY FIRST_NAME; -- takes 2 seconds and returns more than 1000 records.
    I can use the second query when the user has not specified any filters, and he will still be shown only the first 500 records.
    Now my issue is that if the user changes the sort order to "Last Name", the above query will not work, and I need to change my query to
    SELECT * FROM CUSTOMER WHERE LAST_NAME < 'B' ORDER BY LAST_NAME;
    Similarly, if the user selects State as the sort order:
    SELECT * FROM CUSTOMER WHERE STATE = 'AK' ORDER BY STATE; -- will definitely give me > 500 records.
    I also need to consider whether the user is sorting in ascending or descending order.
    From where can I get this Report Sort Order information in HTML DB?
    If you can provide this information, it will be of great use to all HTML DB folks.
    Regards
    Balaji. C

    Hi!
    What exactly did you find at the mentioned link regarding report performance? It is not about performance issues for reports; it is about navigation from a report row to a page.
    Please, can you tell us more exactly what solved your problem?
    I have the same situation: a query running in about 1 second directly on the database from SQL Developer takes about 55 seconds from the APEX page.
    Thank you.
    Edited by: bustiuci on Nov 15, 2008 9:46 PM

  • Reporting Performance

    Hi All,
    What reporting performance improvements can we do?
    Ciao
    Gigi

    Kumar,
       You recommend using InfoSets to report on ODSs instead of MultiProviders (bullet point #5). What is the performance benefit of this preference?
    Thanks,
    Cynara
    > Hi Gigi,
    >
    > Basically, they do several things:
    >
    > 1) Aggregates
    > 2) OLAP cache
    > 3) Precalculated web templates
    > 4) Use a small amount of result data as the starting point
    > of any query and drill down from there
    > 5) Avoid reporting on an ODS directly; use an InfoSet containing
    > the ODS for reporting
    > 6) If you use exclusion (<>) in reporting, the indexes
    > are not used, so avoid exclusion and use inclusion instead
    > 7) Use read mode "H": read when navigating and expanding
    > hierarchies.
    >
    >
    > TQ
    > Kumar

  • Report Performance degradation

    hi,
    We are using around 16 entities in CRM On Demand R16, which include both default and custom entities.
    Since custom entities are not visible in the historical subject area, we decided to stick with real-time reporting.
    Now the issue is that we have 45 lakh (4.5 million) records in these entities as a whole. We have reports that need to retrieve data across all the entities in one report. Initially we tested the reports with fewer records and performance was not that bad, but it has gradually degraded as we loaded more and more data over time. The reports now take approx. 5-10 minutes and then finally display an error message. In fact, after creating a report structure in Step 1 - Define Criteria and moving to Step 2 - Create Layout, it takes an abnormal amount of time to display. As far as the reports are concerned, we have built them following best practice, except for the "historical subject area" issue.
    Ideally, for best performance, how many records should there be in one entity?
    What could be the other reasons for such performance?
    We are working in a multi-tenant environment.
    Edited by: Rita Negi on Dec 13, 2009 5:50 AM

    Rita,
    Any report built over the real-time subject areas will time out after 10 minutes. Real-time subject areas are really not suited to large reports, and you'll find that running them also degrades application performance.
    Things that degrade performance are:
    * Joins to other dimensions
    * Custom calculations
    * Number of records
    * Number of fields returned
    There are some things that just can't be done in real time. I would look to remove joins to other dimensions (e.g. Accounts/Contacts/Opportunities all in the same report) and apply more restrictive filters (e.g. current week/month) to reduce the number of records required. Alternatively, build a very simple report, extract it to Excel, and modify it from there. Hopefully in R17 this will be added as a feature, but it seems you're stuck till then.
    Thanks
    Oli @ Innoveer

  • Report performance while creating report on BEx

    Hi all!
    I am creating a report in BOE 4.0 on top of a BEx connection as the source. I have developed reports on top of universes in the past, and I know that keeping calculations on the reporting end hampers report performance. Is this the same with BEx? If we follow best practices, is it fair to say we should keep all heavy calculations/aggregations in BEx or the backend for better report performance?
    Please share your opinions based on your experience and knowledge. Any feedback will help! Thanks.

    Hi,
    It is definitely best practice to delegate as many CKFs as possible to the cube, and to put RKFs in the BEx query, and filters too.
    Also, add default values to your variables (this will speed up generation of the BICS transient universe).
    Also, since Patch 2.10 we are seeing some significant performance improvements, reducing 'document initialization' and 'time to prompts' by up to 50% (steps such as these often took 1.5 minutes, even on properly sized systems).
    Also, make sure you have BW corrections like this implemented:  1593802    Performance optimization when loading query views 
    In the BusinessObjects landscape - especially with BI 4.0 - it's all about Sizing and Tuning . Here is your bible the 'sizing companion' guide : http://service.sap.com/~form/sapnet?_SHORTKEY=01100035870000738725&_OBJECT=011000358700000307202011E
    Pay particular attention to BICSChunkSize registry settings
    Also, the  -Xmx JVM Heap Size for the Adaptive Processing Server  that is running the DSL_Bridge service.
    Regards,
    H

  • Report Performance for GL item level report.

    Hi All,
    I have a requirement for a GL line-item report, so I created a data model like 0FI_GL_4 -> DSO -> cube and tested it; everything is fine, but when executed in production the report performance is very bad.
    The report contains document number, GL account, company code, and posting date objects.
    I have decided to do the following to improve reporting performance:
    ·         Create an aggregate on the document and GL characteristics
    ·         Compression
    Can I build the aggregates first and then do the compression?
    Please let me know if I am missing anything.
    Regards,
    Naani.

    Hi Naani,
    First fill the aggregates, then do the compression. Run SAP_INFOCUBE_DESIGNS and check the size of the dimensions; maintain the line-item and high-cardinality settings on large dimensions; and set the cache for the query in RSRT.
    Also try to reduce the navigational attributes in the report. The document below may help you.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6071ed5f-1057-2e10-deb6-d3426fec0219?QuickLink=index&…
    Regards,
    Jagadeesh
