Bad reporting performance after compressing infocubes

Hi,
As I learned, we should compress the requests in our InfoCubes. And since we are using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far the theory...
After getting complaints about degraded reporting performance, we put this theory to the test. I created four InfoCubes (same data model):
A - no compression
B - compression, but no partitioning
C - compression, one partition for each year
D - compression, one partition for each month
After loading 135 requests and compressing the cubes, we get this amount of data:
15.6 million records in each cube
Cube A: 135 partitions (one per request)
Cube B:   1 partition
Cube C:   8 partitions
Cube D:  62 partitions
Then I copied the same query to each cube and tested the performance with it (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
With this selection on each cube, I expected cube D to be fastest, since only one (small) partition holds the relevant data. But reality paints a different picture:
Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
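For reference, here is how I plan to verify partition pruning on the database side (a sketch; the fact table name /BIC/EZTESTD, the key figure column and the filter value are placeholders, not our real objects):

EXPLAIN PLAN FOR
  SELECT SUM(f."/BIC/ZAMOUNT")
    FROM "/BIC/EZTESTD" f
   WHERE f."SID_0FISCPER" = 2005001;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- The Pstart/Pstop columns of the plan show the partitions touched:
-- a single partition number means pruning works; "1 - 62" means the
-- optimizer scans all 62 partitions of cube D.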
Does anyone have an idea what's going wrong? Are there some DB parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
Thanks for your replies,
Knut

Hi Björn,
thanks for your hints.
1. After compressing the cubes I refreshed the statistics in the InfoCube administration.
2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
3. Here we are: all queries are filtered using 0FISCPER. So by adapting the filter I should be able to improve the performance on cube C, but that still does not explain D. I will change the query on cube C and do a retest at the end of this week.
4. The loaded data spans 10 months. The records are nearly equally distributed over these 10 months.
5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years. The 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. number of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months (see the check below the list).
6. Since I tested the cubes one after another without much time in between, the system load should have been nearly the same (on top of that: it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the times mentioned are averages over all runs - and the averages show the same picture as the single runs (cube A is always fastest, cube D always the worst).
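Regarding point 5: the partition layout can be checked directly on the database (a sketch; the E fact table name /BIC/EZTESTD is again a placeholder):

SELECT partition_name, high_value, num_rows
  FROM user_tab_partitions
 WHERE table_name = '/BIC/EZTESTD'
 ORDER BY partition_position;
-- NUM_ROWS is only current when statistics are fresh; it shows directly
-- whether the rows are spread evenly across the partitions.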
Any further ideas?
Greets,
Knut

Similar Messages

  • Incorrect results after compressing a non-cumulative InfoCube

    Hi Gurus,
    In BI 7.0, after compressing the non-cumulative InfoCube it shows incorrect reference points: 2LIS_03_BF (material stock movements) data shows up as the reference points (opening balance) after compressing with "no marker update". Due to this, reporting shows incorrect results. Please suggest.
    Thanks
    Naveen

    Hi Naveen,
    First of all, as I understand it, 2LIS_03_BX is the initial upload of stocks, so there is no need for a delta load for this DataSource; it collects data from the MARC and MARD tables when the stock setup is run in R/3, and you have to load it just once.
    If you load full updates between delta loads of 2LIS_03_BF, you are duplicating material movement data. The idea of compression with marker update is that these movements affect the stock value in the query. That is why you compress the delta init without marker update: those movements are already contained in the opening stock loaded with 2LIS_03_BX, so you don't want them to affect the stock calculation.
    You can refer to the "How to Handle Inventory Management Scenarios in BW" paper for more detail on the topic.
    I hope this helps,
    Regards,
    Carlos.

  • Delete a particular request from an InfoCube after compression in SAP BI 7

    Hi,
    Issue: I want to delete a particular request from an InfoCube after compression in SAP BI 7.0 (not in SAP BW 3.x).
    Can anyone suggest how to do this? Please give me possible solutions.
    Regards,
    EdK...

    Hi,
    You can delete the particular request from the InfoCube using table 'RSICCONT'.
    Go to SE16, enter 'RSICCONT', select your cube and execute.
    It will display the list of requests; you can select the particular request and delete it from the cube.
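    For reference, the same list can be pulled straight from the table (a sketch; the cube name ZSALES is a placeholder and the field names are from memory, so check them in SE11):

    SELECT RNR, TIMESTAMP
      FROM RSICCONT
     WHERE ICUBE = 'ZSALES'
     ORDER BY TIMESTAMP DESC;
    -- RNR is the request number. Deleting its RSICCONT entry only removes
    -- the request administration entry; on a compressed cube the data itself
    -- can no longer be removed by request, so use this with care.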
    Edited by: Suman Reddy.Vuyyuru on Mar 18, 2010 12:38 PM

  • EP - bad performance after restart

    Hi Guys,
    I have some trouble with EP performance after a restart.
    When a user logs into EP after a restart, loading the pages takes a really long time.
    After some time the pages are cached and the performance is OK.
    Is there some way to overcome this drawback?
    Some initial caching would be great, but I don't know how to set it up.
    Could you help me solve this issue?
    Thank you and Best Regards,
    Dan
    The reward points are waiting

    Hi Daniel,
    Please go through the following links:
    https://forums.sdn.sap.com/click.jspa?searchID=1558391&messageID=1150316
    https://forums.sdn.sap.com/click.jspa?searchID=1558433&messageID=3078540
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d164c5ee-0901-0010-4fbf-d0856b5c8a84
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/22baa590-0201-0010-26a3-f1cfa2469973
    Thanks,
    Cheers.

  • Report Performance degradation

    hi,
    We are using around 16 entities in CRM On Demand R16, which includes both default and custom entities.
    Since custom entities are not visible in the historical subject area, we decided to stick to real-time reporting.
    Now the issue is, we have a total of 45 lakh (4.5 million) records across these entities. We have reports where we need to retrieve data across all the entities in one report. Initially we tested the reports with a smaller number of records and the performance was not that bad, but it has gradually degraded as we loaded more and more data over time. The reports now take approx. 5-10 min and then finally display an error message. In fact, after creating a report structure in Step 1 - Define Criteria and moving to Step 2 - Create Layout, it takes an abnormal amount of time to display. As far as the reports are concerned, we have built them using best practice except for the "Historical Subject Area" issue.
    Ideally, for best performance, how many records should there be in one entity?
    What could be the other reasons for such performance?
    We are working in a multi-tenant environment.
    Edited by: Rita Negi on Dec 13, 2009 5:50 AM

    Rita,
    Any report built over the real-time subject areas will time out after 10 minutes. Real-time subject areas are really not suited for large reports, and you'll find that running them also degrades application performance.
    Things that will degrade performance are:
    * Joins to other dimensions
    * Custom calculations
    * Number of records
    * Number of fields returned
    There are some things that just can't be done in real time. I would look to remove joins to other dimensions, e.g. Accounts/Contacts/Opportunities all in the same report. Apply more restrictive filters, e.g. current week/month, to reduce the number of records required. Alternatively, build a very simple report, extract to Excel and modify from there. Hopefully this will be added as a feature in R17, but it seems you're stuck till then.
    Thanks
    Oli @ Innoveer

  • Report Performance for GL item level report.

    Hi All,
    I have a requirement for a GL line item report, so I have created a data model like 0FI_GL_4 -> DSO -> cube and tested it; everything is fine, but when executed in production the report performance is very bad.
    The report contains document number, GL account, company code and posting date objects.
    I have decided to do the following to improve reporting performance:
    ·         Create an aggregate on the document and GL characteristics
    ·         Compression
    Can I fill the aggregates first and then do the compression?
    Please let me know if I am missing anything.
    Regards,
    Naani.

    Hi Naani,
    First fill the aggregates, then do compression. Run SAP_INFOCUBE_DESIGNS and check the size of the dimensions; maintain "line item" and "high cardinality" for the big dimensions. Set the cache for the query in RSRT,
    and try to reduce navigational attributes in the report. The document below may help you.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6071ed5f-1057-2e10-deb6-d3426fec0219?QuickLink=index&…
    Regards,
    Jagadeesh

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
    We use an Oracle 9.2.0.6 DB on Win XP Pro. The application (.NET v1.1) uses ODP.NET. All PKs of the tables are GUIDs, represented in Oracle as RAW(16) columns.
    When testing with mass data we increasingly see a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index that is also on a RAW(16) column (both are standard B*tree indexes). A PerfStat report shows much activity on the index tablespace.
    When I analyze the related table and its indexes, I see a very high clustering factor.
    Is there a way to improve the insert performance in this case? Use another type of index? Generally avoid indexed RAW columns?
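    For reference, the clustering factor can be read from the dictionary, and a reverse-key rebuild is a commonly suggested mitigation for random GUID keys (a sketch; the table and index names are made up):

    SELECT index_name, clustering_factor, leaf_blocks
      FROM user_indexes
     WHERE table_name = 'ORDERS_GUID';

    -- Rebuilding the secondary index as a reverse-key index spreads the
    -- random GUID values across leaf blocks, which can reduce buffer-busy
    -- contention on inserts (at the cost of index range scans):
    ALTER INDEX orders_guid_ix REBUILD REVERSE;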
    Please help.
    Daniel

    Hi
    After my last tests I conclude the following:
    The query returns 1-30 records.
    Test 1: Using Form Builder
    - Execution time 7-8 seconds
    Test 2: Using JDeveloper/TopLink/EJB 3.0/ADF and Oracle AS 10.1.3.0
    - Execution time 25-27 seconds
    Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
    - Execution time 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don't see any improvement in the execution time of the query.
    Thank you
    Thanos

  • BI Reports Performance

    Hi All,
    Very Good Morning......
    My client has already implemented a production server in BI; right now we are getting problems with BI reports: opening and running them takes a long time, around 10 to 15 minutes...
    At my client the following has not been done:
    a. Aggregates are not created
    b. The cubes are not yet compressed
    Could these be the reasons?
    I have also checked RSRT; the cache monitor shows Local 100 MB and Global 200 MB, which is fine.
    I am not sure what else I can do, and this issue is very important to my client...
    Please provide me good solutions ASAP.
    Thanks & Regards,
    Vijaay.....

    Hi,
    Do the following things.
    1. Increase hardware resources.
    2. Check network traffic.
    3. Create aggregates, indexes and DB statistics on the cubes and DSOs.
    4. Compress the InfoCubes.
    5. Check the cache settings in RSRT.
    See this good help on performance:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/46/9fa4c7457f2c07e10000000a114a6b/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/06/b5f8926ba22b45bc9eaa589f1c835b/content.htm
    Thanks
    Reddy

  • What reporting performance improvements can we make?

    What reporting performance improvements can we make?

    Hi,
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in the rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    Try table RSDDSTATS to get the statistics (see the sketch below).
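    A quick way to pull the raw figures from it (a sketch; the field names are from memory, so check them in SE11 first):

    SELECT INFOCUBE, QTIMEDB, QTIMEOLAP, QDBSEL, QDBTRANS
      FROM RSDDSTATS
     WHERE INFOCUBE = 'ZSALES';
    -- QDBSEL vs. QDBTRANS (records selected on the DB vs. records
    -- transferred) gives the aggregation ratio; a high ratio is a strong
    -- hint that an aggregate would pay off.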
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the results faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the records transferred to the front end vs. the records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the cube's performance metric, measure the query runtime.
    3. To check the performance of the aggregates, see the columns VALUATION and USAGE in the aggregate maintenance.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The VALUATION column rates the aggregate design with plus and minus signs. The more plus signs, the better: "+++++" means the aggregate is potentially very useful (good compression ratio, accessed often, satisfies many queries). The more minus signs, the worse the evaluation: "-----" means the aggregate is just overhead and can potentially be deleted.
    The USAGE column shows how often the aggregate has actually been used by queries.
    Together, these two columns let you check the performance of an aggregate.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache". This will tell you whether any aggregates were hit while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; since you select from only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    Generate Report in RSRT  
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Hope this helps.
    Thanks,
    JituK

  • Bad NCP Performance vs CIFS

    Our environment:
    OES 11 SP1
    Connect via 10Gbit Fibre (direct attached cable to switch)
    Clients WINDOWS 7
    Connect via 1 Gbit
    We see bad NCP performance on the same volume vs CIFS.
    While copying a file from the workstation to a volume via NCP we get about ~45 MB/s.
    While copying a file from the workstation to the same volume via CIFS we get about ~100 MB/s.
    NCP can be slower, but that's a huge difference; any ideas on how to investigate further are welcome.
    regards

    Originally Posted by mrosen
    On 29.10.2013 21:16, Bob-O-Rama wrote:
    >
    > No idea.
    >
    > I know people have reported performance differences, in my experience
    > its not been substantial. Perhaps we have some super awesome network
    > or something. ;-) Which we do... because "network" is in my job
    > title.
    Does that mean you see ~100 MB/s via NCP on a GB connection?
    CU,
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
    Finally found the time to work a little on this: dedicated boxes (pretty old but decent hardware) on a dedicated LAN, no traffic apart from what I've triggered. The tests are far from real-life operations, as it's just about copying a 3 GB ISO back and forth and doing something similar with "LAN Speed Test Lite" (totusoft.com). Observations as follows:
    On XP the NCP operations outperform CIFS operations across the board. I get a constant 70 MB/s on both reads and writes via NCP; CIFS writes at about 60 and reads at 50 MB/s (server and workstations rebooted after each test).
    On W7, NCP performance is about the same as on XP; CIFS reads are slightly below 70 MB/s, but CIFS writes are constantly around 105 MB/s.
    There are some pretty confusing things I wonder about:
    In the past, the FILE_COMMIT setting on the server just decided whether or not to honor client-initiated commit requests (did this change?). Copying a file with Windows Explorer never triggers a commit, so neither client nor server settings should be a factor here. In fact, the server setting really has no effect, but on W7 the client setting really makes a difference (though no NCP 59 verb leaves the client). The 70 MB/s via NCP on W7 could only be reached with FILE_COMMIT enabled on the client. While I have set commit on and oplocks off all my life anyway, I'd be interested in how setting commit off could drop transfer rates to 60 MB/s without a commit request on the wire. Obviously there are effects other than requesting a commit on a buffer-flush call made by an application. Maybe something Alan would like to share with us...
    What makes the W7 CIFS writes significantly faster? I don't think caching or lazy writes are a factor here, as tests with files larger than the server's or workstations' memory lead to identical results, i.e. continuous data flow on the network and a maximum of 2 seconds of I/O on the drives after the copy has finished. There is no difference when calling "sync" or "fsync" after NCP/CIFS operations. Not sure how this integrates with NSS, though.
    Write caches on the hard drives and the array controller are disabled, BTW.
    Finally: does anyone have an idea what the FILE_FLUSH parameter in cifs.conf is about? Setting it to yes or no didn't make a difference for these tests; just being curious.

  • 0IC_C03 issue: after compression the data is still in the F table, what is happening?

    Dears,
    After I performed the compression on InfoCube 0IC_C03, all the queries on this InfoCube either don't execute or run with horribly low performance.
      Any suggestions are appreciated.
    B.R
    Gerald

    Hi Gerald,
    I think there is no connection between the compression of requests and low query performance. In fact, to my knowledge queries run faster after compression. Do some other checks to find the cause.
    Regards,
    Krish

  • "get all new data request by request" after compressing source Cube

    Hi
    I need to transfer data from one InfoCube to another and use the delta request by request.
    I tried this when the data on the source InfoCube was not compressed, and it worked.
    Afterwards some requests were compressed, and since then the delta request by request transfers all the information to the target InfoCube in only one request.
    Do you know if this is normal behavior?
    Thanks in advance

    Hi
    The objective of compression is to delete all the requests in your F table and move the data to the E table. After compression you no longer have the data request by request.
    This is the reason you are getting all the data in a single request.
    "Get data request by request" only works if you don't compress the data in your cube (see the sketch below).
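    You can see this directly on the database (a sketch; the cube name ZSALES, and therefore the table names, are made up):

    -- Before compression the rows sit in the F table, request by request;
    -- compression merges them into the E table and clears the request ID.
    SELECT COUNT(*) FROM "/BIC/FZSALES";  -- shrinks with each compression
    SELECT COUNT(*) FROM "/BIC/EZSALES";  -- grows accordingly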
    If you want to know about compression, check the below one
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c035d300-b477-2d10-0c92-f858f7f1b575?QuickLink=index&overridelayout=true
    Regards,
    Venkatesh.

  • Crystal Report Performance for dbf files.

    We have a report which was designed 5-6 years ago. This report has 4 linked Word docs and a dbf file as its data source. The report also has 3 subreports. The field size in the dbf is 80 chars and a couple of fields are memo fields. Report performance was excellent before we migrated to Crystal Reports 2008. After CR 2008 the system changed and it is suddenly really slow. We have not changed our reports, so that should not influence performance. When the user presses the preview button in the printing tool window, control is transferred to Crystal. Something has happened inside the black box of Crystal (IMO). The dll we have is crdb_p2bxbse.dll 12.00.0000.0549. The issue seems to be in the xBase driver (it is not possible to use the latest version of crdb_p2bxbse.dll with dBASE files containing memo fields).

    Hi Kamlesh,
    Odd that the Word doc is opened before the RPT; I would think the RPT would need to be opened first so it sees that the doc also needs to be opened. Once it has been loaded, the connection can be closed; CR embeds the DOC in the RPT, so the original is no longer required.
    Also, you should upgrade to Service Pack 3; it appears you are still using the original release. SP1 is required first, but then you should be able to skip SP2 and install SP3.
    You did not say which earlier version of CR you were using. After CR 8.5 we went to full UNICODE support, at which time the report designer was completely rebuilt and the database engines were removed from the EXE and made into separate DLLs. OLE objects also changed: you can now use a formula and a database field to point to linked objects, so they can be refreshed at any time. Previously they were only refreshed when the report was opened.
    You may want to see if linking them using a database field would speed up the process. Other than that, I can't suggest any other workaround.
    Thank you
    Don

  • 2004s Web report performance is not good, though that of 3.x Web is OK.

    Hi,
    I feel 2004s Web report performance is bad, though the same query is no problem in 3.x Web; it is also worse than the BEx Analyzer.
    This query returns more than 1,000 records, and all queries that return many records show the same bad performance.
    Of course there can be many reasons for bad performance; please tell me how you solved a problem like this.
    The SIDs of EP and BI are different here.
    CPU is not consumed while the 2004s Web report is executed.
    And I have disabled the virus scan for this Web report...
    Kind regards,
    Masaaki

    It is bad; I am sure it's down to the new .NET and Java based technology. Aggregates are a way forward, though from what I've heard of the BI Accelerator, that is the real way forward.

  • Database migrated from Oracle 10g to 11g: Discoverer report performance issue

    Hi All,
    We are now seeing a Discoverer report performance issue: the report keeps on running since the database was upgraded from 10g to 11g.
    On database 10g the report works fine, but the same report does not work in 11g.
    I have changed the query: I passed the date format TO_CHAR(..., 'DD-MON-YYYY') and removed the NVL and TRUNC functions from the existing query.
    The report now works fine directly against the 11g database, but when I use the same query in Discoverer it does not work and the report keeps on running.
    Please advise.
    Regards,

    Please post the exact OS, database and Discoverer versions. After the upgrade, have statistics been updated (see the sketch after the notes below)? Have you traced the Discoverer query to determine where the performance issue is?
    How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
    How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
    Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
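    If statistics have not been regathered since the upgrade, that is the first thing to try (a sketch; the schema name EUL_US is a placeholder for your reporting schema):

    -- Regather optimizer statistics for the reporting schema on 11g
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'EUL_US', cascade => TRUE);
    END;
    /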
    HTH
    Srini
