Report Performance with Hierarchies

Hi,
How can we improve query performance with hierarchies? We have to do a lot of navigation in the query, and the data volume is very big.
Thanks,
P G

Hi,
Check these:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Query Performance – Is "Aggregates" the way out for me?
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
° The OLAP cache is architected to store query result sets and to give all users access to those result sets.
If a user executes a query, the result set for that query's request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent request can be filled from the result set already stored in the OLAP cache.
In this way, a query request filled from the OLAP cache is significantly faster than one that has to fetch its result set from the database.
° The indexes that are created in the fact table for each dimension allow you to easily find and select the data.
see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
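On Oracle, these dimension indexes are typically bitmap indexes on the fact table's dimension-key columns; conceptually something like this (a hedged sketch with placeholder table, column and index names, not the exact DDL BW generates):
create bitmap index "/BIC/FCUBE~020" on "/BIC/FCUBE" (key_cube2);
One bitmap index per dimension key lets the optimizer combine several dimension restrictions efficiently in a star query.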
° When you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
This (besides giving you the possibility to manage/delete single requests) increases the volume of data and reduces performance in reporting, as the system has to aggregate over the request ID every time you execute a query. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
This function is critical: compressed data can no longer be deleted from the InfoCube by request ID, so you must be absolutely certain that the data loaded into the InfoCube is correct.
see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
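Conceptually, compression aggregates the request ID away while moving data from the F (uncompressed) to the E (compressed) fact table, roughly like this (a hedged sketch with placeholder table and column names, not the actual statement BW executes):
insert into e_fact (calmonth, material, amount)
select calmonth, material, sum(amount)   -- rows that differed only by request ID merge into one
  from f_fact
 where request_id <= :last_request_to_compress
 group by calmonth, material;
Fewer fact table rows means less data to aggregate at query time.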
° By using partitioning you can split up the whole dataset of an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube.
see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
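To illustrate (again a hedged sketch with placeholder names): partitioning the E fact table by 0CALMONTH has roughly the effect of an Oracle range partition like the one below, so a query restricted to one month only has to read that month's partition:
create table "/BIC/ECUBE" (
  sid_0calmonth number,   -- time characteristic used as the partitioning key
  key_cube1     number,   -- dimension key
  amount        number    -- key figure
)
partition by range (sid_0calmonth) (
  partition p200501 values less than (200502),
  partition p200502 values less than (200503),
  partition pmax    values less than (maxvalue)
);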
Hope it helps!
Thank you,
dst

Similar Messages

  • Report Performance with Bind Variable

    Getting some very odd behaviour with a report in APEX v 3.2.1.00.10.
    I have a complex query that takes 5 seconds to return via TOAD, but takes from 5 to 10 minutes in an APEX report.
    I've narrowed it down to one particular bind. If I hard-code the date, it returns in 6 seconds, but if I let the date be passed in from a parameter it takes 5+ minutes again.
    Relevant part of the query (an inline view) is:
    ,(select rglr_lect lect
            ,sum(tpm) mtr_tpm
            ,sum(enrols) mtr_enrols
        from ops_dash_meetings_report
       where meet_ev_date between to_date(:P35_END_DATE,'DD/MM/YYYY') - 363
                              and to_date(:P35_END_DATE,'DD/MM/YYYY')
       group by rglr_lect) RPV
    I've tried replacing the "to_date(:P35_END_DATE,'DD/MM/YYYY') - 363" with another item which is populated with the date required (and verified by checking session state). If I replace the :P35_END_DATE with an actual date the performance is fine again.
    The weird thing is that a trace file shows me exactly the same Explain Plan as the TOAD Explain where it runs in 5 seconds.
    Another odd thing is that another page in my application has the same inline view and doesn't hit the performance problem.
    The trace file did show some control characters (circumflex M) after each line of this report's query, whereas these weren't anywhere else in the trace queries. I wondered if there was some sort of corruption in the source?
    No problems due to pagination, as the result set is only 31 records and all are being displayed.
    Really stumped here. Any advice or pointers would be most welcome.
    Jon.

    Don't worry about the Time column; the cost and cardinality are more important to see whether the CBO is making different decisions for whatever reason.
    Remember that the explain plan shows the expected execution plan and a trace shows the actual execution plan. So what you want to do is compare the query with bind variables from an APEX page trace to a trace from TOAD (or SQL*Plus or whatever). You can do this outside APEX like this...
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 1';
    -- enter and run your SQL statement...
    ALTER SESSION SET sql_trace = FALSE;
    This will create a trace file in the directory returned by...
    SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
    ...which you can use tkprof to format.
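    For example (the trace and output file names below are placeholders; pick up the actual trace file name from user_dump_dest):
    tkprof orclprd_ora_12345.trc bindtest.prf sys=no sort=exeela
    Here sys=no suppresses recursive SYS statements and sort=exeela lists the slowest statements first, which makes the bind-variable trace easy to compare against the hard-coded one.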
    I am assuming that you're not going over DB links or anything else slightly unusual?
    Cheers
    Ben

  • BEx Report Performance with selection-screen input

    Hello Gurus,
    My BEx report works fine when it is run without the PLANT filter in the selection screen, but when it is run with a plant in the selection screen, it runs forever.
    Please let me know what I need to do to improve the performance.
    Saleem.

    Hi Saleem, just a few thoughts:
    1. Check the M-table in RSD1 for 0PLANT. In the table view, edit any blank or null values. Run the same restrictions you apply in the query at InfoProvider level > Display Data. If there's any lapse, you can judge where exactly the problem lies.
    2. If you are using an InfoCube and your master data is >20% of the fact table, you can declare the InfoObject as a 'Line Item Dimension'.
    3. Create variants, especially if you are running the query for the same set of data. Try variable preselection: you can restrict both the values and the variables at the filter level. When you execute, the values will be visibly pre-selected in the selection screen.
    4. As discussed in previous messages, running an SQL trace using RSRT may prove useful.

  • SLOW report performance with bind variable

    Environment: 11.1.0.7.2, Apex 4.01.
    I've got a simplified report page where the report runs slowly compared to running the same query in SQL Developer. The report region is based on a PL/SQL function returning a query. If I use a bind variable in the query inside APEX it takes 13 seconds to run, but if I hard-code a string it takes only a few hundredths of a second. The query returns one row from a table which has 1.6 million rows. Statistics are up to date, and the columns in the joins and where clause are indexed.
    I've run traces using p_trace=YES from Apex for both the bind variable and hard coded strings. They are below.
    The sqldeveloper explain plan is identical to the bind variable plan from the trace, yet the query runs in 0.0x seconds in sqldeveloper.
    What is it about bind variable syntax in Apex that is causing the bad execution plan? Apex Bug? 11g bug? Ideas?
    tkprof output from Apex trace with bind variable is below...
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM(:P71_SEARCH_SOURCE1)))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          1         27           0
    Fetch        2     13.15      13.22      67694      72865          0           1
    total        4     13.15      13.23      67694      72866         27           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=72869 pr=67694 pw=0 time=0 us cost=29615 size=14255040 card=178188)
          1   FILTER  (cr=72869 pr=67694 pw=0 time=0 us)
          1    HASH JOIN RIGHT SEMI (cr=72865 pr=67694 pw=0 time=0 us cost=26308 size=14255040 card=178188)
          1     INDEX FAST FULL SCAN IDX$$_0A300001 (cr=18545 pr=13379 pw=0 time=0 us cost=4993 size=2937776 card=183611)(object id 68485)
    1696485     TABLE ACCESS FULL PERSONS (cr=54320 pr=54315 pw=0 time=21965 us cost=14958 size=108575040 card=1696485)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     HASH JOIN (RIGHT SEMI)
          1      INDEX   MODE: ANALYZED (FAST FULL SCAN) OF
                     'IDX$$_0A300001' (INDEX)
    1696485      TABLE ACCESS   MODE: ANALYZED (FULL) OF 'PERSONS' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       1276        0.00          0.16
      db file sequential read                       812        0.00          0.02
      direct path read                             1552        0.00          0.61
    ********************************************************************************
    Here's the tkprof output with a hard-coded string:
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM('0b')))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.02       0.04          0          0          0           0
    Execute      1      0.00       0.00          0          0         13           0
    Fetch        2      0.00       0.00          0          8          0           1
    total        4      0.02       0.04          0          8         13           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=10 pr=0 pw=0 time=0 us cost=9 size=80 card=1)
          1   FILTER  (cr=10 pr=0 pw=0 time=0 us)
          1    NESTED LOOPS  (cr=8 pr=0 pw=0 time=0 us)
          1     NESTED LOOPS  (cr=7 pr=0 pw=0 time=0 us cost=8 size=80 card=1)
          1      SORT UNIQUE (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1       TABLE ACCESS BY INDEX ROWID PERSON_SYSTEMS (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1        INDEX RANGE SCAN IDX_PERSON_SYSTEMS_SOURCE_KEY (cr=3 pr=0 pw=0 time=0 us cost=3 size=0 card=1)(object id 68561)
          1      INDEX UNIQUE SCAN PK_PERSONS (cr=3 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 68506)
          1     TABLE ACCESS BY INDEX ROWID PERSONS (cr=1 pr=0 pw=0 time=0 us cost=2 size=64 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     NESTED LOOPS
          1      NESTED LOOPS
          1       SORT (UNIQUE)
          1        TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                       'PERSON_SYSTEMS' (TABLE)
          1         INDEX   MODE: ANALYZED (RANGE SCAN) OF
                        'IDX_PERSON_SYSTEMS_SOURCE_KEY' (INDEX)
          1       INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_PERSONS'
                      (INDEX (UNIQUE))
          1      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                     'PERSONS' (TABLE)

    Patrick, interesting insight. Thank you.
    The optimizer must be peeking at my bind variables with its eyes closed. I'm the only one testing and I've never passed %anything as a bind value. :)
    Here's what I've learned since my last post:
    I don't think that sqldeveloper is actually using the explain plan it says it is. When I run explain plan in sqldeveloper (with a bind variable) it shows me the exact same plan as Apex with a bind variable. However, when I run autotrace in sqldeveloper, it takes a path that matches the hard coded values, and returns results in half a second. That autotrace run is consistent with actually running the query outside of autotrace. So, I think either sqldeveloper isn't really using bind variables, OR it is using them in some other way that Apex does not, or maybe optimizer peeking works in sqldeveloper?
    Using optimizer hints to tweak the plan helps. I've tried both /*+ FIRST_ROWS */ and /*+ index(ps pk_persons) */ and both drop the query to about a second. However, I'm loath to use hints because of the very dynamic nature of the query (and Tom Kyte doesn't like them either). The hints may end up hurting other variations on the query.
    I also tested the query by wrapping it in a select count(1) from ([long query]) and testing the performance in sqldeveloper and in Apex. The performance in that case is identical with both bind variables and hard coded variables for both Apex and SqlDeveloper. That to me was very interesting and I went so far as to set up two bind variable report regions on the same page. One region wrapped the long query with select count(1) from (...) and the other didn't. The wrapped query ran in 0.01 seconds, the unwrapped took 15ish seconds with no other optimizations. Very strange.
    To get performance up to acceptable levels I have changed my function returning query to:
    1) Set the equality operator to "=" for values without wildcards and "like" for user input with wildcards. This makes a HUGE difference IF no wildcard is used.
    2) Insert a /*+ FIRST_ROWS */ hint when users choose the column that requires the sub-query. This obviously changes the optimizer's plan and improves query speed from 15 seconds to 1.5 seconds, even with wildcards.
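    A minimal sketch of points 1) and 2) as a function returning a query (the function name is illustrative and the table names are taken from the trace above; this is not the actual application code):
    create or replace function build_person_query (p_search in varchar2)
      return varchar2
    is
      -- '=' permits an exact index probe; use LIKE only when the user typed a wildcard
      l_op varchar2(4) := case
                            when instr(p_search, '%') > 0 or instr(p_search, '_') > 0 then 'like'
                            else '='
                          end;
    begin
      return 'select /*+ FIRST_ROWS */ p.master_id, p.status '
          || '  from persons p '
          || ' where p.person_id in (select ps.person_id from person_systems ps '
          || '                        where ps.source_key ' || l_op
          || ' ltrim(rtrim(:P71_SEARCH_SOURCE1))) '
          || ' order by 1';
    end build_person_query;
    /
    Note that the user input stays a bind variable either way, so building the operator into the query text adds no injection risk.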
    I will NOT be hard coding any user supplied values in the query string. As you can probably tell by the query, this is an application where sql injection would be very bad.
    Jeff, regarding your question about "like '%' || :P71_SEARCH_SOURCE1 || '%'". I've found that putting wildcards around values, particularly at the beginning will negate any indexing on the column in question and slows performance even more.
    I'm still left wondering if there isn't something in Apex that is breaking the optimizer "peeking" that Patrick describes. Perhaps something in the way it switches contexts from apex_public_user to the workspace schema?

  • Query reporting performance with exit variables

    Hi friends,
    In the reporting I am making use of around three customer exit variables.
    Basically, I am finding out the role of the logged-in user and his org unit. Straightforward ABAP!
    Would this help me improve performance?
    Thanks,

    Shreya,
    Customer exit variables will not add to the performance. In fact, they take a considerable amount of time depending on their complexity.
    As long as the performance is not bad, the requirement has to be met, and there is no other option than customer exit variables, it should be OK.
    -Doodle

  • Interactive Report Performance With Conditional Link

    Apex 3.2
    I have an interactive report.
    The underlying SQL would return 127,000 rows.
    The SQL is:
    select
      lde.ods_system,
      lde.ldekey,
      msg.sendersystem, 
      msg.messagetype,
      msg.messageversion,
      msg.msgseqnumber,
      msg.alternatekey,
      msg.crudmarker,
      msg.clrbookdate,
      msg.clrbookresult,
      lower('udf_'||msg.messagetype) button,
      lde.ldekey||'.'||msg.alternatekey||'.'||msg.msgseqnumber udm_key
    from
      clr_esbmessageheader msg,
      clr_adm_systemmessage adm,
      udm_lde lde
    where
      adm.ldeid = lde.ldeid and
      msg.sendersystem = adm.system and
      msg.messagetype = adm.messagetype and
      msg.messageversion = adm.messageversion and
      msg.receiversystem = 'SCIPS'
    order by msg.clrbookdate desc
    This report only takes 1 second to display.
    I need to add a conditional link to another page, so I used
    case
      when lower('udf_'||msg.messagetype) = 'udf_distreceipt' then
        '<a class="type" href="' || apex_util.prepare_url('f?p='||:APP_ID||':52:'||:APP_SESSION||'::'||:DEBUG||':RIR'||':IR_MSG_KEY,P52_PG:'|| lde.ldekey||'.'|| msg.alternatekey ||'.'|| msg.msgseqnumber ||','|| 50, null, 'SESSION') || '" title="Go to udf_distreceipt Report">udf_distreceipt</a>'
      else 'no link'
    end table_link
    The SQL seems to be OK, because the report accepted it, but selecting the new column and saving the report takes forever (over 2 minutes).
    Now the report takes over 2 minutes to run, and I still need to add more conditions.
    Have I coded the link incorrectly?
    Gus

    Hi Gus,
    Are you wanting to put the link in the query for a specific reason?
    I had to do a similar thing in the past and just completed the column link section for the column.
    Why not just have the following in the query:
    case
      when lower('udf_'||msg.messagetype) = 'udf_distreceipt' then 'udf_distreceipt'
      else null
    end table_link
    Then do the linking using the column link section:
    You would specify your link text as #TABLE_LINK#, which should then be conditionally displayed due to the case statement; then add in all the page items and values to pass across using a normal link column.
    Thanks
    Paul

  • Performance with Crystal Reports (based on BW queries)

    Hi,
    I've created some Crystal reports based on BW queries, and I'm really interested in their performance. The BW queries I created are quick (with neither free characteristics nor hierarchies), but when I use them in a Crystal report, I see a loss of time in the data extraction (BW --> Crystal), in Crystal's management of the layout, and then in the publication to BOE.
    Being able to use Crystal on BW queries is very useful, but if the response times are too long, that's not good...
    Is there any customizing to perform to get better performance? Has any analysis already been done from a performance point of view?
    If you have any best practice or piece of advice to shorten the response time in Crystal reports, I'd be really interested.
    Thanks for your replies,
    Best regards
    Jonathan

    Hi,
    there are many SDN threads and forums dealing with the integration of BW and Crystal.
    You should have a look at the forums created by Ingo Hilgefort (SAP expert on the BW-BO integration).
    You can start with this one:
    /people/ingo.hilgefort/blog/2008/09/17/businessobjects-and-sap--installation-and-configuration-part-1-of-4
    Best regards,
    Jonathan

  • Apex report performance is very poor with apex_item.checkbox row selector.

    Hi,
    I'm working on a report that includes some functionality to select multiple records for further processing.
    The report is based on a view that contains a couple of hundred thousand records.
    When I make a selection from this view in SQL*Plus, the performance is acceptable, but the APEX report based on the same view performs very poorly.
    I've noticed that when I omit apex_item.checkbox from my report query, performance is on par with SQL*Plus (a factor of 10 or so quicker).
    The explain plan appears to be the same with or without the checkbox function in the select.
    My query is:
    select apex_item.checkbox(1, tan_id) "Select"
         , brt_id
         , tan_id
         , message_id
         , conversation_id
         , action
         , to_acn_code
         , information
         , brt_created
         , tan_created
    from (select brt.id brt_id, -- view query
                 max(tan.id) tan_id,
                 brt.message_id,
                 brt.conversation_id,
                 brt.action,
                 tan.to_acn_code,
                 tan.information,
                 brt.created brt_created,
                 tan.created tan_created
          from (select brt_id, id, to_acn_code, information, created
                from xxcjib_transactions
                where tan_type = 'DELIVER' and status = 'FINISHED') tan,
               xxcjib_berichten brt
          where brt.id = tan.brt_id
          group by brt.id,
                   brt.message_id,
                   brt.conversation_id,
                   brt.action,
                   tan.to_acn_code,
                   tan.information,
                   brt.created,
                   tan.created)
    What could be the reason for the poor performance of the apex report?
    And is there another way to select multiple report records without the apex_item.checkbox function?
    I'm using APEX 3.2 on an Oracle 10g database.
    Thanks,
    Niels Ingen Housz
    Edited by: user11986529 on 19-mrt-2010 4:06

    Thanks for your reply.
    Unfortunately, changing the pagination doesn't make much of a difference in this case.
    Without the checkbox the query takes 2 seconds; with the checkbox it takes well over 30 seconds.
    The second report region on this page, based on another view, seems to perform reasonably well with or without the checkbox.
    It has about the same number of records, but a different view query.
    There are also a couple of filter items in the where clause of the report queries (the same for both reports) based on date and acn_code, and both reports have a select-list item displayed in their regions based on a simple LOV. These filter items don't seem to influence the performance.
    I have also recreated the report on a separate page without any other page items or where clause, and the same thing occurs.
    With the checkbox it's very, very slow (more like 20 times slower); without it, the report performs well.
    And another thing: when I run the page with debug on, I don't see the actual report query:
    0.08: show report
    0.08: determine column headings
    0.08: activate sort
    0.08: parse query as: APEX_CMA_ONT
    0.09: print column headings
    0.09: rows loop: 30 row(s)
    and then the region is displayed.
    I am using database links in the views, by the way.
    Edited by: user11986529 on 19-mrt-2010 7:11
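    On the question of avoiding apex_item.checkbox altogether: one workaround sometimes used for this symptom is to build the checkbox HTML with plain string concatenation, which removes the per-row switch into the PL/SQL function. A hedged sketch (it assumes apex_item.checkbox(1, ...) renders a checkbox named f01, which the raw HTML imitates; transactions_v is a placeholder for the view above):
    select '<input type="checkbox" name="f01" value="' || t.tan_id || '" />' "Select"
         , t.brt_id
         , t.tan_id
      from transactions_v t
    The trade-off is that you bypass the APEX API, so you must keep the generated markup in line with what your page processing expects.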

  • Report Performance degradation

    hi,
    We are using around 16 entities in CRM On Demand R16, which include both default as well as custom entities.
    Since custom entities are not visible in the historical subject area, we decided to stick to real-time reporting.
    Now the issue is, we have a total of 45 lakh (4.5 million) records in these entities as a whole. We have reports where we need to retrieve data across all the entities in one report. Initially we tested the reports with fewer records and the report performance was not that bad, but it has gradually degraded as we loaded more and more data over a period of time. The reports now take approx. 5-10 minutes and then finally display an error message. In fact, after creating a report structure in Step 1 - Define Criteria and moving to Step 2 - Create Layout, it takes an abnormal amount of time to display. As far as the reports are concerned, we have built them using best practice except for the "Historical Subject Area Issue".
    Ideally, for best performance, how many records should there be in one entity?
    What could be the other reasons for such performance?
    We are working in a multi-tenant environment.
    Edited by: Rita Negi on Dec 13, 2009 5:50 AM

    Rita,
    Any report built over the real-time subject areas will time out after a period of 10 minutes. Real-time subject areas are really not suited to large reports, and you'll find running them also degrades the application performance.
    Things that will degrade performance are:
    * Joins to other dimensions
    * Custom calculations
    * Number of records
    * Number of fields returned
    There are some things that just can't be done in real time. I would look to remove joins to other dimensions, e.g. Accounts/Contacts/Opportunities all in the same report. Apply more restrictive filters, e.g. current week/month, to reduce the number of records required. Alternatively, have a very simple report, extract to Excel and modify from there. Hopefully in R17 this will be added as a feature, but it seems like you're stuck till then.
    Thanks
    Oli @ Innoveer

  • Report performance while creating report on BEx

    Hi all!
    I am creating a report in BOE 4.0 on top of a BEx connection as the source. I have developed reports on top of universes in the past, and I know that if we keep calculations on the reporting end it hampers report performance. Is this the same case with BEx? If we are following best practice, is it OK to say that we should keep all heavy calculations/aggregation in BEx or the backend for better report performance?
    Can you guys please provide your opinions based on your experience and knowledge? Any feedback will help! Thanks.

    Hi,
    It is definitely best practice to delegate a maximum of CKFs to the Cube where possible, to put RKFs in the BEx query, and filters too.
    Also, add default values to your variables (this will speed up generation of the BICS transient universe).
    Also, since Patch 2.10, we are seeing some significant performance improvements reducing 'document initialization' and 'time to prompts' by up to 50% (steps such as these often took 1.5 minutes, even on properly sized systems).
    Also, make sure you have BW corrections like this implemented: 1593802 - Performance optimization when loading query views.
    In the BusinessObjects landscape - especially with BI 4.0 - it's all about sizing and tuning. Here is your bible, the 'sizing companion' guide: http://service.sap.com/~form/sapnet?_SHORTKEY=01100035870000738725&_OBJECT=011000358700000307202011E
    Pay particular attention to the BICSChunkSize registry settings,
    and also to the -Xmx JVM heap size for the Adaptive Processing Server that is running the DSL_Bridge service.
    Regards,
    H

  • Bad reporting performance after compressing InfoCubes

    Hi,
    as I learned, we should compress requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far all the theory...
    After getting complaints about worse reporting performance, we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query onto each cube, and with it I tested the performance (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
    With this selection on each cube, I expected that cube D would be fastest, since we only have one (small) partition with relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some DB parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. The loaded data spans 10 months. The records are nearly equally distributed over these 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years - the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. no. of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year, but roughly 8 months.
    6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that: it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the mentioned times are averages over all runs - and the averages show the same picture as the single runs (cube A is always fastest, cube D always the worst).
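    One way to check whether Oracle prunes partitions at all for these selections (a hedged sketch; the fact table, column and value range are placeholders for whatever cube D generated):
    explain plan for
    select sum(f."AMOUNT")                    -- placeholder key figure column
      from "/BIC/ECUBE_D" f                   -- placeholder E fact table of cube D
     where f."SID_0FISCPER" between 2005001 and 2005012;
    select id, operation, options, object_name, partition_start, partition_stop
      from plan_table
     order by id;
    If PARTITION_START/PARTITION_STOP span the full range instead of a narrow window, the optimizer is not pruning, and the partition criteria (or the way the query restricts the time characteristic) deserve a closer look.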
    Any further ideas?
    Greets,
    Knut

  • Report Completed with error...

    Hi There,
    Account Analysis Report in Oracle EBS R12 completed with error; please find the error message below.
    Warning!!! Due to high volume of data, got out of memory exception...***
    ****Please retry with scalable option or modify the Data template to run in scalable mode...***
    java.util.logging.ErrorManager: 2:
    oracle.core.ojdl.LoggingException: oracle.core.ojdl.LoggingException: Attempt to flush a closed LogWriter
    at oracle.core.ojdl.ExceptionHandler.onException(Unknown Source)
    at oracle.core.ojdl.BufferedLogWriter.handleException(Unknown Source)
    at oracle.core.ojdl.BufferedLogWriter.flush(Unknown Source)
    at oracle.core.ojdl.logging.ODLHandler.flush(Unknown Source)
    at oracle.core.ojdl.logging.ODLHandler.publish(Unknown Source)
    at java.util.logging.Logger.log(Logger.java:472)
    at java.util.logging.Logger.doLog(Logger.java:494)
    at java.util.logging.Logger.log(Logger.java:517)
    at oracle.ias.cache.CacheInternal.logLifecycleEvent(Unknown Source)
    at oracle.ias.cache.CacheInternal.close(Unknown Source)
    at oracle.ias.cache.Cache.close(Unknown Source)
    at oracle.apps.jtf.cache.IASCacheProvider$CacheStopperThread.run(IASCacheProvider.java:1480)
    Caused by: oracle.core.ojdl.LoggingException: Attempt to flush a closed LogWriter
    ... 10 more
    Start of log messages from FND_FILE
    End of log messages from FND_FILE
    Executing request completion options...
    ------------- 1) PUBLISH -------------
    Beginning post-processing of request 1762394 on node ORAPRD at 07-APR-2012 10:24:48.
    Post-processing of request 1762394 failed at 07-APR-2012 10:24:49 with the error message:
    One or more post-processing actions failed. Consult the OPP service log for details.
    Finished executing request completion options.
    Concurrent request completed
    Current system time is 07-APR-2012 10:24:49
    It's a very important report; please help me out in this regard.
    Thanks in advance.
    Regards,
    Mohsin
    Edited by: 920138 on Apr 6, 2012 11:37 PM

    Please post the details of the application release, database version and OS.
    Can you find any errors in the OPP log file?
    Please see these docs.
    XLAAARPT: Account Analysis Report Errors with Warning: Due to High Volume Of Data Got Out Of Memory Exception Please Retry With Scalable Option Or Modify The Data Template To Run In Scalable Mode [ID 1304660.1]
    SLA: How to Troubleshoot XML Performance Issues in the Account Analysis & Journal Entries Report [ID 983063.1]
    How to Configure the Account Analysis Report in Release 12 for Large Reports [ID 737311.1]
    R12: XLAAARPT - Subledger Accounting Account Analysis Report, gets Out of Memory Exception, Retry with Scalable Option [ID 836551.1]
    R12: AP Trial Balance Complete With Warning The Output Post-processor is Running But Has Not Picked up This Request [ID 1224684.1]
    R12 Subledger Period Close Exception Report (XLAPEXRPT) Errors With: "out of memory exception" [ID 952747.1]
    Account Analysis Report Shows Results in XML [ID 1325208.1]
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues [ID 1410160.1]
    Thanks,
    Hussein

  • Item/Drill Report Performance hindrance

    I am having a problem with report performance. I have a report that has to have 5 drop-down menus on top of the report. It seems the more drop-down menus I add, the slower the response time when the report is actually navigated. One of the drop-downs has over 1,000 options, but the other 4 drop-down menus have 4-5 options. Is there a way to improve performance?

    And this is different from yesterday how?
    Please help, Discoverer Performance.
    Russ provided a few possible reasons and asked for a bit of detail. Instead of asking the same question again, respond to Russ and the others, and provide a bit more information on how things are set up.

  • Crystal Report Performance for dbf files.

    We have a report which was designed 5-6 years ago. This report has 4 linked Word docs and a dbf file as data source. This report also has 3 subreports. The field size in the dbf is 80 chars, and a couple of the fields are memo fields. The report performance was excellent before we migrated Crystal Reports to 2008. After CR 2008 the system changed and it is suddenly really slow. We have not changed our reports in any way that should influence performance. When the user presses the preview button on the printing tool window, control is transferred to Crystal. Something has happened inside the black box of Crystal (IMO). The dll we have is crdb_p2bxbse.dll 12.00.0000.0549. The issue seems to be with the xbase driver (it is not possible to use the latest version of crdb_p2bxbse.dll with dBASE files that have memo fields).

    Hi Kamlesh,
    Odd that the Word doc is opened before the RPT; I would think that the RPT would need to be opened first so it sees that the doc also needs to be opened. Once it's been loaded, the connection can be closed; CR embeds the DOC in the RPT, so the original is no longer required.
    Also, you should upgrade to Service Pack 3; it appears you are still using the original release. SP1 is required first, but then you should be able to skip SP2 and install SP3.
    You did not say which earlier version of CR you were using. After CR 8.5 we went to full Unicode support, at which time the report designer was completely rebuilt and the database engines were removed from the EXE and made into separate DLLs. OLE object handling also changed: you can use a formula and a database field to point to linked objects now, so they can be refreshed at any time. Previously they were only refreshed when the report was opened.
    You may want to see if linking them using a database field would speed up the process. Other than that I can't suggest anything else as a workaround.
    Thank you
    Don

  • BEx Report Performance

    Dear Friends,
    I would like to know whether complex authorizations can also affect BEx report performance.
    One of my scenarios is like this: there are two users, A & B.
    A has the relevant authorizations for reporting, drill-down etc. which are required.
    B has SAP_ALL authorization.
    When the same report is executed by both users on the same system,
    the data retrieved by user B (SAP_ALL authorization) comes back quite a bit faster than for user A.
    The difference is about 10 minutes.
    There are some exclude selections in the report.
    So my conclusion is that complex authorizations also hamper query performance.
    Please confirm & share your views.
    Thanks & Best Regards,
    Vivek Tripathi
    +91-9372313000

    Hi Vivek
    Can you help us understand what the exact problem was and how you resolved it / the solution at the extraction / modeling / reporting end?
    I have a quite similar issue with my report: I have a Header + Item report on an InfoSet.
    • The header report takes seconds and the item report takes minutes.
    • The same report executed with the exact same parameters has inconsistent performance results, meaning one time it takes 1 minute, and the next time the same report with the same user and same authorization takes 5 minutes.
    Any help on this would be really great. I suspect it is not an issue with the report at all, as no changes happened between the pre- and post-check.
    Additional information:
    We create secondary bitmap indexes every weekend; I do not see that as one of the root causes.
    Apart from that, we have our regular daily loads that run for master data loads and transaction data loads in series.
    Thanks in advance.
    Much regards,
    Jagadish Thirumalachetty.
    Edited by: Jagadish Thirumalachetty on Jul 14, 2010 1:35 PM
