XMLA reporting performance

Hi,
I'm an SAP BI newbie... do you have experience with XMLA reporting? We have some queries on our BI system that run fast in BEx, but when we call them via XMLA they are slow, and I don't know where the time is spent. In SM21 I can see many messages such as:
HTTP/RFC session has been deleted following timeout
Do you know if there is any documentation on tuning XMLA reporting? Maybe it's a connection/communication problem, but the SAP BI back-end and the front-end (an MSSQL Reporting Services server) are on the same 1 GB switch.
Instance profile:
DIR_ORAHOME                                 C:\oracle\SID\102                           
rdisp/max_arq                               2000                                        
rdisp/wp_auto_restart                       86400                                       
gw/netstat_once                             0                                           
zcsa/second_language                        E                                           
login/fails_to_user_lock                    5                                           
login/password_expiration_time              90                                          
login/min_password_lng                      8                                           
rsdb/esm/buffersize_kb                      40000                                       
enque/table_size                            10000                                       
rdisp/tm_max_no                             2000                                        
dbs/ora/array_buf_size                      1000000                                     
sap/bufdir_entries                          10000                                       
icm/keep_alive_timeout                      3600                                        
gw/max_sys                                  2000                                        
rsdb/obj/max_objects                        20000                                       
rdisp/max_comm_entries                      2000                                        
gw/max_overflow_size                        25000000                                    
gw/max_conn                                 2000                                        
gw/cpic_timeout                             120                                         
rtbb/max_tables                             2000                                        
rsdb/esm/max_objects                        10000                                       
rsdb/obj/buffersize                         40000                                       
zcsa/db_max_buftab                          5000                                        
icm/host_name_full                          host.domain.com                        
SAPSYSTEMNAME                               SID                                        
SAPGLOBALHOST                               host                                      
SAPSYSTEM                                   10                                          
INSTANCE_NAME                               DVEBMGS10                                   
DIR_CT_RUN                                  $(DIR_EXE_ROOT)\$(OS_UNICODE)\NTI386        
DIR_EXECUTABLE                              $(DIR_INSTANCE)\exe                         
jstartup/trimming_properties                off                                         
jstartup/protocol                           on                                          
jstartup/vm/home                            C:\j2sdk1.4.2_13                            
jstartup/max_caches                         500
jstartup/release                            700                                                    
jstartup/instance_properties                $(jstartup/j2ee_properties);$(jstartup/sdm_properties) 
j2ee/dbdriver                               $(DIR_EXECUTABLE)\ojdbc14.jar                          
PHYS_MEMSIZE                                11264                                                  
rdisp/wp_no_dia                             12                                                     
rdisp/wp_no_btc                             8                                                      
rdisp/j2ee_start_control                    1                                                      
rdisp/j2ee_start                            1                                                      
rdisp/j2ee_libpath                          $(DIR_EXECUTABLE)                                      
exe/j2ee                                    $(DIR_EXECUTABLE)\jcontrol$(FT_EXE)                    
rdisp/j2ee_timeout                          1200                                                   
rdisp/frfc_fallback                         on                                                     
icm/HTTP/j2ee_0                             PREFIX=/,HOST=localhost,CONN=0-500,PORT=5$$00          
icm/server_port_0                           PROT=HTTP,PORT=80$$,TIMEOUT=30,PROCTIMEOUT=300         
rdisp/wp_no_vb                              2                                                      
rdisp/wp_no_vb2                             0                                                      
rdisp/wp_no_spo                             1                                                      
DIR_CLIENT_ORAHOME                          $(DIR_EXECUTABLE)                                      
DIR_TRANS                                   \\host\SAPMNT\trans
j2ee/instance_id                            ID1056317                                              
abap/buffersize                             400000                                                 
zcsa/table_buffer_area                      50000896                                               
rtbb/buffer_length                          30000                                                  
rsdb/cua/buffersize                         5000                                                   
zcsa/presentation_buffer_area               10000384                                               
rdisp/wp_no_enq                             2                                                      
rdisp/appc_ca_blk_no                        500                                                    
rdisp/wp_ca_blk_no                          500                                                    
rsdb/ntab/entrycount                        29970                                                  
rsdb/ntab/ftabsize                          30010                                                  
rsdb/ntab/irbdsize                          6002                                                   
rsdb/ntab/sntabsize                         3000                                                   
DIR_ROLL                                    C:\usr\sap\SID\DVEBMGS10\data                          
DIR_PAGING                                  C:\usr\sap\SID\DVEBMGS10\data                          
DIR_DATA                                    C:\usr\sap\SID\DVEBMGS10\data                          
DIR_REORG                                   C:\usr\sap\SID\DVEBMGS10\data  
DIR_TEMP                                    C:\tmp                         
DIR_SORTTMP                                 C:\usr\sap\SID\DVEBMGS10\data  
zcsa/system_language                        E                              
zcsa/installed_languages                    NEFDI                          
install/codepage/appl_server                1100                           
abap/use_paging                             0                              
ztta/roll_first                             1024                           
ztta/roll_area                              2000896                        
rdisp/ROLL_SHM                              7552                           
rdisp/ROLL_MAXFS                            7552                           
rdisp/PG_SHM                                4096                           
rdisp/PG_MAXFS                              32768                          
abap/heap_area_dia                          2000683008                     
abap/heap_area_nondia                       2000683008                     
abap/heap_area_total                        2000683008                     
abap/heaplimit                              40894464                       
abap/swap_reserve                           20971520                       
ztta/roll_extension                         2000683008                     
em/blocksize_KB                             4096                           
em/stat_log_size_MB                         20                             
em/stat_log_timeout                         0                              
rdisp/max_wprun_time                        3600                           
rdisp/plugin_auto_logout                    3600                           
em/initial_size_MB                          8192                           
em/address_space_MB                         512                            
Regards.
Ganimede Dignan.

Hi,
http://img216.imageshack.us/img216/9350/98588629zn7.jpg
http://img216.imageshack.us/img216/5497/91958211bj1.jpg
http://img216.imageshack.us/img216/7618/15615500jk3.jpg
Have you got any advice?
Regards.
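
For reference, a minimal checklist drawn only from the profile above: these are the timeout-related parameters that are usually cross-checked first against the client's own timeout when SM21 reports "HTTP/RFC session has been deleted following timeout" (the values are the ones currently set on this instance, not recommendations):
icm/server_port_0           TIMEOUT=30, PROCTIMEOUT=300   (PROCTIMEOUT = maximum processing time the ICM allows for one HTTP request)
icm/keep_alive_timeout      3600
rdisp/plugin_auto_logout    3600
rdisp/max_wprun_time        3600
If a single XMLA call genuinely needs more than 300 seconds, raising PROCTIMEOUT for that port is one candidate; if not, the timeout message is more likely a symptom of the slow calls than their cause.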

Similar Messages

  • Report Performance - timeout short dump

    Hello Experts,
    I am trying to improve the performance of a report that was developed a long time ago.
    Issues I found:
    1. The report has many SELECT ... ENDSELECT combinations, and SELECTs inside LOOP statements.
    2. Most of the SELECTs use the addition 'INTO CORRESPONDING FIELDS OF' to read only a few fields, without the TABLE addition.
    3. Also, a few SELECTs use 'SELECT * FROM'.
    data: begin of itab occurs 0,
            f1,
            f2,
            f3,
            " ...
            fn,
          end of itab.
    Ex:
    loop at itab.
      select f1 f2 f3 from table1
             into corresponding fields of itab1.
        collect itab1.
      endselect.
      select f4 f5 from table2
             into corresponding fields of itab2.
      endselect.
    endloop.
    All this leads to performance issues.
    I have checked ST05 and I have the trace details.
    My question is: which of the reasons I mentioned above is the major factor in slowing down the report?
    Which of the above should I concentrate on first to bring the long runtime down? My goal is to keep my changes to a minimum while improving performance. Please advise.

    > My question is: which of the reasons I mentioned above is the major factor in slowing down the
    > report?
    Don't ask people for guesses if you can see the facts!
    Run the SQL Trace several times, then go to 'Trace List' -> 'Summarize Trace by SQL Statement'.
    => This shows you the total DB time and the time per statement (all executions); the problems are at the top of the list.
    Check ABAP, detail, and explain!
    Read more here:
    /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
    Siegfried
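    To make this concrete, here is a minimal sketch of the usual rewrite of the pattern from the question: fetch both tables once with FOR ALL ENTRIES before the loop and only read the buffered internal tables inside it. table1/table2, f1...f5, itab1 and itab2 are the placeholder names from the example above; the link field 'keyfield' and the WHERE conditions are assumptions and have to be replaced by the real keys.
    if itab[] is not initial.
      " one array fetch per table instead of one SELECT ... ENDSELECT per loop pass
      select f1 f2 f3 from table1
        into corresponding fields of table itab1
        for all entries in itab
        where keyfield = itab-keyfield.
      select f4 f5 from table2
        into corresponding fields of table itab2
        for all entries in itab
        where keyfield = itab-keyfield.
    endif.
    loop at itab.
      " work only on the buffered data inside the loop
      read table itab1 with key keyfield = itab-keyfield.
      read table itab2 with key keyfield = itab-keyfield.
    endloop.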

  • Report Performance degradation

    hi,
    We are using around 16 entities in CRM On Demand R16, which includes both default and custom entities.
    Since custom entities are not visible in the historical subject area, we decided to stick to real-time reporting.
    Now the issue is that we have a total of 4.5 million (45 lakh) records across these entities. We have reports where we need to retrieve data across all the entities in one report. Initially we tested the reports with fewer records and the performance was not that bad, but it has gradually degraded as we loaded more and more data over time. The reports now take approx. 5-10 minutes and then finally display an error message. In fact, after creating a report structure in Step 1 - Define Criteria and moving to Step 2 - Create Layout, it takes an abnormal amount of time to display. As far as the reports are concerned, we have built them using best practices except for the "Historical Subject Area" issue.
    Ideally, for best performance, how many records should there be in one entity?
    What could be the other reasons for such performance?
    We are working in a multi-tenant environment.
    Edited by: Rita Negi on Dec 13, 2009 5:50 AM

    Rita,
    Any report built over the real-time subject areas will time out after 10 minutes. Real-time subject areas are really not suited for large reports, and you'll find that running them also degrades the application performance.
    Things that will degrade performance are:
    * Joins to other dimensions
    * Custom calculations
    * Number of records
    * Number of fields returned
    There are some things that just can't be done in real time. I would look to remove joins to other dimensions, e.g. Accounts/Contacts/Opportunities all in the same report. Apply more restrictive filters, e.g. current week/month, to reduce the number of records required. Alternatively, have a very simple report, extract it to Excel and modify it from there. Hopefully in R17 this will be added as a feature, but it seems like you're stuck until then.
    Thanks
    Oli @ Innoveer

  • Report performance while creating report on BEx

    Hi all!
    I am creating a report on BOE 4.0 on top of a BEx connection as the source. I have developed reports on top of universes in the past, and I know that if we keep calculations on the reporting end it hampers report performance. Is this the same case with BEx? If we are following best practices, is it OK to say that we should keep all heavy calculations/aggregations in BEx or on the back end for better report performance?
    Can you please share your opinion based on your experience and knowledge? Any feedback will help! Thanks.

    Hi,
    It is definitely best practice to delegate as many CKFs as possible to the cube, to put RKFs in the BEx query, and filters too.
    Also, add default values to your variables (this will speed up generation of the BICS transient universe).
    Also, since Patch 2.10 we are seeing some significant performance improvements, reducing 'document initialization' and 'time to prompts' by up to 50% (steps such as these often took 1.5 minutes, even on properly sized systems).
    Also, make sure you have BW corrections like this one implemented: 1593802 - Performance optimization when loading query views.
    In the BusinessObjects landscape - especially with BI 4.0 - it's all about sizing and tuning. Here is your bible, the 'sizing companion' guide: http://service.sap.com/~form/sapnet?_SHORTKEY=01100035870000738725&_OBJECT=011000358700000307202011E
    Pay particular attention to the BICSChunkSize registry settings.
    Also check the -Xmx JVM heap size for the Adaptive Processing Server that is running the DSL_Bridge service.
    Regards,
    H

  • Report Performance for GL item level report.

    Hi All,
    I have a requirement to report on GL line items, so I have created a data model like 0FI_GL_4 -> DSO -> Cube. Everything tested fine, but when executed in production the report performance is very bad.
    The report contains document number, GL account, company code and posting date objects.
    I have decided to do the following to improve reporting performance:
    ·         Create an aggregate on the Document and GL characteristics
    ·         Compression
    Can I fill the aggregates first and then do the compression?
    Please let me know if I am missing anything.
    Regards,
    Naani.

    Hi Naani,
    First fill the aggregates, then do the compression. Run SAP_INFOCUBE_DESIGNS and check the size of the dimensions; maintain the Line Item / High Cardinality flags for the appropriate dimensions; set the cache mode for the query in RSRT.
    Try to reduce navigational attributes in the report. The document below may help you.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6071ed5f-1057-2e10-deb6-d3426fec0219?QuickLink=index&…
    Regards,
    Jagadeesh

  • Bad reporting performance after compressing infocubes

    Hi,
    as I learned, we should compress requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far the theory...
    After getting complaints about worse reporting performance, we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query to each cube and tested the performance with it (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
    With this selection on each cube, I expected that cube D would be fastest, since we only have one (small) partition with the relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some DB parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. The loaded data comes from 10 months. The records are nearly equally distributed over these 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years - the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. no. of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months.
    6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that, it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the mentioned times are average times over all runs - and the average shows the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut

  • Item/Drill Report Performance hindrance

    I am having a problem with report performance. I have a report that needs 5 drop-down menus at the top of the report. It seems the more drop-down menus I add, the slower the response time when the report is actually navigated. One of the drop-downs has over 1,000 options, but the other 4 drop-down menus have 4-5 options. Is there a way to improve performance?

    And this is different from yesterday how?
    Please help, Discoverer Performance.
    Russ provided a few possible reasons and asked for a bit of detail. Instead of asking the same question again, respond to Russ and others, and provide a bit more information about how things are set up.

  • Crystal Report Performance for dbf files.

    We have a report which was designed 5-6 years ago. This report has 4 linked Word docs and a dbf file as the data source. The report also has 3 subreports. The field size in the dbf is 80 chars, and a couple of fields are memo fields. The report performance was excellent before we migrated the Crystal Reports to 2008. After CR 2008 the system changed and it is suddenly really slow. We have not changed our reports so much that it should have an influence on performance. When the user presses the preview button in the printing tool window, control is transferred to Crystal. Something has happened inside the black box of Crystal (IMO). The DLL we have is crdb_p2bxbse.dll 12.00.0000.0549. The issue seems to be with the xBase driver (it is not possible to use the latest version of crdb_p2bxbse.dll with dBASE files that have memo fields).

    Hi Kamlesh,
    Odd that the Word doc is opened before the RPT; I would think that the RPT would need to be opened first so it sees that the doc also needs to be opened. Once it has been loaded, the connection can be closed; CR embeds the DOC in the RPT, so the original is no longer required.
    Also, you should upgrade to Service Pack 3; it appears you are still using the original release. SP1 is required first, but then you should be able to skip SP2 and install SP3.
    You did not say which earlier version of CR you were using. After CR 8.5 we went to full Unicode support, at which time they completely rebuilt the report designer and removed the database engines from the EXE, making them separate DLLs. OLE object handling also changed: you can now use a formula and a database field to point to linked objects so they can be refreshed at any time. Previously they were only refreshed when the report was opened.
    You may want to see if linking them using a database field would speed up the process. Other than that, I can't suggest anything else as a workaround.
    Thank you
    Don

  • 2004s Web report performance is not good, though that of 3.x Web is OK.

    Hi,
    I feel the 2004s Web report performance is bad, though 3.x Web is no problem (the same query is used). It is also worse than BEx Analyzer.
    This query returns more than 1,000 records, and queries that return many records all show the same bad performance.
    Of course there can be many reasons for this bad performance; please tell me the solutions with which you solved problems like this.
    The SIDs of EP and BI are different here.
    CPU is not consumed when the 2004s Web report is executed.
    And I have disabled the virus scan for this Web report...
    Kind regards,
    Masaaki

    It is bad; I am sure it's down to the new .NET and Java based technology. Aggregates are a way forward, though from what I've heard of the BI Accelerator, that is the real way forward.

  • Bex Report Performance

    Dear Friends,
    I would like to know whether complex authorizations can also affect BEx report performance.
    One of my scenarios is like this: there are two users, A and B.
    A has the relevant authorizations for reporting, drill-down etc. which are required.
    B has SAP_ALL authorization.
    The same report has been executed by both users on the same system.
    The data retrieved by user B (SAP_ALL authorization) comes back quite a bit faster than for user A.
    The difference is about 10 minutes.
    There are some exclude selections in the report.
    So my conclusion is that complex authorizations do also hamper query performance.
    Please confirm & share your views.
    Thanks & Best Regards,
    Vivek Tripathi
    +91-9372313000

    Hi Vivek
         Can you help us understand what the exact problem was and how you resolved it (the solution at the extraction / modeling / reporting end)?
         I have quite a similar issue with my report: I have a Header + Item report on an InfoSet.
    •     The header report takes seconds and the item report takes minutes.
    •     The same report executed with the exact same parameters shows inconsistent performance: one time it takes 1 minute, the next time the same report with the same user and the same authorization takes 5 minutes.
        Any help on this would be really great. I suspect it is not an issue with the report at all, as no changes happened between the pre and post check.
    Additional information:
    We create a secondary bitmap index every weekend; I do not see that as one of the root causes.
    Apart from that, we have our regular daily loads running for master data and transaction data in series.
       Thanks in Advance.
    Much Regards
    Jagadish Thirumalachetty.
    Edited by: Jagadish Thirumalachetty on Jul 14, 2010 1:35 PM

  • BW Report Performance, Accuracy of Data

    Hi,
    Can someone help give explanations to following questions :
    1.) Does a BW report show how current my data is?
    2.) What are the reasons why the performance of my BW report is slow?
    3.) What are the reasons why my BW report has missing data?
    4.) Why does my BW report have incorrect data?
    5.) Why doesn't my BW report data match SAP R/3 data?
    Thanks,
    Milind
    Locked - duplicate post and very generic questions
    Report performance and data quality
    Edited by: Arun Varadarajan on Apr 9, 2010 2:07 AM

    Hi,
    1) Does a BW report show how current my data is?
    Yes, the last refresh of your data is shown in the query properties. Run the report and check the details for the last refresh.
    2.) What are the reasons why the performance of my BW report is slow?
    Reasons could be:
    Poor design
    Business logic (transformations)
    Navigational attributes used in the reports
    Time-dependent master data
    Missing aggregates
    Data volume in the Cubes or DSOs
    http://wiki.sdn.sap.com/wiki/display/BI/SomeusefulT-CodeforBIperformancetuning
    3.) What are the reasons why my BW report has missing data?
      Check the source system data, and check the mapping in the transformation along with all the business logic.
    4.) Why does my BW report have incorrect data?
    It depends on whether you are loading from flat files or from R/3, and whether you are cleansing the data once it enters BW.
    5.) Why doesn't my BW report data match SAP R/3 data?
    Check the source system data in RSA3, pick one document and check the same document in BI.
    Thanks!
    @AK

  • FRS report performance issue

    Hello,
    We have a report developed in FRS in the style shown below.
    http://postimg.org/image/bn9dt630h/b9c2053d/
    Basically, all the dimensions are asked for in the POV. In the rows of the report, we have two sparse dimensions that are drilled down to level 0, as shown in the report above. The report works fine when run in local currency (local currency is a stored member). When the report runs in a different currency (a dynamic member) it keeps running for ages. We waited for 45 minutes and then had to cancel the report; when the same report was run in local currency, it gave us our results in 30 seconds.
    My thinking is that there should be a better way of showing level 0 members than using "Descendants of Current Point of View for Total_Entity AND System-defined member list Lev0,Entity", as I presume what it does is get the descendants as well as the level 0 members and then compare them. I have alternate hierarchies, hence I am using this. Isn't there a simple way of saying: just give me the level 0 members of the member selected in the POV?
    I have used the parameters below:
    Connection - Essbase
    Suppress rows on Database connection server
    Regards,

    Hello,
    >> The report works fine when run in local currency (local currency is a stored member). When the report runs in a different currency (a dynamic member) it keeps running for ages.
    You are focusing on the report. The most likely reason is the performance of the database. Of course, you can reduce the query size and get your report performing again, but the root cause is likely the database design.
    I do not know of a function to drill down to the level 0 members of the selected POV member.
    If this is something different per user, then you might think about meta-read filters. They would remove everything that is not granted.
    Regards,
    Philip Hulsebosch

  • About report performance

    Hi Friends,
    I created a report with 45 ref cursors.
    All ref cursors are in a package,
    and the package is on the database side.
    The report is on the report server.
    If I start to run the report through the application,
    the report takes around 50% of the CPU for about 40 seconds.
    Is this a report performance problem?
    If I have more ref cursors in the report,
    is there any problem with report performance?
    Can somebody help me?

    One performance consideration I'd look at is trying to avoid multiple similar queries, or even repeats of the same query.
    Is this
    from invoice
    where trunc(invoice_date) between :date1 and :date2
    and currency_code = '$' -- sometimes 'euro' and so on
    and ISSUE_PLACE = 'xx'
    and investor_code = :investor_code;
    return(v_comm*5.5137);
    in the main query? Can those formulas be included in or replaced by the main query? Are appropriate indexes created for the joins?
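    To illustrate that suggestion, here is a minimal sketch of what folding such a formula lookup into the main query could look like, as a scalar subquery evaluated per row instead of a separate formula query per record. The formula's own select list is not shown above, so comm_amount and main_table are hypothetical names; the bind variables and filter columns are taken from the fragment.
    select m.investor_code,
           m.currency_code,
           (select nvl(sum(i.comm_amount), 0) * 5.5137   -- comm_amount is a hypothetical column
              from invoice i
             where trunc(i.invoice_date) between :date1 and :date2
               and i.currency_code = m.currency_code
               and i.issue_place   = 'xx'
               and i.investor_code = m.investor_code) as commission
      from main_table m   -- hypothetical driving table of the report
    Depending on the data volumes, the same thing can also be written as an outer join to an aggregated inline view; either way the lookup runs inside the one main statement instead of once per fetched row.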

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding the thread "Interactive report performance problem over database link" that was posted by Samo.
    The issue I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
    The query is like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-Oracle) which are connected using Oracle Gateway.
    Now, if I run the above query without the apex_item.checkbox function, the response is less than a second, but if I include apex_item.checkbox then the query runs for more than 30 seconds. I have resolved the issue by creating a collection, but that is not good practice.
    I would like to get ideas from people on how to resolve this or speed up the query.
    Any idea how to use sub-factoring for the above scenario? Or other methods (creating a view or a materialized view is not an option)?
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This has to do with using the WITH blah AS (SELECT... syntax. In most circumstances this 'materialises' the results of the inner select statement, which means that we 'get' the results and then do something with them afterwards. It's a handy trick when dealing with remote sites, as sometimes you want the remote database to do the work. The reason I ask you to use the MATERIALIZE hint for testing is just to force this; in 99.99% of cases it can be removed later. The WITH statement is also handled differently from an inline view like SELECT * FROM (SELECT..., but the same result can be mimicked with a NO_MERGE hint.
    Looking at your case, I would be interested to see what the explain plans and results would be for something like the following two statements (sorry - you're going to have to check them, it's late!):
    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two),
    sourceqry AS
      (SELECT b.col3 x
             , a.col1 y
             , a.col2 z
       FROM a            -- join the materialised results, not the remote tables again
          , b
       WHERE a.col3 = 12345
       AND   a.col4 = 100
       AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1, x), y, z
    FROM sourceqry

    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two)
    SELECT apex_item.checkbox(1, b.col3), a.col1, a.col2
    FROM a
       , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5

    If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results, but different to the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This normally hinders tuning, but I don't think it is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus etc.?
    Sorry for all the questions, but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • BI 7 Report performance slow

    Hi All.
    In BI 7, the sales delivery report performance is pretty slow. Previously it took 5 minutes to execute the report, but nowadays it takes more than 30 minutes. The problem is the OLAP time.
    Vasu

    Hi,
    Please run the query via RSRT > Execute and Debug > Display Statistics Data,
    and also search the forum; there is a lot of useful material available on query performance.
    Check these threads:
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/70f4bb1ffb591ae10000000a1553f7/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0e6a6e3-0601-0010-e6bd-ede96db89ec7
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Thanks-RK
