Poor performance with WebI and BW hierarchy drill-down...

Hi
We are currently implementing a large HR solution with BW as backend
and WebI and Xcelsius as frontend. As part of this we are experiencing
very poor performance when doing drill-down in WebI on a BW hierarchy.
In general we see acceptable performance during selection of data
and traditional WebI filtering - however, when using the BW hierarchy
for navigation within WebI, response times increase significantly.
The general solution setup is as follows:
1) Business Content version of the personnel administration
InfoProvider - 0PA_C01. The InfoProvider contains 30,000 records.
2) MultiProvider to act as semantic Data Mart layer in BW.
3) BEx query to act as Data Mart query and metadata exchange for BOE.
All key figure restrictions and calculations are done in this Data Mart
query.
4) Traditional BO OLAP universe mapped 1:1 to the BEx Data Mart query. No
calculations etc. are done in the universe.
5) WebI report with a limited set of objects included in the WebI query.
As we are aware that performance is a very subjective issue, we have
created several test scenarios with different dataset sizes, various
filter criteria and modeling techniques in BW.
Furthermore we have tried to apply various traditional BW performance
tuning techniques including aggregates, physical partitioning and pre-
calculation - all without any luck (pre-calculation doesn't seem to
work at all, as WebI apparently isn't using the BW OLAP cache).
The best result we can get is with a completely stripped WebI report without any variables etc.
and a total dataset of 1,000 records transferred to WebI. Even in this scenario we can't get
each navigational step (when using drill-down on the Organizational Unit
hierarchy - 0ORGUNIT) to perform faster than 15-20 seconds per
navigational step. In other words, each navigational step takes 15-20 seconds
with only 1,000 records in the WebI cache when drilling down on the org.
unit hierarchy.
Running the same BEx query from BEx Analyzer with a full dataset of
30,000 records at the lowest level of detail takes only 1-2
seconds per navigational step, which rules out a BW
modeling issue.
As our productive scenario obviously involves a far larger dataset as
well as separate data from CATS and PT InfoProviders, we are very
worried whether we will ever be able to use hierarchy drill-down from
WebI.
The question is therefore whether there are any known performance issues
related to the use of BW hierarchy drill-down from WebI, and if so,
whether there are any ways to get around them.
As an alternative we are currently considering changing our reporting
strategy by creating several more highly aggregated reports to avoid
hierarchy navigation altogether. However, we still need to support specific
divisions and their need to navigate the WebI dataset without
limitations, which makes this issue critical.
Hope that you are able to help.
Thanks in advance
/Frank

Hi Henry, thank you for your suggestions, although I don't agree with you that 20 seconds is pretty good for a navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
Actions
Suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
Tick "use structure elements" in RSRT: Done.
Enable query stripping in WebI: Done.
Upgrade your BW to SP09: Does SP09 include improvements in relation to this point?
Use more runtime query filters: Not possible. Very simple query.
Others:
RSRT combination H-1-3-3-1 (Expand nodes / Permanent Cache BLOB)
Uncheck preliminary hierarchy presentation in the query. Only selected.
Check "Use query drill" in the WebI properties.
Sorry for this mixed message, but while I was answering I tried what you suggested regarding suppressing unassigned nodes and it works perfectly. This is what was causing the bottleneck!! Incredible...
Thanks a lot
J.Casas

Similar Messages

  • Poor Performance with Webi on top of BW - Large Navigational Attributes

    We have recently developed a reporting model based on a BW cube of approx 20 million records. This has 2 very large line-item dimensions (one with 14 million records, one with 6 million) that both have navigational attributes stored against them.
    We need the design to be like this because the attributes change monthly and we do not wish to do a complete drop and reload of cube data each month (this takes 10-12 hours).
    When we build a universe on top of the cube and try WebI reporting, the performance is dreadful - many reports time out completely. This happens whether or not the large line-item dimensions are selected in the query. The BW query is very simple: just 2 key figures and approx 30 navigational attributes as default values. No filters are applied.
    For example, a query that just contains Calendar Year and one key figure either times out after 10 minutes or fails.
    Our source of the data is Oracle tables, where the same query runs in 2 1/2 minutes, and the query runs in RSRT in approx 5 minutes with no problems, so it is definitely a problem in the BOBJ-to-MDX-to-BW layer.
    We are on BW 7.01 SP5 and XI 3.1 SP2. SAP has recommended going to SP3 and using query stripping to help with this, but I doubt this will work.
    We have tried building aggregates, splitting into yearly cubes etc but nothing seems to help.
    My question is do the large navigational attributes mean Webi simply can't cope with this?
    Thanks

    Thanks for the suggestions.
    Although this did not directly help I've now found a solution.
    We had a calculated key figure in the query that was doing a count of records on a huge navigational attribute. This was causing all queries to run very slowly (even if the key figure wasn't selected in WebI).
    Removing it solved the problem. We will have to find another way to do the count key figure.
    Thanks

  • Poor performance with Yosemite and early 2009 Mac Pro

    I have an early 2009 Mac Pro with the following specs:
    - 2.66 GHz Quad Core Intel Xeon
    - 10 GB of 1066 MHz RAM
    - NVidia GeForce GT 120 512 MB
    - 256 GB solid state drive for my system partition
    - Two monitors connected, each at 1680x1050 resolution
    Back when I was running OS X 10.7 or 10.8 I found that for everyday tasks the performance of my computer was adequate. However, starting around 10.9, and even worse since upgrading to 10.10, things have gotten painfully slow. To give an example, activating Mission Control can take upwards of four seconds, with the animation being very choppy. Changing tabs on a Finder window can take two seconds for the switch to happen. Just switching between different windows, it can take several seconds for a window to activate. It's gotten to the point where I'm having difficulty working. So I'm thinking of upgrading some of my hardware.
    Given my specs the weakest link seems to be my graphics card, and all of these issues do seem to be related to graphics. So my questions are:
    - Do you think upgrading my graphics card will substantially improve things, and is there anything else I should upgrade?
    - Is this slowness just the result of the computer being nearly six years old, and no upgrades will really improve things that much?
    Thanks in advance!

    Between Setup Assistant and your existing system "untouched" (or use CCC if, say, you want to use an existing SSD for the system), there is no reason it should be a lot of work to set up. Have you ever used Migration or Setup Assistant? They have also gotten better.
    Also, having 10.9.5 on another drive and running Disk Utility - and TRIM - now would be helpful.
    Looking at just what gremlins you have running around inside your current system is not bad, but... sometimes the "long road" turns a shortcut into a dead end, and taking what seems like the longest and hardest road gets you where you want to go: a solid, stable system.
    Less is more. Most systems have more installed than needed, and those extras get in the way and can cause trouble - even handy "widgets" and the things that monitor system functions or disk status. Which is why I like seeing a separate small system-maintenance volume just for the weekly checkup. 30 GB is more than enough, so just slice out a partition somewhere on another drive/device.
    Those things will help more, and cost a lot less, than a new GPU. If your SSD is two years old, the 840 EVO from Samsung is down under $120 for 250 GB, or use one for Lightroom / Aperture / iPhoto or scratch.
    One person was complaining about a sluggish-window issue and thought it was the driver. It turned out it happened in ONE APP, not everywhere - very telling - and the app in question needs an update. Adobe updated CC (for Windows) last month to finally support dual Dxx and some of the newer AMD GPUs - can the Mac be far behind?
    10 GB of RAM? That would not be 3 x 4 GB or any combination using triple-channel memory.

  • Poor Performance with Fairpoint DSL

    I started using Verizon DSL for my internet connection and had no problems. When Fairpoint Communications purchased Verizon (this is in Vermont), they took over the DSL (about May 2009). Since then, I have had very poor performance with all applications as soon as I start a browser. The performance problems occur regardless of the browser - I've tried Firefox (3.5.4), Safari (4.0.3) and Opera (10.0). I've been around and around with Fairpoint for 6 months with no resolution. I have not changed any software or hardware on my Mac during that time, except for updating the browsers and Apple updates to the OS, iTunes, etc. The performance problems continued right through these updates. I've run tests to check my internet speed and get 2.76 Mbps (download) and 0.58 Mbps (upload), which are within the specified limits for the DSL service. My Mac is a 2 GHz PowerPC G5 running OS X 10.4.11. It has 512 MB DDR SDRAM. I use a Westell Model 6100 modem for the DSL, provided by Verizon.
    Some of the specific problems I see are:
    1. very long waits of more than a minute after a click on an item in the menu bar
    2. very long waits of more than two minutes after a click on an item on a browser page
    3. frequent pinwheels in response to a click on a menu item/browser page item
    4. frequent pinwheels if I just move the mouse without a click
    5. frequent messages for stopped/unresponsive scripts
    6. videos (like YouTube) stop frequently for no reason; after several minutes, I'll get a little audio but no new video; eventually after several more minutes it will get going again (both video and audio)
    7. response in non-browser applications is also very slow
    8. sometimes will get no response at all to a mouse click
    9. trying to run more than one browser at a time will bring the Mac to its knees
    10. browser pages frequently take several minutes to load
    These are just some of the problems I have.
    These problems all go away and everything runs fine as soon as I stop the browser. If I start the browser, they immediately surface again. I've tried clearing the cache, etc. with no improvement.
    What I would like to do is find a way to determine if the problem is in my Mac or with the Fairpoint service. Since I had no problems with Verizon and have made no changes to my Mac, I really suspect the problem lies with Fairpoint. Can anyone help me out? Thanks.

    1) Another thing that you could try is deleting the preference files for networking. Mac OS will regenerate these files. You would then need to reconfigure your network settings.
    The list of files comes from Mac OS X 10.4.
    http://discussions.apple.com/message.jspa?messageID=8185915#8185915
    http://discussions.apple.com/message.jspa?messageID=10718694#10718694
    2) I think it is time to do a clean install of your system.
    3) It's either the software or an intermittent hardware problem.
    If money isn't an issue, I suggest an external hard drive for re-installing Mac OS.
    You need an external FireWire drive to boot a PowerPC Mac computer.
    I recommend you do a Google search on any external hard drive you are looking at.
    I bought a low-cost external drive enclosure. When I started having trouble with it, I did a Google search and found a lot of complaints about that enclosure. I ended up buying a new one. On my second go-around, I decided to buy a drive enclosure with a good history of working with Macs. The chipset seems to be the key ingredient. The Oxford line of chips seems to be good. I got the Oxford 911.
    The latest hard drive enclosures support the newer Serial ATA drives. The drive enclosure that I list supports only the older Parallel ATA.
    "Has everything" interface:
    FireWire 800/400 + USB 2.0 + eSATA 'Quad Interface'
    "Save a little money" interface:
    FireWire 400 + USB 2.0
    This web page lists both external harddrive types. You may need to scroll to the right to see both.
    http://eshop.macsales.com/shop/firewire/1394/USB/EliteAL/eSATAFW800_FW400USB
    Here is an external hd enclosure.
    http://eshop.macsales.com/item/Other%20World%20Computing/MEFW91UAL1K/
    Here is what one contributor recommended:
    http://discussions.apple.com/message.jspa?messageID=10452917#10452917
    Folks in these Mac forums recommend LaCie, OWC or G-Tech.
    Here is a list of recommended drives:
    http://discussions.apple.com/thread.jspa?messageID=5564509#5564509
    FireWire compared to USB: you will find that FireWire 400 is faster than USB 2.0 when used for an external hard drive connection.
    http://en.wikipedia.org/wiki/Universal_Serial_Bus#USB_compared_to_FireWire
    http://www23.tomshardware.com/storageexternal.html

  • Non jdriver poor performance with oracle cluster

    Hi,
    we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
    Our system is a WebLogic 6.1 cluster and an Oracle 8.1.7 cluster.
    Problem is.. with the new Oracle drivers our actions on the webapp take twice as long as with the Jdriver. We also tried OCI.. same problem. We switched to a single Oracle 8.1.7 database.. and it worked again with all thick or thin drivers.
    So.. new Oracle drivers with an Oracle cluster result in bad performance, but with the Jdriver it works perfectly. Does somebody see a connection?
    I mean.. it works with the Jdriver.. so it can't be the database, huh? But we really tried every JDBC possibility! In fact.. we need batch input. Advice is very appreciated =].
    Thanx for help!!

    Thanks for the quick replies. I forgot to mention.. we also tried 10g v10.1.0.3 from Instant Client yesterday.
    I have to agree with Joe. It was really fast on the single-machine database.. but we had the same poor performance with the cluster DB. It is frustrating, especially if you consider that the Jdriver (which works perfectly in every combination) is 4 years old!
    OK.. we got this scenario with our app page CustomerOverview (intensive DB loading) (sorry.. no real profiling, times are taken with a watch) (Oracle is 8.1.7 OPS patch level 1)...
    WL6.1_Cluster + Jdriver6.1 + DB_cluster => 4sec
    WL6.1_Cluster + Jdriver6.1 + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_cluster => 8-10sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_single => 4sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_cluster => 8sec
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_single => 2-4sec (awesome fast!!)
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_cluster => 6-8sec
    Customers are roughing us up because they cannot place mass orders via batch input. Any suggestion on how to solve this issue is very much appreciated.
    TIA
    Markus Schaeffer wrote:
    > Hi,
    > we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
    > Our system is a Weblogic 6.1 cluster and an Oracle 8.1.7 cluster.
    > Problem is.. with the new Oracle drivers our actions on the webapp take twice as long
    > as with the Jdriver. We also tried OCI.. same problem. We switched to a single Oracle 8.1.7
    > database.. and it worked again with all thick or thin drivers.
    > So.. new Oracle drivers with an Oracle cluster result in bad performance, but with
    > the Jdriver it works perfectly. Does somebody see a connection?
    Odd. The jDriver is OCI-based, so it's something else. I would try the latest
    10g driver if it will work with your DBMS version. It's much faster than any 9.X
    thin driver.
    Joe
    > I mean.. it works with the Jdriver.. so it can't be the database, huh? But we really
    > tried every JDBC possibility!
    > Thanx for help!!
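    For readers landing here later: "batch input" in this thread means JDBC statement batching. Below is a minimal sketch of the pattern against the Oracle thin driver; the table, columns and connection details are made-up placeholders rather than the poster's actual code, and in the WebLogic setup the Connection would come from a pooled DataSource instead of DriverManager.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsertSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - substitute your own host/SID/credentials.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
            con.setAutoCommit(false);              // one commit per batch, not per row
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO orders (order_id, qty) VALUES (?, ?)");
            for (int i = 0; i < 1000; i++) {
                ps.setInt(1, i);
                ps.setInt(2, i * 2);
                ps.addBatch();                     // queue the row on the client side
                if (i % 100 == 99) {
                    ps.executeBatch();             // send 100 queued rows in one round trip
                }
            }
            ps.executeBatch();                     // flush any remainder
            con.commit();
            ps.close();
            con.close();
        }
    }
    The point of batching is to replace one network round trip per row with one per batch, which is exactly where a slower path to a clustered database would hurt the most.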

  • Poor performance with Oracle Spatial when spatial query invoked remotely

    Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when it was invoked remotely from another computer (using SQLplus) vs. invoking the very same query from the database server itself (also using SQLplus)?
    Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between the server on which I see this poor performance and another server where the performance is fine.
    Thank you in advance for any thoughts you might share.

    OK, that's clearer.
    Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and running it in SQL*Plus with:
    set autotrace on
    set timing on
    SELECT ...
    If the plans and performance are the same, then it may be something inside the procedure itself.
    Have you profiled the procedure? Here is an example of how to do it:
    Prompt Firstly, create PL/SQL profiler table
    @$ORACLE_HOME/rdbms/admin/proftab.sql
    Prompt Secondly, use the profiler to gather stats on execution characteristics
    DECLARE
      l_run_num PLS_INTEGER := 1;
      l_max_num PLS_INTEGER := 1;
      v_geom    mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
    BEGIN
      dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE'));  -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
      v_geom := Parallel(v_geom,10,0.05,1);  -- Put your procedure call here
      dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
    END;
    SHOW ERRORS
    Prompt Finally, report activity
    COLUMN runid FORMAT 99999
    COLUMN run_comment FORMAT A40
    SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
      FROM plsql_profiler_runs
      ORDER BY runid;
    COLUMN runid       FORMAT 99999
    COLUMN unit_number FORMAT 99999
    COLUMN unit_type   FORMAT A20
    COLUMN unit_owner  FORMAT A20
    COLUMN text        FORMAT A100
    compute sum label 'Total_Time' of total_time on runid
    break on runid skip 1
    set linesize 200
    SELECT u.runid || ',' ||
           u.unit_name,
           d.line#,
           d.total_occur,
           d.total_time,
           text
    FROM   plsql_profiler_units u
           JOIN plsql_profiler_data d ON u.runid = d.runid
                                         AND
                                         u.unit_number = d.unit_number
           JOIN all_source als ON ( als.owner = 'CODESYS'
                                   AND als.type = u.unit_type
                                   AND als.name = u.unit_name
                                AND als.line = d.line# )
    WHERE  u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
    ORDER BY d.total_time desc;
    Run the profiler in both environments and see if you can see where the slowdown exists.
    regards
    Simon

  • Dblink poor performance with varchar(4000) after upgrade to 11g

    Hi
    For a long time we have connected from a 10g database via dblink to another 10g database to copy a simple table with four columns. Table size is about 500 MB:
    column1 (varchar(100))
    column2/3/4 (varchar(4000)).
    After the upgrade of the source database to 11g, the dblink performance is poor with the big varchar columns. If I copy only column1 (select column1 from ...), I get the data within minutes. If I want to copy the whole table (select column1, column2, column3, column4 from ...), the performance is poor - it didn't finish within days.
    Does anyone know about dblink issues with 11g and big varchar columns?
    Thank you very much

    Use DBlink to pull data from table(s) using IMPDP:
    #1 Create DBlink in the Target database
    create database link sourceDB1_DBLINK connect to system identified by password10 using 'SourceDB1';
    [update tnsnames.ora appropriately]
    #2
    select DIRECTORY_PATH from dba_directories where DIRECTORY_NAME='DATA_PUMP_DIR';
    (create the DATA_PUMP_DIR directory if it does not exist)
    #3
    impdp system/pwdxxx SCHEMAS=SCOTT NETWORK_LINK=ORCL10R2 JOB_NAME=scott_import LOGFILE=data_pump_dir:network_imp_scott.log
    That's all

  • Adobe Flex performance on the web

    Dear Flex Experts.
    Good morning! How are you? I am an SAP consultant. I have a pretty big application to develop (not on the SAP platform, but on the PHP Zend platform) with table UIs, Google Maps, and other heavy visualisation UIs. I love working with Flex. Most of my work is on the SAP side, so I don't know about possible performance issues with PHP and Zend. My friends are telling me not to use Flex as it will create performance issues in the web page. I don't want to go with any other technology like jQuery or something. Are the performance problems really that bad? If anyone can help me understand how I can improve the performance of a Flex RIA, it would be a great help. I will appreciate any answer.
    Thanking you
    Regards
    Naeem

    How could such high-profile Flex applications as Morgan Stanley Matrix have been built if Flex were such a slouch?
    http://www.morganstanley.com/matrixinfo/
    The answer is that a lot of people's talk is due to ignorance:
    - To create a Flex application that performs well, you need to know the innards of the Flex framework and avoid working against the underlying Flash Player. A lot of people try to rewrite things that already exist in the Flex framework, or they work against the Flash Player.
    - A lot of people will talk about Silverlight/.NET because it has multithreading and Flex does not. But multithreading used unwisely can lead to performance problems. A lot of operations in Flex are asynchronous and use a callback mechanism.
    - You can use Time Slicing for data heavy processing but best practice is to reduce the amount of data loaded and processed at any one time (e.g using paging): http://cookbooks.adobe.com/post_Time_slicing-18691.html
    - Flash Player 11 will bring Stage 3D for GPU acceleration as well as possibly multithreading

  • Webi and BW Hierarchy Prompts Error WIS 10901

    Hi together,
    I'm using WebI in combination with an SAP BW query. BO is 3.1 with FP 2.9; on the SAP side it's SP07.
    I've got an infoobject with a hierarchy. In the Query I've got a hierarchy-variable on this hierarchy.
    The variable is optional.
    Using Bex-Analyzer I can use the variable to select nodes from the hierarchy and it works fine.
    I've got a universe on the Query and want to create a Webi on this universe.
    When the variable prompt comes up in WebI and I try to refresh the LOV, I get error WIS 10901.
    Has anyone an idea how to solve this problem?
    Kind regards!
    Lars

    Hi,
    yes, there's a key figure in this report.
    The complete message is: A database error occured. The database error text is: .(WIS 10901).
    Kind regards
    Lars

  • Sun Java Application Server Performance with Web Services

    We are running a web service on SJAS (Standard Edition). Load testing with 1 or 2 users works all right, but with 5 users making concurrent web service calls it immediately crashes the domain. Has anyone else run into performance issues when using SJAS with web services? Are there any configuration parameters we need to set up to handle more than one connection?
    I'm sure that the application server can handle many concurrent connections; I'm just not sure how to configure it and where to look for more information. If anyone is aware of actual numbers for load testing, that would be great.
    Thanks,
    Dawson

    Hi Dawson,
    SJSAS can definitely handle more users. Can you please tell us what version of SJSAS and which web services implementation (JAX-RPC 1.1 or JAX-WS 2.0?) you are using? When you say it crashes the domain, do you see any error messages in the server.log?
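    For what it's worth, here is a minimal sketch of reproducing that concurrency level from plain Java outside the load-testing tool; the endpoint URL is a made-up placeholder and the sketch issues plain HTTP requests rather than calls through a generated JAX-RPC/JAX-WS stub.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ConcurrentCallSketch {
        public static void main(String[] args) throws Exception {
            final URL endpoint = new URL("http://localhost:8080/myservice/MyPort"); // placeholder
            ExecutorService pool = Executors.newFixedThreadPool(5); // five concurrent callers
            for (int i = 0; i < 5; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            HttpURLConnection con = (HttpURLConnection) endpoint.openConnection();
                            con.setRequestMethod("GET"); // a real test would POST a SOAP envelope
                            System.out.println(Thread.currentThread().getName()
                                    + " -> HTTP " + con.getResponseCode());
                            con.disconnect();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
            pool.shutdown();
        }
    }
    If five such callers reliably bring the domain down, the server.log output asked about above should show whether the failure is in the HTTP listener, the web service runtime, or the application itself.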

  • Web Intelligence Summary Report drill-down and drill-up problem

    I use summary tables and the @Aggregate_Aware function to design a universe. I want the WebI report to show the highest-level summary table data first, and then drill down to the next summary level. But when I drill back up, it still shows the lower-level summary table data. I don't use the report to do any calculations; I just want it to show data at different levels.
    Detail:
    I have three tables:
    1.  Month_Data table:  - Base fact data table.
    Year
    Quarter
    Month
    Department ID
    Supplier ID
    Score          -- Measure type
    2.  Quarter_Data table: - High level summary table.
    Year
    Quarter
    Department ID
    Supplier ID
    Score          -- Measure type
    3.  Year_Data table:  - Highest level summary table.
    Year
    Department ID
    Supplier ID
    Score          -- Measure type
    I would like to create a Web Intelligence report that shows data from the year_data table:
    Year          Score
    2008          105
    2007          99
    2006          90
    If I drill one level down, it will show data from the quarter_data table:
    Year (example 2007)
    Quarter1     Score
    Quarter2     Score
    Quarter3     Score
    Quarter4     Score
    If I drill down two levels, it will show data from the month_data table:
    Quarter1
         Month1          Score
         Month2          Score
         ...
         Month12          Score
    No calculation needed on the report.
    I will use the following steps to create the universe:
    1.  The three tables will be three classes.  The class order is:
         Year_Data class;  Quarter_Data class; Month_Data class.
    2.  When I create the universe, I join:
    Year_Data  class to   Quarter_data class:
    Year to year                 1 to n               
    Department ID to Department ID          1 to n
    Supplier ID to Supplier ID          1 to n
    Quarter_Data class to Month_Data class:
    Year to Year               1 to n
    Quarter to quarter               1 to n
    Department ID to Department ID          1 to n
    Supplier ID to Supplier ID          1 to n
    3.  Create a filter class with object Department ID and supplier ID.
    4.  I create two aggregate aware classes:
    Agg_dimension class:
         Year:    aggregate_aware(year_data.year, quarter_data.year, month_data.year)
         quarter: aggregate_aware(quarter_data.quarter, month_data.quarter)
         Month:     aggregate_aware(month_data.month); 
    Agg_Measure class:
         Score: aggregate_aware(year_data.score, quarter_data.score, month_data.score)
    5.  I may re-define year, quarter, month and score with the aggregate_aware function for the objects in all classes:
         Year:     aggregate_aware(year_data.year, quarter_data.year, month_data.year)
         quarter: aggregate_aware(quarter_data.quarter, month_data.quarter)
         Month:     aggregate_aware(month_data.month); 
         Score:   aggregate_aware(year_data.score, quarter_data.score, month_data.score)
    6.  create user hierarchies: 
    agg_dimension.Year
    agg_dimension.quarter
    agg_dimension.month 
    7.  Figure out Aggregate Navigation  -- it is easy.
    Then I create a report.  I put the following objects on the report:
    Agg_dimension.year          agg_measure.score
    When I run the report, it shows the right data:
    Year_data.year          Year_data.score
    On the report, I drill down on year and it shows the right data:
    Quarter_data.Year:
    Quarter_data.quarter1                quarter_data.score
    Quarter_data.quarter2                quarter_data.score
    Quarter_data.quarter3                quarter_data.score
    Quarter_data.quarter4                quarter_data.score
    But when I drill back up on quarter, it shows:
    Quarter_data.year     quarter_data.score1
    Quarter_data.year     quarter_data.score2
    Quarter_data.year     quarter_data.score3
    Quarter_data.year     quarter_data.score4
    Not
    Agg_data.year          agg_data.score.
    So the report has a drill-up problem.
    Please help.
    Thanks
    Frank Han

    ...and key figures are coming from 2LIS_03_BF. But when I drill down on purchase organization, purchase group, vendor account group and planning group, then # is coming instead of values... I don't know the reason for this.
    Did you check whether vendor, purchase org and all of this are coming from 2LIS_03_BF?
    Can you check in the LISTCUBE transaction what the output is when you apply the same restrictions as in the query?

  • Poor performance of web dynpro application

    Hi,
    I have developed a Web Dynpro application which fetches data from an R3 system using a JCo connection. A large amount of data is transferred between R3 and the application, because of which it takes too long to display the result.
    After logging timestamps before and after the RFC execution code, I found that the RFC execution alone takes approx. 5 min, resulting in poor performance. The time taken for the rest of the processing is negligible. Is there any way I can reduce the time for the RFC execution or the data transfer?
    Thanks in advance,
    Apurva

    Hi Apurva,
    I think you are displaying all of the data at once in the front end, so it will take some time for rendering. Try to reduce the display elements (for example, for tables, display only 10 rows at a time).
    regards
    Fahad Hamsa
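    A minimal JCo 3 sketch of that paging idea follows; the destination name, function module, parameter and field names are hypothetical placeholders (not the actual RFC used here), and the point is simply to let the ABAP side restrict and page the result so each round trip stays small.
    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;
    import com.sap.conn.jco.JCoTable;

    public class PagedRfcSketch {
        public static void main(String[] args) throws JCoException {
            // "R3_DEST" is a placeholder for a destination configured in the JCo environment.
            JCoDestination dest = JCoDestinationManager.getDestination("R3_DEST");
            // Z_GET_ORDERS, IV_SKIP_ROWS, IV_MAX_ROWS and ET_ORDERS are made-up names.
            JCoFunction fn = dest.getRepository().getFunction("Z_GET_ORDERS");
            fn.getImportParameterList().setValue("IV_SKIP_ROWS", 0);
            fn.getImportParameterList().setValue("IV_MAX_ROWS", 100); // one screenful per call
            fn.execute(dest);                                         // single, small round trip
            JCoTable rows = fn.getTableParameterList().getTable("ET_ORDERS");
            for (int i = 0; i < rows.getNumRows(); i++) {
                rows.setRow(i);
                System.out.println(rows.getString("ORDER_ID"));
            }
        }
    }
    Whether the paging happens in the function module or by trimming the field list, the goal is the same as the advice above: transfer less data per RFC call and render fewer elements at once.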

  • Poor Performance with Converged Fabrics

    Hi Guys,
    I'm having some serious performance issues with converged fabrics in my Windows Server 2012 R2 lab. I'm planning on creating a Hyper-V cluster with 3 nodes. I've built the first node; building, installing and configuring the OS and Hyper-V was pretty straightforward.
    My issue is with the converged fabric: I'm getting very slow performance just managing the OS, Remote Desktop connections take very long and eventually time out, and the server is unable to find a writable domain controller because of the slow performance.
    If I remove the converged fabric everything is awesome and works as expected. Please note that the cluster hasn't even been built yet and I'm already experiencing this poor performance.
    Here is my server configuration:
    OS: Windows Server 2012 R2
    RAM: 64GB
    Processor: Intel I7 Gen 3
    NICS: 2 X Intel I350-T2 Adapters, supporting SRIOV/VMQ
    Updates: All the latest updates applied
    Storage:
    Windows Server 2012 R2 Storage Spaces
    Synology DS1813+
    Updates: All the latest updates applied
    Below is the script I've written to automate the entire process.
    # Script: Configure Hyper-V
    # Version: 1.0.2
    # Description: Configures the Hyper-V Virtual Switch and
    #              Creates a Converged Fabric
    # Version 1.0.0: Initial Script
    # Version 1.0.1: Added the creation of SrIOV based VM Switches
    # Version 1.0.2: Added parameters to give the NLB a name, as well as the Hyper-V Switch
    param
    (
        [Parameter(Mandatory=$true)]
        [string]$TeamAdapterName="",
        [Parameter(Mandatory=$true)]
        [string]$SwitchName="",
        [Parameter(Mandatory=$true)]
        [bool]$SrIOV=$false
    )
    #Variables
    $CurrentDate = Get-Date -Format d
    $LogPath = "C:\CreateConvergedNetworkLog.txt"
    $ManagmentOSIPv4="10.150.250.5"
    $ManagmentOS2IPv4="10.250.251.5"
    #$CommanGatewayIPv4="10.10.11.254"
    $ManagmentDNS1="10.150.250.1"
    $ManagmentDNS2="10.150.250.3"
    $ManagmentDNS3="10.250.251.1"
    $ManagmentDNS4="10.250.251.3"
    $ClusterIPv4="10.253.251.1"
    $LiveMigrationIPv4="10.253.250.1"
    $CSVIPv4="10.100.250.1"
    $CSV2IPv4="10.250.100.1"
    #Set Execution Policy
    Write-Host "Setting policy settings..."
    Set-ExecutionPolicy UnRestricted
    try
    {
        # Get existing network adapters that are online
        if($SrIOV)
        {
            #$sriov_adapters = Get-NetAdapterSriov | ? Status -eq Up | % Name # Get SRIOV Adapters
            $adapters = Get-NetAdapterSriov | ? Status -eq Up | % Name # Get SRIOV Adapters
            Enable-NetAdapterSriov $adapters # Enable SRIOV on the adapters
        }
        else
        {
            $adapters = Get-NetAdapterSriov | % Name
            #$adapters = Get-NetAdapter | ? Status -eq Up | % Name
        }
        # Create NIC team
        if ($adapters.length -gt 1)
        {
            Write-Host "$CurrentDate --> Creating NIC team $TeamAdapterName..."
            Write-Output "$CurrentDate --> Creating NIC team $TeamAdapterName..." | Add-Content $LogPath
            #New-NetLbfoTeam -Name "ConvergedNetTeam" -TeamMembers $adapters -Confirm:$false | Add-Content $LogPath
            New-NetLbfoTeam -Name $TeamAdapterName -TeamMembers $adapters -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false | Add-Content $LogPath
        }
        else
        {
            Write-Host "$CurrentDate --> Check to ensure that at least 2 NICs are available for teaming"
            throw "$CurrentDate --> Check to ensure that at least 2 NICs are available for teaming" | Add-Content $LogPath
        }
        # Wait for team to come online for 60 seconds
        Start-Sleep -s 60
        if ((Get-NetLbfoTeam).Status -ne "Up")
        {
            Write-Host "$CurrentDate --> The ConvergedNetTeam NIC team is not online. Troubleshooting required"
            throw "$CurrentDate --> The ConvergedNetTeam NIC team is not online. Troubleshooting required" | Add-Content $LogPath
        }
        # Create a new Virtual Switch
        if($SrIOV) #SRIOV based VM Switch
        {
            Write-Host "$CurrentDate --> Configuring converged fabric $SwitchName with SRIOV..."
            Write-Output "$CurrentDate --> Configuring converged fabric $SwitchName with SRIOV..." | Add-Content $LogPath
            #New-VMSwitch "ConvergedNetSwitch" -MinimumBandwidthMode Weight -NetAdapterName "ConvergedNetTeam" -EnableIov $true -AllowManagementOS 0
            New-VMSwitch $SwitchName -MinimumBandwidthMode Weight -NetAdapterName $TeamAdapterName -EnableIov $true -AllowManagementOS 0
            $CreatedSwitch = $true
        }
        else #Standard VM Switch
        {
            Write-Host "$CurrentDate --> Configuring converged fabric $SwitchName..."
            Write-Output "$CurrentDate --> Configuring converged fabric $SwitchName..." | Add-Content $LogPath
            #New-VMSwitch "ConvergedNetSwitch"-MinimumBandwidthMode Weight -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 0
            New-VMSwitch $SwitchName -MinimumBandwidthMode Weight -NetAdapterName $TeamAdapterName -AllowManagementOS $false
            $CreatedSwitch = $true
        }
        if($CreatedSwitch)
        {
            #Set Default QoS
            Write-Host "$CurrentDate --> Setting default QoS policy on $SwitchName..."
            Write-Output "$CurrentDate --> Setting default QoS policy $SwitchName..." | Add-Content $LogPath
            #Set-VMSwitch "ConvergedNetSwitch"-DefaultFlowMinimumBandwidthWeight 30
            Set-VMSwitch $SwitchName -DefaultFlowMinimumBandwidthWeight 20
            #Creating Management OS Adapters (SYD-MGMT)
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Management OS"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Management OS" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "SYD-MGMT" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "SYD-MGMT" -MinimumBandwidthWeight 30 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SYD-MGMT" -Access -VlanId 0
            #Creating Management OS Adapters (MEL-MGMT)
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Management OS"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Management OS" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "MEL-MGMT" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "MEL-MGMT" -MinimumBandwidthWeight 30 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MEL-MGMT" -Access -VlanId 0
            #Creating Cluster Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Cluster"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Cluster" | Add-Content $LogPath
            #Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedNetSwitch"
            Add-VMNetworkAdapter -ManagementOS -Name "HV-Cluster" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "HV-Cluster" -MinimumBandwidthWeight 20 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HV-Cluster" -Access -VlanId 0
            #Creating LiveMigration Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for LiveMigration"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for LiveMigration" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "HV-MIG" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "HV-MIG" -MinimumBandwidthWeight 40 -VmqWeight 90
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HV-MIG" -Access -VlanId 0
            #Creating iSCSI-A Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-A"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-A" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -MinimumBandwidthWeight 40 -VmqWeight 100
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 0
            #Creating iSCSI-B Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-B"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-B" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -MinimumBandwidthWeight 40 -VmqWeight 100
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 0
            Write-Host "Waiting 40 seconds for virtual devices to initialise"
            Start-Sleep -Seconds 40
            #Configure the IP's for the Virtual Adapters
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC" | Add-Content $LogPath
            #New-NetIPAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -IPAddress $ManagmentOSIPv4 -PrefixLength 24 -DefaultGateway $CommanGatewayIPv4
            New-NetIPAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -IPAddress $ManagmentOSIPv4 -PrefixLength 24
            Set-DnsClientServerAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -ServerAddresses ($ManagmentDNS1, $ManagmentDNS2)
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (MEL-MGMT)" -IPAddress $ManagmentOS2IPv4 -PrefixLength 24
            Set-DnsClientServerAddress -InterfaceAlias "vEthernet (MEL-MGMT)" -ServerAddresses ($ManagmentDNS3, $ManagmentDNS4)
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Cluster virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Cluster virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (HV-Cluster)" -IPAddress $ClusterIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (HV-Cluster)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Configuring IPv4 address for the LiveMigration virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the LiveMigration virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (HV-MIG)" -IPAddress $LiveMigrationIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (LiveMigration)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Configuring IPv4 address for the iSCSI-A virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the iSCSI-A virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-A)" -IPAddress $CSVIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (iSCSI-A)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Configuring IPv4 address for the iSCSI-B virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the iSCSI-B virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-B)" -IPAddress $CSV2IPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (CSV2)" -ServerAddresses $ManagmentDNS1
            #Write-Host "$CurrentDate --> Configuring IPv4 address for the VMNet virtual NIC"
            #Write-Output "$CurrentDate --> Configuring IPv4 address for the VMNet virtual NIC" | Add-Content $LogPath
            #New-NetIPAddress -InterfaceAlias "vEthernet (VMNet)" -IPAddress $VMNetIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (VMNet)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Hyper-V Configuration is Complete"
            Write-Output "$CurrentDate --> Hyper-V Configuration is Complete" | Add-Content $LogPath
        }
    }
    catch [Exception]
    {
        throw "$_" | Add-Content $LogPath
    }
    I would really like to know why I'm getting such poor performance. Any help on this would be most appreciated.

    I didn't parse the entire script, but a few things stand out.
    SR-IOV and teaming don't mix. The purpose of SR-IOV is to go straight from the virtual machine into the physical adapter and back, completely bypassing the entire Hyper-V virtual switch and everything that goes with it. Team, or SR-IOV - not both.
    You're adding DNS servers to adapters that don't need them. Inbound traffic is going to be confused, to say the least. The only adapter that should have DNS addresses is the management adapter. For all others, you should run Set-DnsClient -RegisterThisConnectionsAddress $false.
    I don't know that I'm reading your script correctly, but it appears you have multiple adapters set up for management. That won't end well.
    It also looks like you have QoS weights that total over 100. That also won't end well.
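    (Adding up the MinimumBandwidthWeight values the script assigns - 20 for the default flow plus 30 + 30 + 20 + 40 + 40 + 40 for the virtual NICs - gives 220, more than double the 100 referred to here.)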
    I don't know that these explain poor performance like you're describing, though. It could just be that you're a victim of network adapters/drivers that have poor support for VMQ. Bad VMQ is worse than no VMQ. But VMQ + teaming + SR-IOV sounds like a recipe for heartache to me, so I'd start with that.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Poor Performance with 10.1

    I'm on a Windows XP SP3 machine with an X800 XT PE graphics card using the latest Catalyst drivers from ATI. Ever since I updated Flash, I'm getting very poor performance during 720p playback with acceleration enabled. I'm also getting bad performance when it's disabled. Is there an issue with the Catalyst 10.2 legacy (ATI) drivers not enabling hardware acceleration with Flash 10.1?
    BTW I went back to using 10.0.42.34 flash and everything is running fine again.

    I've also noticed poor performance since installing the 10.1 plugin.  Not just in HD video etc, but in general use.  So much for "improved performance"...
    No idea on solutions at this stage - I'm in the process of downgrading to the previous version to see if that fixes my problems.

  • Poor performance with SVM mirrored stripes

    Problem: poor write performance with the configuration included below. Before attaching the striped subdevices to the mirror, they write/read very fast (>20,000 kw/s according to iostat), but after the mirror is attached and synced, performance is <2,000 kw/s. I've got standard SVM-mirrored root disks that perform at >20,000 kw/s, so something seems odd. Any help would be greatly appreciated.
    Config follows:
    Running 5.9 Generic_117171-17 on a sun4u SPARC SUNW,Sun-Fire-V890. The configuration is 4 x 72 GB 10K RPM disks with the following layout:
    d0: Mirror
    Submirror 0: d3
    State: Okay
    Submirror 1: d2
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 286607040 blocks (136 GB)
    d3: Submirror of d0
    State: Okay
    Size: 286607040 blocks (136 GB)
    Stripe 0: (interlace: 4096 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c1t4d0s0 0 No Okay Yes
    c1t5d0s0 10176 No Okay Yes
    d2: Submirror of d0
    State: Okay
    Size: 286607040 blocks (136 GB)
    Stripe 0: (interlace: 4096 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c1t1d0s0 0 No Okay Yes
    c1t2d0s0 10176 No Okay Yes

    I have the same issue and it is killing our iCal Server performance.
    My question would be: can you edit an Xsan volume without destroying the data?
    There is one setting that I think might ease the pressure, in the Cache Settings (Volumes -> <Right-Click Volume> -> Edit Volume Settings).
    Would increasing the amount of cache help? Would it destroy the data?
