Best Practices for dashboards using Query Browser

Hi,
As per the business requirement, our dashboards use the Query Browser to fetch data from universes built on SSAS cubes. The issue is that the dashboard takes around a minute to load initially, which is far too long for the requirement. Is there any document, or a basic list of points to take care of, on improving performance for these dashboards? There are around 30 queries in use (they cannot be consolidated to reduce the number). Also, does using ranking logic at the Excel level affect performance? (All VLOOKUPs and other "should not be used" formulas have already been omitted.)
Thanks in advance.

Hi Aarti,
Check the thread http://scn.sap.com/thread/3482541.
Any functions used at the Excel level, such as ranking and VLOOKUP, affect performance.
Also check the components you have used in the dashboard, as some of them affect performance.
Regards,
JC

Similar Messages

  • Best Practices For Dashboard design through BI

    Hi
      If I want to create a dashboard through BI7, what is the best way to suggest to the client? Which approach does SAP recommend as best practice for dashboard design in BI7?
    Thanks & Regards,
    Praveen

    Solved

  • Best practice for a same query against 2 different tables

    Hello all,
    I want to extract info about tablespace storage, both permanent and temporary. For that I use two different cursors that run exactly the same query, but against different tables (dba_data_files and dba_temp_files).
    CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_data_files
    WHERE tablespace_name = tablespaceName;
    CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_temp_files
    WHERE tablespace_name = tablespaceName;
    First, it bothers me that I have to use two cursors to execute the same query against two different tables. Is there another way around this?
    Then I fetch the results of these cursors in two different loops, because I didn't find a way to call the cursors dynamically. I am looking for best practice here, knowing that I will apply the same parsing to the results of both cursors.
    Thank you,

    Hi
    Check whether the query below is helpful:
    select fs.tablespace_name "Tablespace",
           fs.tempspace "Temp MB",
           df.totalspace "Total MB"
      from (select tablespace_name,
                   round(sum(bytes) / 1048576) totalspace
              from dba_data_files
             group by tablespace_name) df,
           (select tablespace_name,
                   round(sum(bytes) / 1048576) tempspace
              from dba_temp_files
             group by tablespace_name) fs
     where df.tablespace_name = fs.tablespace_name;
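    On the original question of avoiding two near-identical cursors: since dba_data_files and dba_temp_files expose the same columns used here, one option (a sketch, untested) is a single cursor over a UNION ALL, with a literal column tagging each source so that one fetch loop can parse both:
    CURSOR tbsStorageInfo (tablespaceName VARCHAR2) IS
      -- Permanent data files...
      SELECT 'PERMANENT' AS file_type,
             file_name, bytes, autoextensible, maxbytes, increment_by
        FROM dba_data_files
       WHERE tablespace_name = tablespaceName
      UNION ALL
      -- ...and temp files, tagged so a single loop can handle both.
      SELECT 'TEMPORARY' AS file_type,
             file_name, bytes, autoextensible, maxbytes, increment_by
        FROM dba_temp_files
       WHERE tablespace_name = tablespaceName;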
    Thanks

  • Best Practice for CTS_Project use in a Non-ChARM ECC6.0 System

    We are on ECC6.0 and do not leverage Solution Manager to any extent.  Over the years we have performed multiple technical upgrades but in many ways we are running our ECC6.0 solution using the same tools and approaches as we did back in R/3 3.1. 
    The future vision for us is to utilize CHARM to manage our ITIL-centric change process but we have to walk before we can run and are not yet ready to make that leap.  Currently we are just beginning to leverage CTS_Projects in ECC as a grouping tool for transports but are still heavily tied to Excel-based "implementation plans".  We would appreciate references or advice on best practices to follow with respect to the creation and use of the CTS_Projects in ECC.
    Some specific questions: 
    #1 Is there merit in creating new CTS Projects for support activities each year?  For example, we classify our support system changes as "Normal", "Emergency", and "Standard".  These correspond to changes deployed on a periodic schedule, priority one changes deployed as soon as they are ready, and changes that are deemed to be "pre-approved" as they are low risk. Is there a benefit to create a new CTS_Project each year e.g. "2012 Emergencies", "2013 Emergencies" etc. or should we just create a CTS_Project "Emergencies" which stays open forever and then use the export time stamp as a selection criteria when we want to see what was moved in which year?
    #2 We experienced significant system performance issues on export when we left the project intersections check on. There are many OSS notes about the performance of this tool, but in the end we opted to turn off this check. Does anyone use this functionality? Any recommendations?
    Any other advice would be greatly appreciated.

    Hi,
    I created a project (JDeveloper) with local XSD files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the XSDs anymore, and for the payload in the variables there is an exception (problem building schema).
    How does bpel know where to look for the xsd-files and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Best practice for the use of reserved words

    Hi,
    What is the best practice to observe for using reserved words as column names?
    For example if I insisted on using the word comment for a column name by doing the following:
    CREATE TABLE ...
    "COMMENT" VARCHAR2(4000),
    What impact down the track could I expect and what problems should I be aware of when doing something like this?
    Thank You
    Ben

    Hi, Ben,
    Benton wrote:
    Hi,
    What is the best practice to observe for using reserved words as column names?
    Sybrand is right (as usual): the best practice is not to use them.
    For example if I insisted on using the word comment for a column name by doing the following:
    CREATE TABLE ...
    "COMMENT" VARCHAR2(4000),
    What impact down the track could I expect and what problems should I be aware of when doing something like this?
    Using reserved words as identifiers is asking for trouble. You can expect to get what you ask for.
    Whatever benefits you may get from naming the column COMMENT rather than, say, CMNT or EMP_COMMENT (if the table is called EMP) will be insignificant compared to the extra debugging you will certainly need.
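    To make the cost concrete, a minimal sketch (hypothetical table name): once the column is created as a quoted identifier, every later reference has to repeat the quotes with exactly the same case:
    CREATE TABLE emp_notes (
      emp_id    NUMBER,
      "COMMENT" VARCHAR2(4000)  -- accepted only because it is quoted
    );
    -- Fails: COMMENT is a reserved word when written without quotes.
    -- SELECT comment FROM emp_notes;
    -- Works, but every query, view and generated script must remember the quotes:
    SELECT "COMMENT" FROM emp_notes;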

  • Best practice for ConcurrentHashMap use?

    Hi All, would the following be considered "best practice", or is there a better way of doing the same thing? The requirement is to have a single unique "Handler" object for each Key:
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class HandlerManager {
        private final Object lock = new Object();
        private final Map<Key,Handler> map = new ConcurrentHashMap<Key,Handler>();

        public Handler getHandler(Key key) {
            Handler handler = map.get(key);
            if (handler == null) {
                synchronized (lock) {
                    // Re-check under the lock so only one Handler is created per Key.
                    handler = map.get(key);
                    if (handler == null) {
                        handler = new Handler();
                        map.put(key, handler);
                    }
                }
            }
            return handler;
        }
    }
    Clearly this is the old "double-checked locking" pattern, which didn't work until 1.5 and now only works with volatiles. I believe I will get away with it because I'm using a ConcurrentHashMap.
    Any opinions? is there a better pattern?
    Thanks,
    Huw

    My personal choice would be to use the reliable "single-checked-locking" pattern:
        public Handler getHandler(Key key) {
            synchronized (lock) {
                Handler handler = map.get(key);
                if (handler == null) {
                    handler = new Handler();
                    map.put(key, handler);
                }
                return handler;
            }
        }
    But I'm afraid the Politically Correct way of doing it nowadays looks as ugly as this:
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class HandlerManager {
        private final Map<Key,Handler> map = new HashMap<Key,Handler>();
        private final Lock readLock;
        private final Lock writeLock;

        public HandlerManager() {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            readLock = lock.readLock();
            writeLock = lock.writeLock();
        }

        public Handler getHandler(Key key) {
            Handler handler = null;
            readLock.lock();
            try {
                handler = map.get(key);
            } finally {
                readLock.unlock();
            }
            if (handler == null) {
                writeLock.lock();
                try {
                    // Re-check: another thread may have created the Handler between
                    // releasing the read lock and acquiring the write lock.
                    handler = map.get(key);
                    if (handler == null) {
                        handler = new Handler();
                        map.put(key, handler);
                    }
                } finally {
                    writeLock.unlock();
                }
            }
            return handler;
        }
    }

  • Best practice for development using REST API - OData

    Hi All, I am new to REST. I am a developer who works mostly in server-side code using Visual Studio. Now that Microsoft is advocating writing code against the REST API instead of server-side code or the client-side object model, I am trying to use the REST API.
    I googled, and most of the examples show writing code and putting it in a Content Editor/Script Editor web part. How do I organize code and deploy it to staging/production in this scenario? Is there any best practice or example around this?
    Regards,
    Khushi

    If you are writing code in aspx or cs, that does not mean you need to deploy it on the SharePoint server; it could be any other application running on a remote server. What I mean is that you can use C# and the REST API to connect to the SharePoint server.
    The REST API in SharePoint 2013 provides developers with a simple, standardized method of retrieving information from SharePoint, and it can be used from any technology that is capable of sending standard HTTP requests.
    Refer to the following links, which provide more details and a comparison of the major features of these programming choices:
    http://msdn.microsoft.com/en-us/library/jj164060.aspx#RESTODataA
    http://dlr2008.wordpress.com/2013/10/31/sharepoint-2013-rest-api-the-c-connection-part-1-using-system-net-http-httpclient/
    Hope this helps
    --Cheers

  • SqlCeConnection in CE 4.0: best practices for desktop use?

    I am using SQL Server CE 4.0 in a WinForms desktop application and was wondering what the recommended practices are for using SqlCeConnection in this context. I noticed some older threads on the topic, however some of them talk mostly about mobile or are from a long time ago.
    1. Should a new SqlCeConnection be created every time the DB is accessed, or is it better to have one SqlCeConnection open and shared through the entire application? My application can make frequent writes to the DB, and when this happens I notice a noticeable UI lag if a new SqlCeConnection is created for each DB access.
    2. Can SqlCeConnection instances be shared across threads? On occasion my application performs DB reads in a background thread. Can I use the same SqlCeConnection instance from different threads?
    3. Are there any additional steps needed to ensure thread safety when multiple threads can access the DB simultaneously and SQL Server CE is deployed in embedded mode? When I created a new SqlCeConnection for each DB access I ran into a few crashes and Monitor exceptions, possibly when multiple threads were accessing the same database.
    I am using SQL Server CE 4.0 embedded in a WinForms 2.0 Windows desktop application (non-mobile). Any advice would be appreciated.

    You must use a separate SqlCeConnection (and related) object per thread, as these objects are not thread safe.
    I recommend opening the database on the main thread and keeping that connection open but unused for the lifetime of the app, to keep the database file and engine "warm".
    Please mark as answer, if this was it. Visit my SQL Server Compact blog http://erikej.blogspot.com

  • Best practices for creating and querying a history table?

    Suppose I have a table of name-value pairs, and I want to keep track of changes to them so that I can query the value of any pair at any point in time.
    A direct approach would be to use a schema like this:
    CREATE TABLE NAME_VALUE_HISTORY (
      NAME     VARCHAR2(...),
      VALUE    VARCHAR2(...),
      MODIFIED DATE
    );
    When a name-value pair is updated, a new row is added to this table with the date of the change.
    To determine the value associated with a name at a particular point in time, one uses a query like:
      SELECT * FROM NAME_VALUE_HISTORY
       WHERE NAME = :name
         AND MODIFIED IN (SELECT MAX(MODIFIED)
                            FROM NAME_VALUE_HISTORY
                           WHERE NAME = :name AND MODIFIED <= :time)
    My question is: is there a better way to accomplish this? What indexes/hints would you recommend?
    What about a two-table approach like this one? http://pratchev.blogspot.com/2007/05/keeping-history-data-in-sql-server.html
    Edited by: user10936714 on Aug 9, 2012 8:35 AM

    user10936714 wrote:
    There is one advantage... recording the change of a value is just one insert, and it is also atomic without the use of transactions.
    At the risk of being dumb, why is that an advantage? Oracle always and everywhere uses transactions, so it's not like you're avoiding some overhead by not using transactions.
    If, for instance, the performance of reading the value of a name at a point in time is not important, then you can get by with just using one table - the history table.
    If you're not overly concerned with the performance implications of having the current data and the history data in the same table, then rather than rolling your own solution, I'd be strongly tempted to use Workspace Manager to let Oracle keep track of the changes.
    You can create a table, enable versioning, and do whatever DML operations you'd like
    SQL> create table address(
      2    address_id number primary key,
      3    address    varchar2(100)
      4  );
    Table created.
    SQL> exec dbms_wm.enableVersioning( 'ADDRESS', 'VIEW_WO_OVERWRITE' );
    PL/SQL procedure successfully completed.
    SQL> insert into address values( 1, 'First Address' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update address
      2     set address = 'Second Address'
      3   where address_id = 1;
    1 row updated.
    SQL> commit;
    Commit complete.
    Then you can either query the history view:
    SQL> ed
    Wrote file afiedt.buf
      1  select address_id, address, wm_createtime
      2*   from address_hist
    SQL> /
    ADDRESS_ID ADDRESS                        WM_CREATETIME
             1 First Address                  09-AUG-12 01.48.58.566000 PM -04:00
             1 Second Address                 09-AUG-12 01.49.17.259000 PM -04:00
    Or, even cooler, you can go back to an arbitrary point in time, run a query, and see the historical information. I can go back to a point between the time that I committed the first change and the second change, query the ADDRESS view, and see the old data. This is invaluable if you want to take existing queries and/or reports and run them as of certain dates in the past when you're trying to debug a problem.
    SQL> select *
      2    from address;
    ADDRESS_ID ADDRESS
             1 First Address
    You can also do things like set savepoints, which are basically named points in time that you can go back to. That lets you do things like create a savepoint for the data as soon as month-end processing is completed, so you can easily go back to "July Month End" without needing to figure out exactly what time that occurred. And you can have multiple workspaces, so different users can work on completely different sets of changes simultaneously without interfering with each other. This was actually why Workspace Manager was originally created: to allow users manipulating spatial data to have extremely long-running transactions that could span days or months, and to be able to switch back and forth between the current live data and the data in each of these long-running scenarios.
    Justin
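    For reference, if the single-table approach from the original post is kept, the point-in-time lookup is usually well served by a composite index on (NAME, MODIFIED), and the query can use an analytic function instead of the MAX subquery. A sketch (untested, against the schema in the question):
    CREATE INDEX name_value_hist_ix ON name_value_history (name, modified);

    -- Latest value for :name as of :time, without a second scan for MAX(MODIFIED).
    SELECT name, value
      FROM (SELECT name, value,
                   ROW_NUMBER() OVER (ORDER BY modified DESC) AS rn
              FROM name_value_history
             WHERE name = :name
               AND modified <= :time)
     WHERE rn = 1;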

  • Firewall what is best practice for home use?

    Hi (from France) and thanks in advance.
    I want to activate the firewall but don't know which settings to choose for the right mix of security and ease of use. My iMac is connected by Ethernet to a Dartybox (router) and shares internet access with my wife's MacBook Pro via Wi-Fi.

    simonfromfra wrote:
    The 'dartybox' router has a very poor security rating - so my reading tells me - and is the easiest of all French "out of the box all in one" internet connections to hijack (crack codes are well documented)... hence my desire for a second layer of protection.
    Take a look at: IceFloor
    IceFloor is not a firewall. Your Mac already features three built-in firewalls:
    • ALF
    • PF
    • IPFW
    ALF is an application firewall, while both PF and IPFW are network firewalls.
    ALF can be configured using System Preferences “Security” pane.
    PF and IPFW do not have a default graphic interface shipped with OS X, so they can be configured only using the shell terminal. Using the terminal is the favourite choice of most system administrators.
    Since Mac OS X 10.7 IPFW has been deprecated. PF is the new default OS X network firewall. To configure PF without using the shell terminal we must use a graphic interface (frontend) from third party software makers.
    IceFloor is a graphic frontend for PF.
    It features a lot of advanced tools and options that will let you set up a very complex PF ruleset.
    IceFloor needs OS X 10.7 or newer.
    Bandwidth management with PF/Dummynet is supported on OS X 10.8 and newer.
    The PF ruleset created with IceFloor 2.0 has been tested on:
    OS X 10.7, OS X 10.8, OS X 10.9, iOS 5, iOS 6, FreeBSD 9, OpenBSD 4.3
    We also developed a limited version of IceFloor named PFLists.
    PFLists is a free and open source basic PF frontend with the same requirements as IceFloor 2.

  • Best Practice for Use of ABAP in Customizing SRM and/or CRM

    I was wondering if there is a document that defines best practices for the use of ABAP in the installation and customization of SRM and/or CRM, such as the amount of ABAP coding typically required, and best practices around the use of ABAP for customization and configuration.
    Thanks.

    Hi, Johnson
    Sorry, please don't mind, but you are not in the right place to ask a question like this.
    Please read "The Forum Rules of Engagement" before posting!
    Thanks and Regards,
    Faisal

  • Query Best Practice for Reports

    I am new to Apex and I am wondering what the best practice is for storing your SQL queries for reports. I am a believer in storing all SQL behind package functions or procedures, and it looks like the only options for report pages are to use a direct SQL query, or a function that returns a query as a string. Yes, the function method counts as putting the code in Oracle, but not really: it still gets compiled and parsed on the Apex side. It would be nice if Apex could handle a cursor, but I have read that it doesn't directly; you have to have a function that returns a cursor and then create a pipelined function that calls the cursor function. That is kind of silly. Is there some other way to do this?
    Apex 4.2
    Oracle 11.2.0.2
    Thanks for any input.
    Jeff

    Hi Jeff,
    I'm not necessarily a believer in packaging queries. I'm a little more pragmatic in that I believe it may make sense in environments where you have a client environment that just expects a result set that is then manipulated by the client for the purposes of presentation, pagination etc. Apex has a different architecture in that the client is purely an HTML presentation layer (browser) and the presentation, pagination etc is formulated in the database along with the data using the Oracle web toolkit, which is a set of internal packages that produce HTML. Note that handling and manipulating ref cursors inside PL/SQL is not a joy, they were mainly designed to be passed out to external clients. (Often to shield programmers who don't or won't even try to understand relational concepts)
    This means that when you create a report based on a query, the Apex engine will manipulate that base query, depending on the display requirements and pagination requirements of your report, before it submits that query to the database for execution. To get an idea of how this manipulation occurs, you can run your report in debug mode and check the actual query that is submitted to the database. If the query is presented as an already executed ref cursor, then the Apex engine can't execute in the way that it does. As you have already found out, the only way of using packaged queries returning ref cursors is by the use of a pipelined function, so that the Apex engine can treat the result as a normal query.
    This is the architecture of Apex, and I suspect that re-engineering the Apex engine to handle ref cursors natively, as opposed to using a pipelining trick, would be a considerable change. I hope this at least helps to explain why ref cursors and Apex don't mix. I personally don't see the purpose of having an abstraction layer of packaged queries below an abstraction layer of an API such as Apex. SQL is a perfectly good API.
    Regards
    Andre
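    For completeness, the pipelined-function workaround Jeff mentions looks roughly like this (a sketch; the package, function, and EMP table names are hypothetical):
    CREATE OR REPLACE PACKAGE report_pkg AS
      TYPE emp_rec IS RECORD (empno NUMBER, ename VARCHAR2(30));
      TYPE emp_tab IS TABLE OF emp_rec;
      FUNCTION emp_report RETURN emp_tab PIPELINED;
    END report_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY report_pkg AS
      FUNCTION emp_report RETURN emp_tab PIPELINED IS
        r emp_rec;
      BEGIN
        -- The packaged query lives here instead of in the Apex region.
        FOR c IN (SELECT empno, ename FROM emp) LOOP
          r.empno := c.empno;
          r.ename := c.ename;
          PIPE ROW (r);
        END LOOP;
        RETURN;
      END emp_report;
    END report_pkg;
    /
    -- The Apex report region source then becomes an ordinary query again:
    SELECT * FROM TABLE(report_pkg.emp_report);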

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what the best practice is for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List object for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. In the case of querying in getXXX, setXXX is also tricky as you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle them in an action listener (like save(actionEvent)).
    I would be glad to see your thoughts on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent-children list of records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent as request scope, the setChildren method is never called when I submit the form. I am not sure if this is specific to ADF or a JSF problem, but if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes; it only updates the input component value binding. Can someone please verify this is true?
    In many cases, I found a request bean very hard to work with if there are lots of updates (I had lots of trouble updating the binding value for rendered attributes).
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.

  • Best practice for use of spatial operators

    Hi All,
    I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results which are within a given geometry - for example, selecting parish boundaries within a county.
    Our boundary data is highly detailed, commonly containing upwards of 50,000 vertices for a county-sized polygon.
    I've currently been experimenting with queries such as:
    select
    from
    uk_ward a,
    uk_county b
    where
    UPPER(b.name) = 'DORSET COUNTY' and
    sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
    However the speed is unacceptable, especially as most of the implementations of the toolkit will be web based. The query above takes around a minute to return.
    Any comments or thoughts on the best practice for use of Oracle spatial in this way will be warmly welcomed. I'm looking for a solution which is as quick and efficient as possible.

    Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results from the execution plan, run in SQL*Plus:
    Elapsed: 00:01:24.81
    Execution Plan
    Plan hash value: 598052089
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
    |* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
    | 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
    |* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
    Predicate Information (identified by operation id):
    2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
    4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+inside')='TRUE')
    Statistics
    20431 recursive calls
    60 db block gets
    22432 consistent gets
    1156 physical reads
    0 redo size
    2998369 bytes sent via SQL*Net to client
    1158 bytes received via SQL*Net from client
    17 SQL*Net roundtrips to/from client
    452 sorts (memory)
    0 sorts (disk)
    125 rows processed
    The wards table has 7545 rows, the county table has 207.
    We are currently on release 10.2.0.3.
    All I want to do with this is generate results which fall in a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon - I guess due to the number of vertices.
    Also looking through the forums now for tuning topics...
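    For what it's worth, one commonly suggested formulation for this kind of query (a sketch against the tables above, untested) is to force the single county row to drive the spatial domain index with an ORDERED hint; your plan shows this join order already, but the hint protects it as the data grows:
    select /*+ ORDERED */ a.*
      from uk_county b,
           uk_ward a
     where upper(b.name) = 'DORSET COUNTY'
       and sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';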

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html
