Performance: many queries used in dashboard

Hi experts,
My dashboard contains more than 30 queries running on 2 MultiProviders. When looking in SM50 it seems as though only 2 queries run in parallel after pressing the submit button. Moreover, the queries that should provide results on the first tab page in my model only return results after all other tables and charts are already filled.
I would like more queries to run in parallel, and to give the highest priority to the queries on the first tab page. Is there somewhere I can make this setting (e.g. on MultiProvider level)?
Answers are appreciated and will be awarded with points.
Thanks in advance,
Ralph

Hi,
I am facing a similar problem with our customer's application. However, activating dedicated connections for nested iViews did not help. We have several iViews, each containing 1 to 7 queries, approx. 30 queries in the whole model. Each query runs for about 1 second, yet the whole model takes approx. 35 seconds, even though the data flow is modeled in a parallel way (i.e. the queries should be executed simultaneously).
Who can help?
Thanks in advance and kind regards,
  Benni

Similar Messages

  • Using JDBC to perform spatial queries using SDO operators

    Hi,
    I want to create a routine to perform the SDO_ANYINTERACT function based on a rectangle as in the following query:
    select LINK_ID from SDO_LINK where
    SDO_ANYINTERACT ( SDO_LINK.LINK, SDO_GEOMETRY(2003,8307,NULL,
    SDO_ELEM_INFO_ARRAY(1,1003,3),
    SDO_ORDINATE_ARRAY(-122.38845,37.78889,-122.38651,37.78923))) = 'TRUE'
    This query works fine when passed to the executeQuery method of a JDBC Statement. But when I tried to create a PreparedStatement and set the 4 parameters (lon1, lat1, lon2, lat2) that are passed to the SDO_ORDINATE_ARRAY function:
              String sql = "select LINK_ID from SDO_LINK where " +
                   "SDO_ANYINTERACT ( SDO_LINK.LINK, SDO_GEOMETRY(2003,8307,NULL,SDO_ELEM_INFO_ARRAY(1,1003,3),SDO_ORDINATE_ARRAY(" +
                   "?,?,?,?" +
                   "))) = 'TRUE'";
    I am getting an Oracle exception:
    java.sql.SQLException: ORA-03115: unsupported network datatype or representation
    I have tried setting the parameters using setString as well as setFloat methods but still get this error.
    Does anyone know what I am doing wrong? Does anyone have an example of calling SDO_ANYINTERACT or other SDO operators via JDBC?
    Thanks
    BW

    Getting the connection is straightforward:
    try {
        Class.forName("oracle.jdbc.driver.OracleDriver");
    } catch (ClassNotFoundException e) {
        throw new SQLException(e.toString());
    }
    Connection conn = DriverManager.getConnection(connURL, userName, userPass);
    I have validated that my query executes and returns results fine when I pass the entire query as a string to executeQuery:
    String sql = "select LINK_ID from SDO_LINK where SDO_ANYINTERACT ( SDO_LINK.LINK, SDO_GEOMETRY(2003,8307,NULL,SDO_ELEM_INFO_ARRAY(1,1003,3),SDO_ORDINATE_ARRAY(-122.38845,37.78889,-122.38651,37.78923))) = 'TRUE'";
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery(sql);
    // Iterate through result set here
    The issue is I am not sure how to pass the 2 (lon, lat) coordinates for the SDO_ORDINATE_ARRAY operation in a prepared statement. Perhaps I have to just construct a string?
    For example, doing the following:
    String sql = "select LINK_ID from SDO_LINK where " +
         "SDO_ANYINTERACT ( SDO_LINK.LINK, SDO_GEOMETRY(2003,8307,NULL,SDO_ELEM_INFO_ARRAY(1,1003,3),SDO_ORDINATE_ARRAY(" +
         "?,?,?,?" +
         "))) = 'TRUE'";
    if (conn == null)
         openConnection();
    PreparedStatement pstmt = conn.prepareStatement(sql);
    pstmt.setFloat(1, Float.parseFloat(lon1));
    pstmt.setFloat(2, Float.parseFloat(lat1));
    pstmt.setFloat(3, Float.parseFloat(lon2));
    pstmt.setFloat(4, Float.parseFloat(lat2));
    Then calling:
    ResultSet rs = pstmt.executeQuery(sql);
    Which results in the exception.
    Actually, I found the problem: I should NOT pass the SQL string to executeQuery when using the PreparedStatement.
    The following works:
    ResultSet rs = pstmt.executeQuery();
    Edited by: wisegb on May 21, 2009 5:39 AM
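    For reference, here is a minimal end-to-end sketch of the working approach (the connection URL and credentials are placeholders; the SDO_LINK table is taken as given from the thread):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SdoAnyInteractExample {
        public static void main(String[] args) throws SQLException {
            String connURL = "jdbc:oracle:thin:@//localhost:1521/ORCL"; // assumed URL
            String sql = "select LINK_ID from SDO_LINK where "
                    + "SDO_ANYINTERACT(SDO_LINK.LINK, SDO_GEOMETRY(2003, 8307, NULL, "
                    + "SDO_ELEM_INFO_ARRAY(1,1003,3), SDO_ORDINATE_ARRAY(?,?,?,?))) = 'TRUE'";
            try (Connection conn = DriverManager.getConnection(connURL, "user", "pass");
                 PreparedStatement pstmt = conn.prepareStatement(sql)) {
                // Bind the rectangle corners: lower-left (lon, lat), upper-right (lon, lat).
                pstmt.setDouble(1, -122.38845);
                pstmt.setDouble(2, 37.78889);
                pstmt.setDouble(3, -122.38651);
                pstmt.setDouble(4, 37.78923);
                // Key point from the thread: executeQuery() takes NO argument on a
                // PreparedStatement; passing the SQL string again raises ORA-03115.
                try (ResultSet rs = pstmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("LINK_ID"));
                    }
                }
            }
        }
    }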

  • Poor performance of BLOB queries using ODBC

    I'm getting very poor performance when querying a BLOB column using ODBC. I'm using an Oracle 10g database and the Oracle 10g ODBC driver on Windows XP.
    I create two tables:
    create table t1 ( x int primary key, y raw(2000) );
    create table t2 ( x int primary key, y blob );
    Then I load both tables with the same data. Then I run the following queries using ODBC:
    SELECT x, y FROM t1;
    SELECT x, y FROM t2;
    I find that the BLOB query takes about 10 times longer than the RAW query to execute.
    However, if I execute the same queries in SQL*Plus, the BLOB query is roughly as fast as the RAW query. So the problem seems to be ODBC-related.
    Has anyone else come across this problem?
    Thanks.

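    Since the comparison is easy to reproduce, here is a minimal timing harness along the lines the poster describes (sketched with JDBC rather than ODBC, so it only shows the shape of the test; the connection details are assumed):
    import java.sql.*;

    public class LobTimingHarness {
        // Executes the query and fully reads every row, forcing the RAW/BLOB
        // content to actually cross the wire; returns elapsed milliseconds.
        static long timeQuery(Connection conn, String sql) throws SQLException {
            long start = System.nanoTime();
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    rs.getInt(1);
                    rs.getBytes(2);
                }
            }
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "user", "pass")) { // assumed
                System.out.println("RAW : " + timeQuery(conn, "SELECT x, y FROM t1") + " ms");
                System.out.println("BLOB: " + timeQuery(conn, "SELECT x, y FROM t2") + " ms");
            }
        }
    }
    A gap this size usually points at per-row round trips for the LOB locators, so the driver's fetch and LOB-prefetch settings are a reasonable place to look.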

  • Performing SQL queries in Java without using the Java SQL library

    I wonder whether it's possible to perform SQL queries, from CREATE TABLE through update queries, without using the Java SQL library.
    Has anyone written such code?

    You could use JNI to talk to a native driver like the Oracle OCI driver. Doing this is either exciting or asking for trouble, depending on your attitude to lots of low-level bugs.
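    To make the shape of that concrete, the Java side of a JNI bridge would look roughly like this (a sketch only: the "ocibridge" shim library and its three entry points are hypothetical and would still have to be written in C against the OCI client):
    // Java side of a hypothetical JNI bridge to the Oracle OCI client.
    public class NativeOciBridge {
        static {
            // Loads libocibridge.so / ocibridge.dll, the C shim you would write.
            System.loadLibrary("ocibridge");
        }
        // Opens a session and returns an opaque native handle.
        public native long connect(String tnsName, String user, String password);
        // Runs a statement and returns the rows flattened to strings.
        public native String[] query(long sessionHandle, String sql);
        // Closes the session behind the handle.
        public native void disconnect(long sessionHandle);
    }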

  • Performance problems when using the SCOPE instruction?

    Hi All!
    We have an application that works with a cube based on SSAS 2008 R2. We also use the writeback function to change data in the cube.
    Now I'm looking for the bottleneck in our queries.
    We have the following MDX query (for example):
    select
    non empty{
    ([Date].[Date].[All].children
    , [Forecast Type].[Forecast Type].[All].children
    , [Stock].[Stock].[All].children
    , [Shipment].[Shipment].[All].children
    , [Invoice].[Invoice].[All].children
    , [Way Bill External].[Way Bill External].[All].children
    , [SD User].[User].[All].children
    , [SD Date].[Date].[All].children
    , [CD User].[User].[All].children
    , [CD Date].[Date].[All].children
    , [Forecast Basis].[Forecast Basis].[All].children
    , [Orders].[Orders].[All].children
    , [Rolling Forecast].[Rolling Forecast].[All].children
    , [Long Range Forecast].[Long Range].[All].children
    , [Calculated FCCR].[Calc Price].[All].children
    , [Write Table Guids].[GuidObj].[All].children)
    } dimension properties member_unique_name
    , member_type
    , member_value on rows
    , non empty {({[Measures].[Price CR]
    , [Measures].[Cost]
    , [Measures].[Cost USD]
    , [Measures].[Cost LME]
    , [Measures].[Cost MWP]
    , [Measures].[Weight]
    , [Measures].[Weight Real]})} dimension properties member_unique_name
    , member_type
    , member_value on columns
    from [MainCubeFCT]
    where ({[Currency].[Currency].&[4]}
    , {[Forecast Basis].[Customer].&[4496]}
    , {[Forecast Basis].[Consignee].&[4496]}
    , {[Forecast Condition].[Forecast Condition].&[1]}
    , {[Forecast Basis].[Alloy].&[56]}
    , {[Date].[Year Month].[Month].&[2015-05-01T00:00:00]}
    , {[Date Type].[Date Type].&[2]}
    , {[Forecast Basis].[Business Sphere2].&[4]}
    , {[Forecast Status].[Forecast Status].&[2]})
    Execution duration of this query (Query End):
    cold (after clearing the cache) - 1000
    warm - 500
    The maximum loss is on the Calculate Non Empty event - 95%.
    After some investigation I found the bottleneck in 2 measures: [Measures].[Weight] and [Measures].[Price CR].
    If they are removed from the query, the execution duration equals 50.
    In our cube the measure [Measures].[Weight] is overridden in the calculation script as:
    scope([Measures].[Weight]);
    This = iif((round([Measures].[Weight], 3)<>0), round([Measures].[Weight], 3), null);
    end scope;
    But if I change the code to
    scope([Measures].[Weight]);
    This = [Measures].[Weight];
    end scope;
    the query performance does not improve...
    If I delete this override from the cube's calculation script, I get performance that is acceptable to me.
    We need to keep the business logic and still get acceptable performance.
    What is wrong in the measures, calculations, or query? Any ideas?
    If you need additional information, let me know.
    Many thanks, Dmitry.

    Hi Makarov,
    According to your description, you get a performance issue when using the SCOPE() statement. Right?
    In Analysis Services, a SCOPE() statement redefines that part of your cube space, while a calculated member is much more isolated. In this scenario, I suggest you create a measure directly, because a calculated measure only returns values where the amounts are recorded directly on the parent, without including the children's values.
    Reference:
    Analysis Services Query Performance Top 10 Best Practices
    Top 3 Simplest Ways To Improve Your MDX Query
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Issues with Using Data Dashboard and Using Web Service

    After running a system for many months it has suddenly decided to not work at all.  I am using Data Dashboard to read variables on a host.  I'll outline the system here to get a baseline:
    I'm running LV 2012 SP1 f5. 
    I collect data from one Ethernet port and interface it to another Ethernet port to form a proxy.
    I use a fixed IP address from my DSL along with port forwarding/port triggering in my router for data observation and control.
    So, what have I done so far?
    1.  Repaired my original LV 2012 SP1 and tested system.
    2.  Upgraded to LV 2012 SP1 f5 and tested system.
    3.  Ran services.msc and checked all the NI services that should be operational.  Started and stopped several of them and rebooted the computer.
    4.  Deployed all my SVs (multiple times)
    5.  Disconnected the second Ethernet interface because sometimes the variables would deploy to this IP address.
    6.  Turned off my wireless because sometimes the variables would deploy to this IP address.
    7.  Checked "Web Server Configuration" "Application Web Server" for correct port number.
    8.  Checked "Web Services Management" and under "Deployed Web Services" there is one category:  SV is marked as running.  NOTE!!  WebService is NOT listed and this could be a problem
    I can use the following to retrieve data: localhost:<port number>/sv/variable.  But when I substitute my DSL IP number, I can no longer get any data.
    So, what am I missing?  
    Thanks,
    Paul.

    Siana,
    Thanks for your response.
    I haven't touched the NI code in months.  It is running on an XP laptop and I have the Windows updates turned off.  McAfee is running on it and of course that updates regularly.  This laptop is a dedicated machine that I only use for this function, nothing else.  I haven't added or removed any programs for many months.
    There have been no changes to the DSL from me.  AT&T is constantly trying to get us to upgrade to U-Verse but we have resisted.  All DSL modem and router hardware is the same.  
    Confusing, huh?

  • Currency Translation Type in queries using currency conversion

    I have a question on the Currency Translation Type (EUR_VAR) that is used in all of the queries using currency conversion on the fly (the currency is maintained automatically, not in a table).
    The user wants to use 2 different exchange rates in a single query. The months in 2010 (actuals) are to be converted using exchange-rate type EURO, and the months in 2011 (planned) using exchange-rate type USD, each storing different rates with different starting dates. This is however not possible, because the Currency Translation Type is set up (1) to work with the query key date rather than a characteristic in the data, and (2) apparently these currency translation types only work with time characteristics like 0FISCYEAR.
    My idea is therefore to:
    1. Create a new variable (similar to EXC_RATE) to prompt for a 2nd exchange-rate type when the query starts
    2. Create a new Currency Translation Type (next to EUR_VAR), referencing the new variable or sticking to a fixed exchange-rate type, fixed to EUR
    Is it possible to create idea (2)?
    Many Thanks in Advance.

    The best way would be to create two currency conversion types, one converting to EUR and the other to USD, and put them in the properties of the corresponding key figures in the query. For the time reference in RSCUR, a variable time reference can be used individually for the two conversion types.

  • Warehouse partitioning - performance of queries across multiple partitions?

    Hi,
    We are using Oracle 11.2.0.3 and have a large central fact table with several surrogate IDs which have bitmap indexes on them and FKs pointing at dimension tables, plus several measures:
    (PRODUCT_ID,
    CUSTOMER_ID,
    DAY_ID,
    TRANS_TYPE_ID,
    REGION_ID,
    QTY,
    VALUE)
    We have 2 distinct sets of queries users look to run for the most part: ones accessing all transactions for products regardless of the time those transactions happened (i.e. non-financial queries) - about 70%, and
    queries determining what happened in a particular week - 20% of queries.
    The table will eventually have approx. 4bn rows.
    We are considering adding an extra DATE column and range-partitioning on it to allow us to drop old partitions every year - however, this column wouldn't be joined to any other table.
    We are then considering sub-partitioning by hash of PRODUCT_ID, which is the surrogate key for the product dimension.
    Thoughts on performance?
    Queries by their nature would hit several sub-partitions.
    Thoughts on the query performance of queries which access several sub-partitions/partitions versus queries running against a single table.
    Any other thoughts on partitioning strategy in our situation are much appreciated.
    Thanks

    >
    Thoughts on the query performance of queries which access several sub-partitions/partitions versus queries running against a single table.
    >
    Queries that access multiple partitions can improve performance in two use cases: 1) only a subset of the entire table is needed, and 2) the access is done in parallel.
    Even if 9 of 10 partitions are needed that can still be better than scanning a single table containing all of the data. And when there is a logical partitioning key (transaction date) that matches typical query predicate conditions then you can get guaranteed benefits by limiting a query to only 1 (or a small number) partition when an index on a single table might not get used at all.
    Conversely, if all table data is needed (perhaps there is no good partition key) and parallel option is not available then I wouldn't expect any performance benefit for either single table or partitioning.
    You don't mention if you have licensed the parallel option.
    >
    Any other thoughts on partitioning strategy in our situation are much appreciated.
    >
    You provide some confusing information. On the one hand you say that 70% of your queries are
    >
    ones accessing all transactions for products regardless of the time those transactions happened
    >
    But then you add that you are
    >
    considering adding an extra DATE column and range-partitioning on it to allow us to drop old partitions every year
    >
    How can you drop old partitions every year if 70% of the queries need product data 'regardless of the time those transactions happened'?
    What is the actual 'datetime' requirement? And what is your definition of 'a particular week'? Does a week cross Month and Year boundaries? Does the requirement include MONTHLY, QUARTERLY or ANNUAL reporting?
    Those 'boundary' requirements (and the online/offline need) are critical inputs to the best partitioning strategy. A MONTHLY partitioning strategy means that for some weeks two partitions are needed. A weekly partitioning strategy means that for some months two partitions are needed. Which queries are run more frequently weekly or monthly?
    Why did you mention sub-partitioning? What benefit do you expect or what problem are you trying to address? And why hash? Hash partitioning guarantees that ALL partitions will be needed for predicate-based queries since Oracle can't prune partitions when it evaluates execution plans.
    The biggest performance benefit of partitioning is when the partition keys used have a high correspondence with the filter predicates used in the queries that you run.
    Contrarily the biggest management benefit of partitioning is when you can use interval partitioning to automate the creation of new partitions (and subpartitions if used) based solely on the data.
    The other big consideration for partitioning, for both performance and management, is the use of global versus local indexes. With global indexes (e.g. a global primary key) you can't just drop a partition in isolation; the global primary key needs to be maintained by deleting the corresponding index entries.
    On the other hand if your partition key includes the primary key column(s) then you can use a local index for the primary key. Then partition maintenance (drop, exchange) is very efficient.

  • Performance of queries against large AD CS databases - how to optimize?

    I am asking experts with experience of AD CS databases with 100,000s or millions of certificates to confirm or correct my "theories".
    I am aware of these two articles that state performance is not an issue for millions of certificates:
    Windows CA Performance Numbers and
    Evaluating CA Capacity, Performance, and Scalability
    However, here performance is mainly evaluated in terms of database size and request/certificate throughput. I am more interested in the performance of queries, as I have seen that it might take minutes to build up views for databases with 100,000s of certificates - no matter whether you use certutil -view, certsrv.msc, or access via CCertView.
    Could this just be due to an "unfortunate" combination of non-indexed fields? Any advice on which queries to avoid?
    Or is the solution just as simple as to throw more memory or CPU or both at the problem?
    In case it hinges on an unfortunate choice of fields and you absolutely have to run such a query, my guess is that you have to use a custom policy(*) module (FIM or third-party) to dump the certificates to a SQL database and do your queries there.
    Am I right or did I miss something? Any input is highly appreciated!
    Elke
    PS / edit: That should have been 'exit module' - I don't know why I wrote policy module. Thanks to Vadims for pointing it out.

    > I meant 'exit module'
    The exit module is the correct one. However, it is notified by the CA only when a new request is issued/processed. This means that you can use an exit module to copy certificate information to SQL only for new requests; for existing requests you are still stuck with a database dump.
    > but I should probably check how I dealt with the row handles
    I don't know how COM handles work in VBS, but in PowerShell (and other CLR languages) COM handles may not be released properly by the garbage collector; therefore, when a COM object is no longer necessary, you should set its reference count to zero. In the CLR this is done by calling the Marshal.ReleaseComObject method, which marks the COM object as safe for the garbage collector. For example, the typical row/column iterator scheme is:
    $Row = $ICertView.OpenView()
    # do row iteration
    while ($Row.Next() -ne -1) {
        # acquire IEnumCERTVIEWCOLUMN COM object
        $Column = $Row.EnumCertViewColumn()
        # do column iteration for the current row
        while ($Column.Next() -ne -1) {
            # collect column information and other stuff
            # do other stuff if necessary
        }
        # release the IEnumCERTVIEWCOLUMN object. This is the last line in the outer loop.
        [Runtime.InteropServices.Marshal]::ReleaseComObject($Column)
    }
    # release the IEnumCERTVIEWROW COM object as well
    [Runtime.InteropServices.Marshal]::ReleaseComObject($Row)
    My weblog: en-us.sysadmins.lv
    PowerShell PKI Module: pspki.codeplex.com
    PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
    Check out new: SSL Certificate Verifier
    Check out new: PowerShell FCIV tool.

  • Query performance problem when using hierarchies

    Hello All,
    I've a query which is built on a hierarchy with the following structure, e.g.:
    A
    |_ B
    |  |_ L1
    |
    |_ C
       |_ L2
    When I restrict the query to hierarchy levels B and C simultaneously, the query executes fine. But when I directly restrict the query to hierarchy level A, the query runs endlessly.
    Could someone please help me out as to why this is the case?
    I don't have aggregates built on any of the hierarchy level.
    Best Regards,
    Sanjay

    Hi Roberto,
    thanks for your response. However, the problem is not solved even after applying the suggestions of note 738098 :(. These queries used to execute fine until yesterday and there have been no major additions to the hierarchy. Please let me know if there is anything else that can be done. We are planning to bounce the system and see if there are any performance improvements.
    PS: I've awarded points to you nevertheless, as the option suggested in the note seems useful and should be tried in case of this kind of performance issue.
    Best Regards,
    Sanjay

  • Performance problem when using CAPS LOCK piano input

    Dear reader,
    I'm very new to Logic and am running into a performance problem when using the CAPS LOCK piano keyboard for input of an instrument: when I'm not recording, everything is fine and the program instantly responds to my keystrokes, but as soon as I go into record mode there is sometimes a delay in the response time (so I press a key and it takes up to half a second longer before the note is actually played).
    Is there anything I can do about this to improve performance (for example, turning off certain features of the application), or should I never use the CAPS LOCK keyboard anyway and go straight for an external MIDI keyboard?
    Thanks and regards,
    Tim Metz

    Does your project have audio tracks, and just how heavy is it - how many tracks? Also, what kind of Software Instrument do you use?

  • When to use PERFORM with CHANGING, USING, and TABLES

    Hi,
    What is the use of PERFORM with CHANGING and USING?
    In what situations do we need to use it?

    Hi Jim,
    PERFORM: the PERFORM command is used to call an ABAP subroutine (form) from any program, subject to the normal ABAP runtime authorization checking. Such calls to subroutines can be used for carrying out calculations, for obtaining data from the database that is needed at display or print time, for formatting data, and so on.
    PERFORM commands, like all control commands, are executed when a document is formatted for display or printing. Communication between a subroutine that you call and the document is by way of symbols whose values are set in the subroutine.
    ABAP Subroutine example is given in this link
    http://help.sap.com/saphelp_nw70/helpdata/en/9f/db979d35c111d1829f0000e829fbfe/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/d1/803279454211d189710000e8322d00/content.htm
    Passing the values in an internal table has many scenarios, like pass by value, pass by reference, and pass by value and reference. Such scenarios are well explained in the following link:
    http://help.sap.com/saphelp_nw70/helpdata/en/9f/db979035c111d1829f0000e829fbfe/content.htm
    Revert for further clarification,
    Thanks and Regards
    Srikanth.P

  • Send many reports using a single iBot

    hi all,
    I have a requirement where I have to send many reports using a single iBot instance.
    How can this be achieved?
    Please help, guys.
    regards
    Mahi

    It's not possible, I guess, because only one report can be attached to an iBot. As workarounds, you can add the reports to a dashboard and send that dashboard page, or you can chain the iBots and put all the reports in those iBots; in this case you have to kick off only one iBot, but multiple emails will be fired.
    Regards,
    Sandeep

  • Can we perform a join operation using SQLCall with a DatabaseQuery?

    Hi,
    I am working on a TopLink SQLCall query. I am performing a join operation, but it is giving an error.
    So please, can anyone tell me whether we can perform a join operation using SQLCall with a DatabaseQuery?
    Thanking You.

    You can use joins with SQLCall queries in TopLink, provided your SQL returns all of the required fields.
    What is the query you are executing and what error are you getting?
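    For illustration, attaching a hand-written join to a read query looks roughly like this (sketched against the EclipseLink descendant of the TopLink API; the EMPLOYEE/ADDRESS tables, their mapping, and the session acquisition are assumptions):
    import java.util.List;
    import org.eclipse.persistence.queries.ReadAllQuery;
    import org.eclipse.persistence.queries.SQLCall;
    import org.eclipse.persistence.sessions.Session;

    public class SqlCallJoinExample {
        // Runs a hand-written join through a SQLCall. The SELECT must return every
        // column mapped on the target descriptor, or TopLink cannot build the objects.
        public static List runJoin(Session session, Class entityClass, String city) {
            ReadAllQuery query = new ReadAllQuery(entityClass);
            query.setCall(new SQLCall(
                "SELECT e.* FROM EMPLOYEE e JOIN ADDRESS a ON a.ADDRESS_ID = e.ADDRESS_ID "
                + "WHERE a.CITY = #city"));
            query.addArgument("city"); // binds the #city token in the SQL string
            return (List) session.executeQuery(query, city);
        }
    }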

  • For my game's better performance, should I use Starling?

    I heard that using Starling gives better performance than just using Flash Pro native (GPU mode?) when playing Flash games on smartphones.
    However, according to this article, there is not much difference between GPU mode and Starling, although it was recorded in late 2012:
    http://esdot.ca/site/2012/runnermark-scores-july-18-2012
    My game is a tile-matching game that uses vectors and many different tile pictures; up to 64 tiles can be present at the same time.
    I don't know how much more performance Starling would provide, but if it would give more, I don't know whether it's worth the time and effort to learn how to use Starling and change my current code to use it.

    This is a test between multiple frameworks that all use Stage3D, which is basically the means to get any hardware benefit from the GPU.
    These frameworks do nothing else than help streamline your game development and do some optimizing (object pooling etc.) under the hood.
    The basic concept is to have sprite sheets (for 2D), also called "texture atlases", instead of the "old" method of having separate MovieClips/Sprites.
    If you don't use this method in your game, then you will indeed see no benefit from Starling or any other Stage3D framework.
    So if your game is coded "like in the old days", you would have to rewrite some parts of it and convert all MovieClips to sprite sheets to benefit from the GPU.
    The real performance comparison reads like this:
    CopyPixels (the pre-Stage3D method) had a performance gain of 500%, sprite sheets (Stage3D) 4000%, compared to the "old way".
    It all depends on whether you're unhappy with your game's current performance on current mobile devices or not.
