Query performance on RAC is a lot slower than single instance

I simply followed the steps provided by Oracle to install a two-node RAC database.
The performance of inserts (Java, thin OJDBC) is pretty much the same as a single instance on NFS.
However, SELECT query performance is very slow compared to a single instance.
I have tried different methods for the storage configuration (ASM with raw devices, OCFS2), but the performance is still slow.
When I shut down one instance, leaving only one instance up, query performance is very fast (as fast as a single instance).
I am using RHEL 5 64-bit (16 GB of physical memory) and Oracle 11.1.0.6 with patch set 11.1.0.7.
Could someone help me debug this problem?
Thanks,
Chau
Edited by: user638637 on Aug 6, 2009 8:31 AM

Top 5 timed foreground events:
- DB CPU: time 943 s, %DB time 47.5%
- cursor: pin S wait on X: waits 13,940, time 321 s, avg wait 23 ms, %DB time 16.15%
- direct path read: waits 95,436, time 288 s, avg wait 3 ms, %DB time 14.51%
- IPC send completion sync: waits 546,712, time 149 s, avg wait 0, %DB time 7.49%
- gc cr multi block request: waits 7,574, time 78 s, avg wait 10 ms, %DB time 4.0%
Another thing I see is that the "avg global cache cr block flush time (ms)" is 37.6 ms.

The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
You should check the SQL statements from the report and tune them.
- Check the execution plan.
- If no index is being used, consider whether an index would help.
SQL> set autot trace explain
SQL> sql statement;
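For example, with a hypothetical ORDERS table (table and column names are made up purely for illustration), the plan output shows whether the predicate uses an index or falls back to a full table scan:
SQL> set autot trace explain
SQL> SELECT order_id, status FROM orders WHERE customer_id = 1001;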
cursor: pin S wait on X.
A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
Use bind variables and avoid dynamic SQL.
http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
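A minimal SQL*Plus sketch of the difference (the ORDERS table and its column are hypothetical): a hard-coded literal creates a separate cursor for every distinct value, driving repeated hard parses and mutex contention, while a bind variable lets all executions share one cursor:
SQL> -- literal value: a new cursor for each distinct customer_id
SQL> SELECT COUNT(*) FROM orders WHERE customer_id = 1001;
SQL> -- bind variable: one shared cursor, parsed once
SQL> VARIABLE cust_id NUMBER
SQL> EXEC :cust_id := 1001
SQL> SELECT COUNT(*) FROM orders WHERE customer_id = :cust_id;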
Check the MEMORY_TARGET initialization parameter.
By the way, your "DB CPU" is high (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
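On the RAC side of the report, here is a rough sketch for putting a number on the global cache transfer cost, plus a quick look at the MEMORY_TARGET setting mentioned above (standard 11g views; the receive-time statistic is recorded in centiseconds, hence the *10 to get milliseconds):
SQL> SELECT a.inst_id,
       b.value / NULLIF(a.value, 0) * 10 AS avg_gc_cr_receive_ms
FROM gv$sysstat a, gv$sysstat b
WHERE a.inst_id = b.inst_id
AND a.name = 'gc cr blocks received'
AND b.name = 'gc cr block receive time';
SQL> SHOW PARAMETER memory_target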
Good Luck

Similar Messages

  • Broadband a lot slower than it used to be.

    Hi,
    Looking for some help and/or advice please. My broadband had been running fine for months (tbh lost track, could be a year or two), maxing out at about 750 kBps (roughly 6 Mbps) when downloading. I got the letter in from BT a couple of months ago suggesting they were going to improve the service countrywide and I should get some kind of benefit in speeds. Not too long after that, my max download speed dropped by 75% to around 1.5 Mbps. I found this forum and read about many others having similar problems, so I figured it was a side effect of the work being done on the lines and just to be patient. After a few weeks my speed went back up to around 3 Mbps, which I hoped was a step in the right direction. It remained this way for a few weeks, then suddenly dropped back down to 1.5 Mbps a few days ago. Unfortunately it has remained this way since then.
    I have run the check on the BT page where you input your number and they tell you what kind of connection you should expect; it came back as around 4.5 Mbps.
    I ran the speedtest on the BT site and got the following results :
    1. Best Effort Test: provides background information.
    Download Speed: 1578 Kbps (test gauge range 0-2000 Kbps)
    Max Achievable Speed
    Download speed achieved during the test was 1578 Kbps.
    For your connection, the acceptable range of speeds is 800-2000 Kbps.
    Additional Information:
    Your DSL Connection Rate: 1776 Kbps (downstream), 445 Kbps (upstream)
    IP Profile for your line is 1500 Kbps.
    The throughput of Best Efforts (BE) classes achieved during the test is 20.82:20.1:59.08 (SBE:NBE:PBE).
    These figures represent the ratio while simultaneously passing Sub BE, Normal BE and Priority BE marked traffic.
    The results of this test will vary depending on the way your ISP has decided to use these traffic classes.
    2. Upstream Test: provides background information.
    Upload Speed: 364 Kbps (test gauge range 0-445 Kbps)
    Max Achievable Speed
    Upload speed achieved during the test was 364 Kbps.
    Additional Information:
    Upstream Rate IP profile on your line is 445 Kbps.
    We were unable to identify any performance problem with your service at this time.
    Which obviously agrees with what I am getting.
    I don't have a BT hub in use at the mo, so here are the equivalent (I hope) details from my modem/router ...ehm... thingy (technical jargon):
    Modem Status
    Connection Status: Connected
    US Rate (Kbps): 445
    DS Rate (Kbps): 1776
    US Margin: 22
    DS Margin: 29
    Trained Modulation: ADSL2Plus
    LOS Errors: 0
    DS Line Attenuation: 30
    US Line Attenuation: 19
    Peak Cell Rate: 1049 cells per sec
    CRC Rx Fast: 0
    CRC Tx Fast: 0
    CRC Rx Interleaved: 8
    CRC Tx Interleaved: 0
    Path Mode: Fast Path
    There have been no major changes that I can think of recently, and when these readings were taken my modem had been up and connected for over 90 hours.
    Would someone be able to explain, advise on, and possibly help me sort out this problem please?
    Thanks if you managed to read this far

    @robo878
    Thanks very much for the info and help robo, not sure I fully understand what it means, but certainly a lot more than I did before I posted. TY. I will have a look into getting it switched to see what difference that makes.
    @imjolly (great name, think my fav char was the 'polis' with the flying goggles, assuming you mean the rev i.m.jolly)
    Thank you too for your reply. I think you may well be right about the letter; I just remember it being about a possible 100% increase or such, which would make sense going from a possible 8 to 20 Mbps. At the moment I am just connected via an extension cable to the standard socket with a splitter for my phone and my modem. I have found the video on the BT site re the test socket, is that worth giving a go? My phone is a cordless one unfortunately, so it may well not be ideal/accurate, but I am getting a bit of 'fuzz' and a couple of slight 'clicks'. I don't know anything about line noise and what it should be, so sorry I can't tell you more.

  • 3.x query performance on upgraded BI 7.0 - worse than before upgrade?

    Dear Sirs,
    We upgraded a BI system this weekend from 3.x to 7.0. It was a purely technical upgrade, so no migration to 7.0 queries yet.
    What we have seen is that queries with a "loose"/large selection (e.g. all plants, all months) have worse performance than before the upgrade.
    One example: a query went from 26 seconds to 60 seconds to run.
    A small selection (e.g. one specific plant and date) has the same run time.
    Has anyone had a similar experience?
    Is BI 7.0 optimized for 7.0 queries, or are there any performance parameters I could look at?
    Best regards,
    Jørgen

    Check this thread:
    Performance problems on NW04S/BI 7.0 after the upgrade
    Additional:
    [Improving Query Performance by Effective and Efficient Maintenance of Aggregates|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/906de68a-0a28-2c10-398b-ef0e11ef2022]
    BI Performance Improvements in 7.0
    Hope this helps.
    Regards
    Andreas

  • iPad Air Wi-Fi sync a lot slower than iPhone 5s?

    I just got my new iPhone 5s yesterday and found out that its Wi-Fi syncing to iTunes is a lot faster than the iPad Air I bought a few days ago. Actually, iPad Air Wi-Fi syncing is so much slower than the iPhone 5s, almost half the speed, just as slow as Wi-Fi syncing with my iPhone 4. Is this normal?
    I've also tested a friend's iPad Air and found that his new iPad is as slow as mine. Anyone know WHY? Are there any settings we're missing?
    I have the latest AirPort Extreme as wifi router.

    It's not much help but here is another thread with similar problems.
    https://discussions.apple.com/message/23986417#23986417

  • Query performance is slow in RAC

    Hi,
    I am analyzing the purpose of Oracle RAC and how it would fit into and be useful for our product. So I have set up a two-node 10g RAC in our lab and am doing various tests with it.
    Test 1 : Fail-over:
    ~~~~~~~~~~~
    First I started with failover testing and did two types of tests: "connect-time" failover and "TAF".
    Here TAF has a limitation: it does not handle DML transactions.
    Test 2 : Performance:
    ~~~~~~~~~~~~~~
    Second, I did performance testing. I used 10,000 records for insert, update, read and delete operations against a single instance and against two nodes. There is no performance difference between single and two nodes.
    But I assumed RAC would provide higher performance than single-instance Oracle.
    So I am confused about whether we should choose Oracle RAC for our project.
    DBAs,
    Please give me your answers to the following questions; it will be a great help in coming to a conclusion:
    1. What is the main purpose of RAC (because, as far as I can see, failover is only partially supported and there is no difference in query-processing performance)?
    2. What kind of business environment does RAC fit best?
    3. What are the unique benefits of RAC that are not available in single-instance Oracle?
    Thanks
    Edited by: Anandhan on Aug 7, 2009 1:40 AM

    Hi !
    Well, RAC ensures high availability. Conditions apply, though!
    For the database, create more than one service and have applications connect to the database using those services.
    Access to a RAC database is service driven. So, if planned thoughtfully, load on the database can be distributed physically using the services created for it.
    So if you have a single database serving more than one application (of any type, i.e. OLTP/warehouse etc.), connect to the database using different services so that the init parameters are set for the purpose of the connection.
    NOTE: each database instance running on a node can have a different init<sid>.ora to ensure optimum performance for its designated purpose.
    RAC uses Cache Fusion (via the Global Cache Service) to reduce I/O on a running production server by transferring buffers from the global cache between nodes when required, thus reducing physical reads. This is its contribution on the performance front.
    For any database that requires access with different init.ora settings to the same physical data, RAC is the best way!
    For high availability, use a TAF-type service.
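    As a rough way to verify that the services really are spreading the load, you can look at how user sessions are distributed per instance and service (a generic query against the standard GV$ views, not specific to any particular setup):
    SQL> SELECT inst_id, service_name, COUNT(*) AS sessions
    FROM gv$session
    WHERE username IS NOT NULL
    GROUP BY inst_id, service_name
    ORDER BY inst_id, service_name;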

  • Urgent: WAD performance is much slower than query performance

    Hi,
    If I execute the report (using a query with variables), which contains 850,000 records, in WAD, it takes more than 900 seconds and says "Connection timed out" at the end.
    If I execute the same query in Query Designer using a web browser, it takes 400 seconds to show all the data in hierarchy or tabular view.
    I've done tuning using RSRT on query read mode, persistence mode, etc.
      Can you please help me?
    THanks in advance. Points will be given...
    Reg,
      Varun

    Varun,
    Did you ever solve the performance issue with the WAD report? We are having the same issue of WAD performance being a lot slower than executing it just as a query.
    Thanks

  • Why is Thunderbolt so much slower than USB3?

    I'm considering two different drives for Time Machine purposes. Both are LaCie. Either of these:
    - Two Porsche 9233 drives, 4 TB each
    OR
    - A 2Big Thunderbolt drive, 8 TB, which I would configure as RAID 1 (a mirrored 4 TB volume)
    My question is this: I've viewed both of these product pages via the Apple Store, and I noticed that LaCie's information shows the Thunderbolt drive as a lot slower than the USB drives. Meaning: they say that the 2Big Thunderbolt drive maxes out at like 427 MB/s, whereas the Porsche USB drives max out at 5 GB/s. Why is this? Isn't Thunderbolt supposed to be a lot faster than USB (any iteration)?

    Not an easy question, short of a whole lot more detail on the construction of those two devices.   You're likely going to need to look at the details of the drives and probably at some actual data.   You're really looking for some real benchmark data that you can compare, in other words.    Particularly which (likely Seagate) drives are used in those (IIRC, Seagate bought LaCie a while back), and what the specs are.
    The hard disk drives themselves are a central factor, where the drive transfer rate is a key metric for big transfers (and that can be based on drive RPM as much as anything, faster drives can stream more data, but they tend to need more power and run hotter), and access (seek) time for lots of smaller transfers (faster seeks mean faster access, so good for lots of small files scattered around).  Finding the details of the drives can be interesting, though.  I've seen lots of cheaper disks that spin very slowly, which means that they can have nice-looking transfer times out of any cache, but then... you... wait... for... the... disk... to... spin.
    The device bus interfaces can also vary (wildly) in quality.   I've seen some decent ones, and I've seen some USB adapters that were absolute garbage.   Some devices have decent quantities of cache, too.  Others have dinky caches, and end up doing synchronous transfers to hard disks, and that's glacial compared with memory speeds.
    One of your example configurations also features RAID 1 mirroring, which means that each write is hitting both disks. The writes have to pass through a controller that can do RAID 1 mirroring, that can write the I/O requests to both drives, and that can read the data back from (if it's clever) whichever of the two drives is best positioned in relation to the sectors you're after. If it's dumb, it won't account for the head positions and drive rotation and sector target. Hopefully the controller is smart enough to correctly deal with a disk failure; I've met a few RAID controllers that weren't as effective when disks had failed and the array was running in a degraded mode. In short, RAID 1 mirroring is a reliability-targeted configuration and not a performance configuration. It'll be slower. Lose a disk in RAID 1 mirroring, and you still have a second disk with a second copy. If the controller works right.
    If you want I/O performance without reliability, then configure for RAID 0 striping.   With that configuration, you're reading data from both disks.  But lose a disk in a RAID 0 striping configuration and you're dealing with data recovery, at best.  If the failure is catastrophic, you've lost half your data.
    But nobody's going to make this choice for you, and I'd be skeptical of any specs outside of actual benchmarks, and preferably benchmarks approximating your use.  Reliability is another factor, and that's largely down to reputation in the market; how well the vendor supports the devices, should something go wrong.  One of the few ways to sort-of compare that beyond the reviews is the relative length of the warranty, and what the warranty covers; vendors generally try to design and build their devices to last at least the length of the warranty.
    Yeah. Lots of factors to consider. No good answers, either. Given it's a backup disk, I'd personally tend to favor reliability and warranty over brute speed.
    Full disclosure: no experience with either of these two devices.  I am working with Promise Pegasus Thunderbolt disk arrays configured RAID 6 on various Mac Mini configurations, and those support four parallel HD DTV video streams with no effort.  The Pegasus boxes are plenty fast.  They're also much more expensive than what you're looking at.

  • Possible answer to LR 2.6 & 2.7 Seeming to run slower than earlier versions

    I just downloaded and upgraded from LR 2.6 to 2.7 on a Windows Vista 64-bit machine. Version 2.6 seemed to run a lot slower than 2.5. When I read the LR 2.7 readme file on the download page, I noticed one of the known issues is that version 2.7 runs slower when there are lots of files in the recycle bin. After I upgraded, I emptied the recycle bin (of almost 3000 files), and yes indeed performance did improve - dramatically. This also improved performance on LR 3 Beta 2.

    Thanks for the suggestions. I have tried each idea in the order suggested. Unfortunately I have not yet solved the problem. In some circumstances, with certain paper profiles and no other color management in place, I get faulty prints that have large areas of solid color where there are large amounts of color saturation in the original image. The faulty image below is printed to plain letter-sized paper at Normal Quality. One clarification I wish to make is that my HP printer driver was not specifically a 64-bit driver, but rather a 32-bit/64-bit driver compatible with Win 7.
    I may try completely uninstalling LR before reinstalling, instead of a "repair-install" but I am still open to other suggestions.
    Camron

  • Flash player 10.1 slower than 10

    At least, that's how it is on my secondary computers. On my Dell Inspiron B120, flash 10.1 latest is like a train wreck when it comes to playing Youtube videos in full screen. Disabling hardware acceleration helped a little, but this shouldn't be happening as I uninstalled 10.1 and installed an archived version of Flash 10 with much better playback AND with h/w acceleration enabled. The laptop uses XP SP3 and Firefox.
    Simply put, Flash 10.1 feels a lot slower than 10 on my Dell laptop (1.4 GHz celeron-M, intel 915g video, 512 mb ram)
    And there's another odd but minor problem I seem to be having with is flash animations on my main computer and any other computer/laptop I have using Flash 10.1. I get this alternating 'lag' every few seconds; it's like the computer is dropping frames because the CPU can't keep up. I could completely understand this, but this is happening on practically any flash animation I throw at the computer. I just tested a flash movie (SWF format) on my main computer (2.8 GHz AMD Athlon x2 240; 2 gb ram, Windows 7 x64, Firefox) and I get this irritating choppy lag here and there. Again, with the previous Flash version 10, I did NOT have this problem AT ALL. Everything was silky smooth; the only benefit I'm getting from Flash 10.1 is x264 hardware acceleration for my ATI Radeon 4200 HD. Before anyone asks, all of my computers were using the latest flash version AND video drivers prior to these tests.
    I am VERY frustrated with these problems as I've posted a similar thread twice now in the past. No one seems to be answering or helping me with this; Those that do reply simply claim that they're having the same problem.
    Adobe, please look into this problem. Eventually Youtube and several other sites will force me to upgrade my laptop to Flash 10.1.

    Has Adobe fixed the problems with version 10,1,82,76 of flash player ?
    I compared it and found nothing but problems.
    I then uninstalled 10,1,82,76 and rolled back to 10,0,12,36, which works fine on Firefox and Opera, but I can't install 10,0,12,36 on Explorer or Chrome:
    it keeps telling me there is a new version, even when I try to install it from the local drive with an archived version.
    With all the complaints about 10,1,82,76 and the lack of response from Adobe on any fixes, I will not be using Adobe auto updates until I let some
    other poor guinea pigs suffer all the bugs and problems first.
    You would expect a large company to have better development and testing before imposing buggy upgrades on us.

  • Switchover between primary RAC and standby single instance

    Hello All,
    I am using Oracle 11gR2.
    I am trying to do a switchover between a primary database (2-node RAC) and a physical standby (single instance).
    When my primary is a single instance, I follow the steps below:
    On the standby
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    On the primary database:
    alter database commit to switchover to standby with session shutdown;
    shutdown immediate;
    startup nomount;
    alter database mount standby database;
    On the standby again:
    alter database commit to switchover to primary WITH SESSION SHUTDOWN;
    On the new standby:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
    Now, since my primary is RAC, when I try to switch over I get the error below:
    SQL> alter database commit to switchover to standby with session shutdown;
    alter database commit to switchover to standby with session shutdown
    ERROR at line 1:
    ORA-01105: mount is incompatible with mounts by other instances
    Also, when I want to apply the remaining step (below), should I do it on each instance separately, or is there any way to do it using the srvctl command?
    alter database mount standby database;
    Regards,

    Hi,
    Since you are using a 2-node RAC as primary, for the switchover operation you need to shut down one database instance (suppose instance 2).
    Suppose your node 1 hostname is dcpdb1, your node 2 hostname is dcpdb2, and the standby hostname is drpdb1.
    Follow these steps for the switchover.
    How to Switchover from Primary to Standby Database?
    Process:
    On the primary server, check the latest archived redo log and force a log switch.
    *########### Login dcpdb1 as Oracle user #########*
    SQL> SELECT sequence#, first_time, next_time
    FROM v$archived_log
    ORDER BY next_time;
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    Check the new archived redo log has arrived at the standby server and been applied.
    *########### Login drpdb1 as Oracle user #########*
    SQL> SELECT sequence#, first_time, next_time, applied
    FROM v$archived_log
    ORDER BY next_time ;
    *########### Login dcpdb2 as Oracle user #########*
    SQL> SELECT sequence#, first_time, next_time
    FROM v$archived_log
    ORDER BY next_time ;
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    Check the new archived redo log has arrived at the standby server and been applied.
    *########### Login drpdb1 as Oracle user #########*
    SQL> SELECT sequence#, first_time, next_time, applied
    FROM v$archived_log
    ORDER BY next_time ;
    *########### Login dcpdb1 as Oracle user #########*
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    TO STANDBY
    *########### Login dcpdb2 as Oracle user #########*
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    TO STANDBY
    *########### Login drpdb1 as Oracle user #########*
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    NOT ALLOWED
    *########### Login dcpdb2 as Oracle user #########*
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL>
    *########### Login dcpdb1 as Oracle user #########*
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    Database altered.
    SQL>
    SQL> shutdown immediate
    ORA-01507: database not mounted
    ORACLE instance shut down.
    SQL>
    SQL> startup mount
    ORACLE instance started.
    Total System Global Area 1.5400E+10 bytes
    Fixed Size 2184872 bytes
    Variable Size 7751076184 bytes
    Database Buffers 7616856064 bytes
    Redo Buffers 29409280 bytes
    Database mounted.
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    TO PRIMARY
    SQL>
    *########### Login drpdb1 as Oracle user #########*
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    SESSIONS ACTIVE
    SQL> alter database commit to switchover to primary with session shutdown;
    Database altered.
    SQL> shutdown immediate
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.
    SQL>
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1.5400E+10 bytes
    Fixed Size 2184872 bytes
    Variable Size 7717521752 bytes
    Database Buffers 7650410496 bytes
    Redo Buffers 29409280 bytes
    Database mounted.
    Database opened.
    *########### Login dcpdb1 as Oracle user #########*
    SQL> alter database open read only;
    Database altered.
    SQL> alter database recover managed standby database using current logfile disconnect;
    Database altered.
    *########### Login dcpdb2 as Oracle user #########*
    SQL> startup mount
    ORACLE instance started.
    Total System Global Area 1.5400E+10 bytes
    Fixed Size 2184872 bytes
    Variable Size 7751076184 bytes
    Database Buffers 7616856064 bytes
    Redo Buffers 29409280 bytes
    Database mounted.
    SQL> alter database open read only;
    Database altered.
    SQL> alter database recover managed standby database using current logfile disconnect;
    Database altered.
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    NOT ALLOWED
    SQL>
    *########### Login drpdb1 as Oracle user #########*
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    TO STANDBY
    *########### Login dcpdb1 as Oracle user #########*
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    NOT ALLOWED
    SQL>
    *########### Login dcpdb2 as Oracle user #########*
    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    SWITCHOVER_STATUS
    NOT ALLOWED
    SQL>
    *####################### Finish SwitchOver ########################*
    Check
    *########### Login drpdb1 as Oracle user #########*
    SQL> alter system switch logfile;
    SQL>
    SELECT sequence#, first_time, next_time
    FROM v$archived_log
    ORDER BY sequence#;
    SQL> archive log list
    *########### Login dcpdb1 as Oracle user #########*
    SQL>
    SELECT sequence#, first_time, next_time, applied
    FROM v$archived_log
    ORDER BY sequence#;
    SQL> archive log list
    *########### Login dcpdb2 as Oracle user #########*
    SQL>
    SELECT sequence#, first_time, next_time, applied
    FROM v$archived_log
    ORDER BY sequence#;
    SQL> archive log list
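    One extra sanity check worth running on every instance once the switchover completes (not part of the steps above, just a standard verification query):
    SQL> SELECT name, db_unique_name, database_role, open_mode FROM v$database;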
    Thanks
    Solaiman
    Edited by: 876149 on Apr 12, 2013 11:51 AM

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't build aggregates on it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries - for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • Query performance slow

    Hi Experts,
    Please clarify my doubts.
    1. How can we know which particular query is performing slowly among all of them?
    2. How can we define a cell in BEx?
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
    Thanks in advance

    Hi,
    1. How can we know which particular query is performing slowly among all of them?
       When a query takes a long time to run, you can find out where it is spending that time by collecting statistics:
       select your cube and set the BI statistics check box; after that it will give you all the statistics data for your query -
       DB time (database time), front-end time (query), aggregation time, etc. Based on that, we go for performance measures such as aggregates, compression, indexes, etc.
    2. How can we define a cell in BEx?
       The cell editor is enabled when your BEx query uses two structures. You go for this if you want to create different formulas row by row.
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
        An InfoObject can also be an InfoProvider:
        you can convert an InfoObject into an InfoProvider using "Convert as data target".
    Thanks and Regards,
    Venkat.
    Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM

  • Query performance slow WHY

    It's an 11gR2 database, and the query is performing very slowly:
    SELECT OBJSTATE
    FROM
    SUB_CON_CALL_OFF WHERE SUB_CON_NO = :B2 AND CALL_OFF_SEQ = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      140      0.00       0.00          0          0          0           0
    Execute 798747      8.34      14.01          0          4          0           0
    Fetch   798747     22.22      35.54          0    7987470          0      798747
    total   1597634     30.56      49.56          0    7987474          0      798747
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 51     (recursive depth: 1)
    Rows     Row Source Operation
          5  FILTER  (cr=50 pr=0 pw=0 time=239 us)
          5   NESTED LOOPS  (cr=40 pr=0 pw=0 time=164 us)
          5    NESTED LOOPS  (cr=30 pr=0 pw=0 time=117 us)
          5     TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
          5      INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
          5     TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
          5      INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
          5    INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
          5   INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
          5    FAST DUAL  (cr=0 pr=0 pw=0 time=4 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                              1        0.00          0.00
      gc cr block 2-way                               3        0.00          0.00
      gc current block 2-way                          1        0.00          0.00
      gc cr multi block request                       4        0.00          0.00
    Edited by: 842638 on Feb 2, 2013 5:52 AM
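    For context, output in this layout is what the tkprof utility produces from a SQL trace file. A minimal sketch of how such a trace is typically captured for the current session (file names and the traced statement are placeholders, not from the original post):
    SQL> -- enable tracing with wait events for the current session
    SQL> EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE)
    SQL> -- ... run the slow statement here ...
    SQL> EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE
    SQL> -- then format the trace file on the server: tkprof <tracefile>.trc report.txt sys=no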

    Hi Mark,
    Just have a few basic doubts regarding the query performance below:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      140      0.00       0.00          0          0          0           0
    Execute 798747      8.34      14.01          0          4          0           0
    Fetch   798747     22.22      35.54          0    7987470          0      798747
    total   1597634     30.56      49.56          0    7987474          0      798747
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 51     (recursive depth: 1)
    Rows     Row Source Operation
           5  FILTER  (cr=50 pr=0 pw=0 time=239 us)
           5   NESTED LOOPS  (cr=40 pr=0 pw=0 time=164 us)
           5    NESTED LOOPS  (cr=30 pr=0 pw=0 time=117 us)
           5     TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
           5      INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
           5     TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
           5      INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
           5    INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
           5   INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
           5    FAST DUAL  (cr=0 pr=0 pw=0 time=4 us)
    Elapsed times include waiting on following events:
       Event waited on                             Times   Max. Wait  Total Waited
       ----------------------------------------   Waited  ----------  ------------
       library cache lock                              1        0.00          0.00
       gc cr block 2-way                               3        0.00          0.00
       gc current block 2-way                          1        0.00          0.00
       gc cr multi block request                       4        0.00          0.00
    1] How do you determine that this query performance is "ok"?
    2] What is the actual need of checking the query performance this way?
    3] Is this the TKPROF output?
    4] How do you know that the query was "called" 798747 times? The "execute" shows 0.
    Could you please help me with this?
    Thanks.
    Ranit B.

  • Slow query performance in excel 2007 vs excel 2003

    Hi,
    Some of our clients recently upgraded to BI 7.0 and also upgraded to Excel 2007.
    They are experiencing lots of performance problems when using the BEx Analyzer in Excel 2007.
    Refreshing queries and using 'simple' workbooks is up to 10 times slower than before with Excel 2003.
    Has anyone experienced the same?
    Any tips/tricks to solve that problem?
    With regards,
    Tom.

    Hello all,
    1) Please set the following parameters to X in transaction
        RS_FRONTEND_INIT and check the issue.
    Parameters to be set are
    ANA_USE_SIDGRIDDELTA
    ANA_USE_SIDGRIDMASS
    ANA_SINGLEDPREFRESH
    ANA_CACHE_WORKBOOK
    ANA_USE_OPTIMIZE_STG
    ANA_USE_TABLE
    2) Also refer to the KBA below, which should help to resolve the issue.
       1570478 BW Report in Excel 2007 or Excel 2010 takes much more time than
    3) In the workbook properties please set the flag
         - Use Compression When Saving Workbook
    4) If you are working with big hierarchies, please try to improve
    performance with the following setting directly in the Analysis Grid:
       - Properties of Analysis Grid - Display Hierarchy Icons
       - switch to "+/-"
    Regards,
    Arvind

  • Slow Query Performance During Process Of SSAS Tabular

    As part of my SSAS Tabular process script task in an SSIS package, I read all new rows from the database and insert them into the Tabular database using Process Add. The process works fine, but for the duration of the Process Add, user queries against my Tabular model become very slow.
    Is there a way to prevent the impact of Process Add on user queries? Users need near-real-time queries.
    I am using SQL Server 2012 SP2.
    Thanks

    Hi AL.M,
    According to your description, when you query the Tabular model during a Process Add, performance is slow. Right?
    In Analysis Services, there is no supported way for an MDX/DAX query to ignore the Process Add on the Tabular database; it will always query the database being updated. In this scenario, if you really need good query performance, I suggest you create two Tabular databases.
    One is for end users to get data; the other is used for updates (full process). After the process is done, let the users query the updated database.
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support
