Exadata performance

In our exachk results, there is one finding for shared servers.
Our current production environment has shared_servers set to 1 (shared_servers=1).
Here is what exachk reports:
Benefit / Impact:
As an Oracle kernel design decision, shared servers are intended to perform quick transactions and therefore do not issue serial (non PQ) direct reads. Consequently, shared servers do not perform serial (non PQ) Exadata smart scans.
The impact of verifying that shared servers are not doing serial full table scans is minimal. Modifying the shared server environment to avoid shared server serial full table scans varies by configuration and application behavior, so the impact cannot be estimated here.
Risk:
Shared servers doing serial full table scans in an Exadata environment lead to a performance impact due to the loss of Exadata smart scans.
Action / Repair:
To verify shared servers are not in use, execute the following SQL query as the "oracle" userid:
SQL>  select NAME,value from v$parameter where name='shared_servers';
The expected output is:
NAME            VALUE
shared_servers  0
If the output is not "0", use the following command as the "oracle" userid with properly defined environment variables and check the output for "SHARED" configurations:
$ORACLE_HOME/bin/lsnrctl service
If shared servers are confirmed to be present, check for serial full table scans performed by them. If shared servers performing serial full table scans are found, the shared server environment and application behavior should be modified to favor the normal Oracle foreground processes so that serial direct reads and Exadata smart scans can be used.
Running lsnrctl service on our current production environment shows 'LOCAL SERVER' for all of our database services; the only non-dedicated handler is the D000 dispatcher for the PREMEXDB (XDB) service, as shown in the output below.
How should I proceed here?
Thanks again in advance.

Thank you all for your help.
Here is an output of lsnrctl service:
$ORACLE_HOME/bin/lsnrctl service
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 14-JUL-2014 14:15:24
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:1420 refused:0 state:ready
         LOCAL SERVER
Service "PREME" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREMEXDB" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:0 refused:0 current:0 max:1022 state:ready
         DISPATCHER <machine: prodremedy, pid: 16823>
         (ADDRESS=(PROTOCOL=tcp)(HOST=prodremedy)(PORT=61323))
Service "PREME_ALL_USERS" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_TXT_APP" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_CORP_APP" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_DISCO_APP" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_EAST_APP" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_CRM" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_CRM_WR" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_RPT" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
Service "PREME_WEST_APP" has 1 instance(s).
  Instance "PREME2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:627130 refused:3 state:ready
         LOCAL SERVER
The command completed successfully
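For completeness, one way to cross-check from inside the database is to look at the server type of each session; this is only a minimal sketch, but v$session should report DEDICATED for every user session if nothing is actually connecting through shared servers:
-- Count sessions by server type; SHARED (or NONE, for a shared server session
-- that is momentarily idle) would indicate real shared server connections
select server, count(*)
  from v$session
 group by server;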

Similar Messages

  • OLS on Exadata Performance Issues

    Before people jump on me about why this question is on this forum: the problem is only found on Exadata.
    I have a table with an OLS setting on a column. As the owner of the table, I update the values in this column to the new values as per the OLS label_tag value set in Exadata. The query is:
    update table1 t1
    set t1.column1 =
    (select t2.label_tag from table2 t2
       where t2.column1 = t1.column1)
    /
    If OLS is enabled while doing this operation, the update goes to never-never land. Without OLS enabled, the update completes in a few minutes. There are about 300 million rows.

    Hi,
    there is a known issue with Exadata not being able to offload OLS predicates
    see the thread below and links therein:
    Re: Smart Scans and Oracle Label Security?
    Best regards,
    Nikolay

  • How to do performance tuning in Exadata X4 environment?

    Hi, I am pretty new to Exadata X4, and we had a database (mixed OLTP/load) created and data loaded.
    Now the application team is testing against this database on Exadata.
    However, they claimed the test results were slower than in the current production environment, and they sent out the explain plans, etc.
    I would like advice from the pros here on what specific Exadata tuning techniques I can use to find out why this is happening.
    Thanks a bunch.
    db version is 11.2.0.4

    Hi 9233598 -
    Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata-specific features and best practices as applicable. Reference MOS note: Oracle Exadata Best Practices (Doc ID 757552.1) to help configure Exadata according to the Oracle documented best practices.
    When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on the Exadata than on the non-Exadata environment. You need to determine what specifically is running slower in the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architecture. How is the application connected to the database in the non-Exadata vs. Exadata environment - what are the differences, if any, in the network architecture in between and in the application layer?
    You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start to identify what the difference is in the database execution between the environments. Make sure you have the execution plans of both environments to compare. I recommend using the Real-Time SQL Monitor tool - access it through EM GC/CC from the performance page or using the DBMS_SQLTUNE package. Execute the comparison SQL and use the RSM reports on both environments to verify you have accurate statistics, see where the bottlenecks are, and understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload any specific statement is doing (OLTP vs Batch/DW) you may need to look into tuning to encourage Exadata smart scans and using parallelism to help.
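    For example, something along these lines pulls a Real-Time SQL Monitor report for one statement once you know its SQL_ID (a minimal sketch; '&sql_id' is just a placeholder, and the Tuning Pack license is required):
    -- Generate a text-format SQL Monitor report for the statement being compared
    set long 1000000 longchunksize 1000000 pagesize 0 linesize 250 trimspool on
    select dbms_sqltune.report_sql_monitor(
             sql_id       => '&sql_id',   -- SQL_ID of the slow statement
             type         => 'TEXT',      -- 'HTML' and 'ACTIVE' are also available
             report_level => 'ALL')
      from dual;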
    The SGA and PGA need to be sized appropriately... depending on your environment and workload, and how these were sized previously, your SGA may be too big. SGA sizes usually do not need to be as big on Exadata - this is especially true for DW-type workloads. A DW workload should rarely need an SGA sized over 16GB. Alternatively, PGA sizes may need to be increased. But this all depends on evaluating your environment. Use AWR to understand what's going on... however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not specific to Exadata and can be misleading as to the recommended size. Too large an SGA will discourage direct path reads and thus smart scans - and depending on the statement and the data being returned, it may be better to smart scan than to return a mix of data from the buffer cache and disk.
    You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata - but many indexes may no longer be needed and may need to be removed. For the most part you only need PK/FK indexes and true "OLTP" indexes on Exadata. Others may be slowing you down, because using them bypasses the Exadata storage offloading features.
    You may also want to evaluate whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes: 401749.1, 361323.1 and 1392497.1) and write-back flash caching (see MOS note: 1500257.1).
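    On the database side, the huge pages piece typically ends with something like the following once the OS HugePages pool has been sized per those MOS notes (a minimal sketch, not specific guidance for any particular system):
    -- Require the SGA to be backed entirely by HugePages; the instance will refuse
    -- to start if the HugePages pool is too small, making misconfiguration obvious
    alter system set use_large_pages = 'ONLY' scope=spfile sid='*';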
    I would also recommend installing the Exadata plugins into your EM CC/GC environment. These can help drill into the Exadata storage cells and see how things are performing at that layer. You can also look up and understand the cellcli interface to do this from command line - but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. You also probably want to at least enable and set an IORM objective - matching your workload - even with just one database on the Exadata.
    I don't know your current production environment infrastructure, but I will say that if things are configured correctly OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata performance for OLTP operations just by "out powering" it from a hardware perspective. For DW operations Exadata should outperform any "relatively" hardware comparable non-Exadata system. The Exadata storage offloading features should allow you to run these type of workloads faster - usually significantly so.
    Hope this helps.
    -Kasey

  • Exadata Architecture & I/O monitor tool - ExadataViewer

    Hi Experts,
    During my work, I gained some knowledge and experience with Exadata, and I found that we really need a tool to monitor Exadata performance and workflow, such as smart scan offload processing statistics and the I/O dataflow path in Exadata. So, I developed ExadataViewer in my free time after work.
    ExadataViewer is an Exadata performance monitoring tool. ExadataViewer can help you to understand the Exadata architecture and observe smart scan offload statistics and physical I/O dataflow in a graphical view.
    I hope this little tool is useful to you. You can download it from http://www.exadataviewer.com
    Screen Snapshot:
    http://www.exadataviewer.com/wp-content/uploads/2013/05/exadata_smart_scan_demo.png
    Demo Movie:
    http://www.exadataviewer.com/?dl_name=exadata_demo_movie(www.exadataviewer.com).wmv
    Download ExadataViewer:
    http://www.exadataviewer.com/index.php/category/download/

    Thank you for your time and efforts Qing, but we (Oracle) already provide a very detailed reporting tool in the form of Enterprise Manager 12c. It can be tuned and refined to provide a basically limitless range of statistics; hopefully you'll be able to download it and use it.
    Regards,
    Dan

  • Performance of ETL loads on Exadata

    Oracle prominently advertises the improvements in query performance (10-100x), but does anyone know whether the performance of data loads into the DW (ETL) will also improve?

    Brad_Peek wrote:
    In our case there are many Informatica sessions where the majority of time is spent inside the database. Fortunately, Informatica sessions produce a summary at the bottom of each session log that breaks down where the time was spent.
    We are interested to find out how much improvement Exadata will provide from the following types of Informatica workloads:
    1) Batch inserts into a very large target table.
    -- We have found that inserts into large tables (e.g. 700 million rows plus) with high-cardinality indexes can be quite slow.
    -- It is slowest when the index is either non-partitioned or globally partitioned.
    -- We are hoping that the flash cache will improve the random I/O associated with index maintenance.
    -- In this case, Informatica just happens to be the program issuing the inserts. We have the same issue with batch inserts from any program.
    -- Note that Informatica can do direct-mode inserts, but even for normal inserts it does "array inserts". Just a bit of trivia.
    2) Batch updates to a large table by primary key where the updated key values are widely dispersed over the target table.
    -- Again, this leads to a large amount of small-block physical IO.
    -- We see a large improvement in elapsed time when we can order the updates to match the order of the rows in the table, but that isn't always possible.
    Thanks for sharing! I understand this part; it's helpful to me. Nice writing.

  • 11gR2 Vs Exadata- Flash Cache performance

    The Flash Cache feature is available in both 11gR2 and Exadata. The difference is that in 11gR2 it is an extension of the database buffer cache, whereas in Exadata it is a separate hardware component in the storage server.
    Apart from this, I would like to know how the Exadata Smart Flash Cache is superior to the 11gR2 flash cache. Is it that the 11gR2 flash cache is not "smart" and ends up caching data that is not useful to cache?
    The 11gR2 Oracle documentation does not seem to mention anything about this flash cache feature. Any idea?
    Edited by: museshad on Jun 8, 2011 2:13 PM

    Exadata Smart Flash Cache and Database Smart Flash Cache both take advantage of SSDs.
    On Exadata, we don't need to change anything in the database (Database Machine); we just "create flashcache" on the cells. The Exadata environment does not support the 11gR2 feature (Database Smart Flash Cache), because it already has Exadata Smart Flash Cache.
    http://surachartopun.com/2011/02/how-to-use-exadata-smart-flash-cache.html
    On 11gR2 (non-Exadata) with flash disk available, we can use Database Smart Flash Cache; we have to set two initialization parameters on the database (a sketch follows after the link below):
    db_flash_cache_file
    db_flash_cache_size
    http://surachartopun.com/2010/02/oracle-11gr2-flash-cache.html
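    A minimal sketch of those two parameters on a non-Exadata 11gR2 database, assuming a flash device mounted at /flash (the path and size here are illustrative only):
    -- Point the database at a file on the flash device and size the cache;
    -- both parameters are static, so the instance must be restarted afterwards
    alter system set db_flash_cache_file = '/flash/db_flash_cache.dat' scope=spfile;
    alter system set db_flash_cache_size = 64G scope=spfile;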
    The Flash Cache feature is available in both 11gR2 and Exadata. On Exadata, we use Exadata Smart Flash Cache.
    On non-Exadata, we use Database Smart Flash Cache.
    *** The Exadata environment does not support the 11gR2 feature (Database Smart Flash Cache). ***
    "The difference being in 11gR2 it is an extension of the database buffer, however in Exadata it is a separate hardware component in the storage server."
    When we get data, we read from disk into the database buffer cache; when blocks are evicted from the database buffer cache, they are stored in the flash cache. After that, if someone gets the data again, Oracle reads it from the database buffer cache; if not found, Oracle reads it from the flash cache (and if not found there, Oracle reads it from disk).
    read more:
    http://www.oracle.com/technetwork/articles/systems-hardware-architecture/oracle-db-smart-flash-cache-175588.pdf
    http://www.oracle.com/technetwork/middleware/bi-foundation/exadata-smart-flash-cache-twp-v5-1-128560.pdf
    "Apart from this, I would like to know how the Exadata Smart Flash Cache is superior to the 11gR2 flash cache? Is it that the 11gR2 flash cache is not 'smart' and ends up caching data that is not useful in terms of caching."
    Read the two whitepapers above.

  • Is there a way to create different diskgroups in exadata?

    We need to have some disk groups other than +DATA and +RECO.
    How do we do that? The Exadata version is X3.
    Thanks

    user569151 -
    As 1188454 states, this can be done. I would first ask why you need to create additional disk groups beyond the DATA, RECO and DBFS disk groups created by default. I often see Exadata users question the default disk groups and want to add more or change the disk groups to follow what they've previously done on non-Exadata RAC/ASM environments. However, usually the DATA and RECO disk groups are sufficient and allow for the best flexibility, growth and performance. One reason to create multiple data disk groups could be wanting two different redundancy options - for example, a prod database on high redundancy and a test database on normal redundancy; but there aren't many reasons to change it.
    To add disk groups you will also need to re-organize and add new grid disks. You should keep the grid disk prefixes and corresponding disk group names equivalent. Keep in mind that all of the Exadata storage is allocated to the existing grid disks and disk groups - and this is needed to keep the necessary balanced configuration and maximize performance. So adding and resizing the grid disks and disk groups is not a trivial task if you already have running DB environments on the Exadata, especially if you do not have sufficient free space in DATA and RECO to allow dropping all the grid disks in a failgroup - because that would require removing data before doing the addition and resize of the grid disks. I've also encountered problems with resizing grid disks that end up forcing you to move data off the disks - even if you think you have enough space to allow dropping an entire fail group.
    Be sure to accurately estimate the size of the disk groups - factoring in the redundancy and fail groups, and reserving space to handle cell failure - as well as the anticipated growth of data on the disk groups, because if you run out of space in a disk group you will need to either go through the process of resizing all the grid disks and disk groups again, or purchase an Exadata storage expansion or additional Exadata racks. This is one of the reasons why it is often best to stick with just the existing DATA and RECO.
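    As a starting point for that estimate, something like the following shows the current size, redundancy and usable free space of each disk group (a minimal sketch, run against the ASM instance):
    -- USABLE_FILE_MB already accounts for redundancy and the space reserved to
    -- rebalance after a failure, so it is the figure to watch when planning growth
    select name,
           type                        as redundancy,
           round(total_mb/1024)        as total_gb,
           round(free_mb/1024)         as free_gb,
           round(usable_file_mb/1024)  as usable_file_gb
      from v$asm_diskgroup
     order by name;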
    To add new grid disks and disk groups and resize the others, become very familiar with and follow the steps given in the "Resizing Storage Griddisks" section of Ch. 7 of the Exadata Machine Owner's Guide, as well as the information and examples in MOS note "Resizing Grid Disks in Exadata: Examples (Doc ID 1467056.1)". I also often like to refer to MOS note "How to Add Exadata Storage Servers Using 3 TB Disks to an Existing Database Machine (Doc ID 1476336.1)" when doing grid disk addition or resize operations. The use case may not match, but many steps given in this note are helpful, as it discusses adding new grid disks and even creating a new disk group for occasions when you have cell disks of different sizes.
    Also, be sure you stay true to the Exadata best practices for storage as documented in "Oracle Exadata Best Practices (Doc ID 757552.1)". For example, the total number of grid disks per storage server for a given prefix name (e.g. DATA) should match across all storage servers where the given prefix name exists. Also, to maximize performance you should have each grid disk prefix, and corresponding disk group, spread evenly across each storage cell. You'll also want to maintain fail group integrity, separating fail groups by storage cell and allowing the loss of a cell without losing data.
    Hope that helps. Good luck!
    - Kasey

  • Database feature Derived Table and performance

    We recently migrated our data warehouse from DB2 to Oracle Exadata. Since the migration, I have noticed that some of the reports have become extremely slow; e.g. a report that was running in 7 seconds before now runs for over 6 minutes. In the database feature tab of the physical layer for the database connection, I have the database feature Derived Table turned on. If I turn this setting off, the same report runs in 3 seconds. The Derived Table feature is supposed to help with the performance of the reports, but in this case it seems to be hurting performance rather than helping.
    Should this setting be turned off? What are the side effects of turning it off? We cannot fully test this setting change, so I want to reach out to someone who has run into similar issues and find out what they did to remedy them.
    Any help will be appreciated.
    Thanks!

    So the answer is "yes" but not quite in the way you might expect.
    You have created the object where you can "borrow" the LOV for your derived table prompt. What you need to do is this. First, you need to create another object in your universe (I put it in the same class as the "code" object) that contains your object description. Then do this:
    1. Double-click on the code object
    2. Select the properties tab
    3. Click on the Edit button. This does not edit the object definition itself, it edits the LOV definition.
    4. On the query panel, add the Description object created earlier. Make sure it is the second object in the query panel.
    5. You can opt to sort either by code or description, whichever makes sense to your users
    6. Click "OK" to save the query definition, or click "Run" if you want to populate the LOV with values.
    7. Make sure you click "Export with Universe" on the properties tab once you have customized the LOV, else your computer is the only one that will include the new LOV definition
    8. The Hierarchical Display box may also be checked; for this case you have a code + description which are not hierarchical, so clear that box
    That's it. When you export your universe, the LOV will go with it. When someone asks for a list of values for the code, the list will show both codes and descriptions, but only the code will be selected.
    You do not need to make any changes to your current derived table prompt once the LOV has been customized.

  • SQL Tuning for Exadata

    Hi,
    I would like to know of any SQL tuning methods specific to Oracle Exadata that could improve the performance of the database.
    I am aware that Oracle Exadata runs Oracle 11g, but I would like to know whether there is any tuning scope with respect to SQL on Exadata.
    regards
    sunil

    Well, there are some things that are very different about Exadata. All the standard Oracle SQL tuning you have already learned should not be forgotten, as Exadata runs standard 11g database code, but there are many optimizations that have been added that you should be aware of. At a high level, if you are doing OLTP-type work you should be trying to make sure that you take advantage of Exadata Smart Flash Cache, which will significantly speed up your small I/Os. But long-running queries are where the big benefits show up. The high-level tuning approach for them is as follows:
    1. Check to see if you are getting Smart Scans.
    2. If you aren't, fix whatever is preventing them from being used.
    We've been involved in somewhere between 25 and 30 DB Machine installations now, and in many cases a little bit of effort changes performance dramatically. If you are only getting a 2 to 3x improvement over your previous platform on these long-running queries, you are probably not getting the full benefit of the Exadata optimizations. So the first step is learning how to determine whether you are getting Smart Scans or not, and on what portions of the statement. Wait events, session statistics, V$SQL and SQL Monitoring are all viable tools that can show you that information.
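    As an illustration of the session statistics route, something like the following, run right after the query of interest in the same session, gives a first indication of whether a smart scan actually happened (a minimal sketch; the statistic names are from v$statname on 11.2):
    -- A non-zero 'cell scans' value, and 'returned by smart scan' bytes that are large
    -- relative to the bytes eligible for offload, suggest the statement was smart-scanned
    select sn.name, ms.value
      from v$mystat ms
      join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name in ('cell scans',
                       'cell physical IO bytes eligible for predicate offload',
                       'cell physical IO interconnect bytes returned by smart scan');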

  • Slow query results for simple select statement on Exadata

    I have a table with 30+ million rows in it which I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same, 10k rows every 6 seconds or so.
    I ran an execution plan and it returns just this:
    Plan
    SELECT STATEMENT  ALL_ROWS  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
         1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
    I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) which can limit performance by connection or user, but if there is, what should I look for and how can I check it?
    The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before, and I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?
    Edited by: k1ng87 on Apr 24, 2013 7:58 AM

    k1ng87 wrote:
    "I've noticed the same querying speed using Toad (export to CSV)."
    That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), letting Toad format the data into CSV (processing the data, adding a little bottleneck) and then writing the data to another hard disk (more disk I/O = more bottleneck).
    I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.
    "... and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?"
    Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.
    "Is that even possible as our system resides off site, so the traffic is going through multiple networks?"
    Ouch... yes, that could certainly be responsible.
    "I don't think it's the network though, because when I run both at the same time, they both are still querying at about 10k rows every 6 seconds."
    I don't think your performance measuring is accurate. What happens if you actually do the cube in Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine)?

  • Standby DB running on different hardware if production on Exadata v2

    We were looking to buy an Exadata V2 Database Machine, but two things are stopping us.
    As per Oracle:
    1) It is impossible to connect existing Fiber storage to Exadata V2 to offload archived data from primary storage.
    2) A standby database can't be built on any hardware (even with the same OS and same DB version) other than an Exadata V2 Database Machine.
    We could probably survive with limitation #1, but buying a second Exadata V2 Database Machine is too much, especially for the DR side.
    Does anyone have experience with these problems, or know of a document that answers these two questions?
    Thanks in advance.

    Yes, it is possible to build the standby on non-Exadata hardware, as long as you are not using the Exadata Hybrid Columnar Compression feature.
    "With Oracle Database 11g Release 2, the Exadata Storage Servers in the Sun Oracle Database Machine also enable new hybrid columnar compression technology that provides up to a 10 times compression ratio, with corresponding improvements in query performance. And, for pure historical data, a new archival level of hybrid columnar compression can be used that provides up to 50 times compression ratios."
    When you enable this feature, you can't build the standby database on different hardware. It won't work.
    I am still researching what else could be a stopper - or, put another way, which other Exadata V2/11gR2 features I should avoid in order to have a standby database working on non-Exadata V2 hardware.

  • Oracle Exadata, db_file_multiblock_read_count and sort_multiblock_read_count

    Hi,
    I have an Oracle RDBMS 11gR2 EE that uses ASM and Exadata.
    We have some processes that read huge amounts of data, "shuffle" it and insert it into tables. In total, about 20% of the data is processed.
    The total size of the database is about 12 TB.
    Most of these processes perform full table scans, also making use of Exadata smart scans.
    Some ad hoc indexes are created and can be used as well. All of this is done through hints (because the data is shuffled, statistics are not accurate anymore).
    As a consequence, we have quite noticeable and very long running processes (from hours to a few days).
    They also consume a significant amount of temporary tablespace.
    We would like to investigate if it is possible to "speed up" that whole process.
    I have found that the following parameter could be used to minimize disk I/O:
    db_file_multiblock_read_count
    But I have read pros and cons about it... so I am confused about how to use it properly.
    Additionally, there is also this parameter:
    sort_multiblock_read_count
    Do these parameters also apply when smart scans are used?
    If these parameters can improve throughput, how can I find out the size they should be?
    What are the advantages and disadvantages of using them?
    Thanks in advance for sharing your experience.
    Kind Regards.

    Hi Franck,
    Not all tables are compressed, and the indexes are used to access intermediate look-up tables.
    The content of the tables is practically "flushed out" as part of an anonymization process. So, at that stage statistics are not accurate anymore, and thus a hint is the only way to "force" a full scan (as the whole content of the tables needs to be accessed). Whether this is the right plan is another question that I cannot directly answer, as the "logic" in the statements is not always the same. The proportion of full table scans served by smart scans ranges from 60 to 90%, which I think is quite good (although my knowledge of Exadata is rather limited).
    I agree with you not to change the mentioned parameters.
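    For reference, something like this confirms whether db_file_multiblock_read_count has been left at its automatically derived default (a minimal sketch):
    -- ISDEFAULT = TRUE means Oracle is deriving the value itself, which is generally
    -- preferable when relying on serial direct reads and smart scans
    select name, value, isdefault, ismodified
      from v$parameter
     where name = 'db_file_multiblock_read_count';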
    Kind Regards.

  • Need help with performance & memory tuning in a data warehousing environment

    Dear All,
    Good Day.
    We successfully migrated from a 4-node half-rack V2 Exadata to a 2-node quarter-rack X4-2 Exadata. However, we are facing performance issues with only a few loads, while others have in fact shown good improvement.
    1. The total memory on the OS is 250GB for each node (two compute nodes for a quarter rack).
    2. I would be grateful if someone could help me with the best equation to calculate the SGA and PGA (and also the allocation of shared_pool, large_pool, etc.), or advise whether Automatic Memory Management is advisable.
    3. We ran an exachk report, which suggested that we configure huge pages.
    4. When we tried to increase the SGA to more than 30GB, the system didn't allow us to do so. We have, however, set the PGA to 85GB.
    5. Also, we have observed that some of the queries involving joins and indexes are taking longer.
    Any advise would be greatly appreciated.
    Warm Regards,
    Vikram.

    Hi Vikram,
    There is no single formula for SGA and PGA, but the usual practice for OLTP environments is, for a given amount of memory dedicated to Oracle (which should not exceed roughly 80% of the server's total RAM), to give about 80% to the SGA and 20% to the PGA. For data warehouse environments the values are more like 60% SGA and 40% PGA, or even 50%-50%. Also, some documents discourage keeping the database in Automatic Memory Management when you are using a big SGA (> 10GB).
    As you are using a RAC environment, you should configure HugePages. If the system is not allowing you to increase memory, take a look at the kernel shared memory and semaphore parameters; they are probably set too low. As for the poorly performing queries, we would need to see the execution plans and table structures, and you should also analyze whether smart scan is coming into play.
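    As a purely illustrative sketch of the 60/40 data warehouse split above, if roughly 200GB of each node's 250GB were dedicated to Oracle (the figures are placeholders, not a recommendation for this system, and the HugePages pool must be sized to match the SGA):
    -- Fixed SGA/PGA targets per instance rather than Automatic Memory Management
    -- (AMM cannot use HugePages on Linux)
    alter system set sga_target = 120G scope=spfile sid='*';
    alter system set pga_aggregate_target = 80G scope=spfile sid='*';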
    Regards.

  • Will RAC's performance bottleneck be the shared disk storage?

    Hi All
    I'm studying RAC and I'm concerned about RAC's I/O performance bottleneck.
    If I have 10 nodes and they use the same storage disks to hold the database, then
    they will do I/O to those disks simultaneously.
    Maybe we get more latency...
    Will that be a performance problem?
    How does RAC solve this kind of problem?
    Thanks.

    J.Laurence wrote:
    "I see FC can solve the problem with bandwidth (throughput)."
    There are a couple of layers in the I/O subsystem for RAC.
    There is Cache Fusion, as already mentioned. Why read a data block from disk when another node has it in its buffer cache and can provide it instead (over the Interconnect communication layer)?
    Then there are the actual pipes between the server nodes and the storage system. Fibre is slow and not what the latest RAC architectures (such as Exadata) use.
    Traditionally, you pop an HBA card into the server that provides you with 2 fibre channel pipes to the storage switch. These usually run at 2Gb/s and the I/O driver can load balance and fail over. So in theory it can scale to 4Gb/s and provide redundancy should one pipe fail.
    Exadata and more "modern" RAC systems use HCA cards running InfiniBand (IB). This provides scalability of up to 40Gb/s. They are also dual port, which means that you have 2 cables running into the storage switch.
    IB supports a protocol called RDMA (Remote Direct Memory Access). This essentially allows memory to be "shared" across the IB fabric layer - and is used to read data blocks from the storage array's buffer cache into the local Oracle RAC instance's buffer cache.
    Port to port latency for a properly configured IB layer running QDR (4 speed) can be lower than 70ns.
    And this does not stop there. You can of course add a huge memory cache in the storage array (which is essentially a server with a bunch of disks). Current x86-64 motherboard technology supports up to 512GB RAM.
    Exadata takes it even further as special ASM software on the storage node reconstructs data blocks on the fly to supply the RAC instance with only relevant data. This reduces the data volume to push from the storage node to the database node.
    So Fibre Channel in this sense is a bit dated. As is GigE.
    But what about the hard drives' read and write I/O? Not a problem, as the storage array deals with that. A RAC instance that writes a data block writes it into the storage buffer cache, where the storage array software manages that cache and will do the physical write to disk.
    Of course, it will stripe heavily and will have 24+ disk controllers available to write that data block, so do not think of I/O latency in terms of the actual speed of a single disk.

  • Typical metric thresholds and patterns for monitoring Exadata

    I’m looking for any best practices or a list of recommended settings for the following:
    - Metric Threshold settings to manage Exadata with OEM12c.
    - List of main and/or typical metrics used for setting up alerts in OEM12c for Exadata.
    Thanks in advance,
    Carlos.

    Hello Ravi,
    This is a 10.2.0.4 (4-node) RAC on Linux.
    This is an alert text:
    Host=WEUSRV011.intrum.net
    Target type=Database Instance
    Target name=ie_colldesk_iecolld1
    Categories=Performance
    Message=Metrics "Global Cache Average Current Get Time" is at 0.615
    Severity=Warning
    Event reported time=Feb 25, 2013 9:44:05 PM CET
    Target Lifecycle Status=Production
    Comment=WEU Oracle Production Hardware
    Operating System=Linux
    Platform=x86_64
    Event Type=Metric Alert
    Event name=rac_global_cache:currentgets_cs
    Metric Group=Global Cache Statistics
    Metric=Global Cache Average Current Block Request Time (centi-seconds)
    Metric value=0.615384615384615
    Key Value=SYSTEM
    Rule Name=Locks_Rule,rule 96
    Rule Owner=A_GUTIERREZ
    Update Details:
    Metrics "Global Cache Average Current Get Time" is at 0.615
    And
    Host=tstcolldesk01.intrum.net
    Target type=Database Instance
    Target name=COLLDESK_COLLDESK1
    Categories=Performance
    Message=Metrics "Global Cache Average Current Get Time" is at 0.632
    Severity=Warning
    Event reported time=Feb 25, 2013 9:03:00 PM CET
    Comment=WEU Oracle test Environment
    Operating System=Linux
    Platform=x86_64
    Event Type=Metric Alert
    Event name=rac_global_cache:currentgets_cs
    Metric Group=Global Cache Statistics
    Metric=Global Cache Average Current Block Request Time (centi-seconds)
    Metric value=0.631578947368421
    Key Value=SYSTEM
    Rule Name=Locks_Rule,rule 96
    Rule Owner=A_GUTIERREZ
    Update Details:
    Metrics "Global Cache Average Current Get Time" is at 0.632
    The metric definitions are:
    Global Cache Average Current Block Request Time (centi-seconds)
    Global Cache Average CR Block Request Time (centi-seconds)
    And the metric values defined at the template level are:
    Warning Threshold 1.2
    Critical Threshold 3
    Comparison Operator >
    Occurrences Before Alert 3
    Corrective Actions None
    I need to explore "select * from dba_thresholds".
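    For example, something along these lines should show what thresholds are actually in effect for the Global Cache metrics (a minimal sketch):
    -- Server-side alert thresholds stored in the database
    select metrics_name, warning_operator, warning_value,
           critical_operator, critical_value, consecutive_occurrences
      from dba_thresholds
     where metrics_name like 'Global Cache%';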
    Thanks
    Best regards
    Arturo
