Exadata Smart Scan

Hi,
Can someone explain to me what Exadata Smart Scan is?
How can I see if it is enabled or disabled?
How can I enable or disable it?
When should I use it and when not?
Kind Regards,
Laurent Baylac

Hi,
Basically, Smart Scan is an optimization of the transfers from the storage cells to the database servers. It applies only to direct-path reads (reads that bypass the buffer cache), so it is limited to large reads of big tables (large tables not in the buffer cache and/or parallel query).
Optimization includes:
- predicate offloading: some rows are filtered by the storage software
- projection offloading: only the needed columns are transferred
- join offloading: similar to predicate offloading, using a bloom filter to add predicates
- storage indexes: some I/O can be avoided when an in-memory map of min/max column values shows that the required rows cannot be in certain regions of blocks
- decompression offloading: decompression is done by the storage cell so that predicate/projection offloading can be applied
We can say that it has the flexibility and availability of a SAN without the transfer limitation.
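To answer the "enabled or disabled" part: offloading is controlled by the cell_offload_processing parameter (TRUE by default) and can be switched at session level. A minimal sketch, assuming an 11.2 database running on Exadata storage:
-- check whether smart scan offloading is enabled (TRUE is the default)
select name, value from v$parameter where name = 'cell_offload_processing';
-- disable or re-enable offloading for the current session
alter session set cell_offload_processing = false;
alter session set cell_offload_processing = true;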
Regards,
Franck.

Similar Messages

  • Exadata and smart scans

    Hi,
    I have an Oracle RDBMS 11gR2 that runs on Exadata.
    I have several processes that run full table scans on reasonably "huge" tables (up to a few GB).
    I would expect that a "smart scan" be used to "speed up the process".
    The runs have already been working for hours.
    I get execution plans of that kind:
    | Id  | Operation                    | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT             |                           |       |       |  3532K(100)|          |
    |   1 |  LOAD AS SELECT              |                           |       |       |            |          |
    |   2 |   FILTER                     |                           |       |       |            |          |
    |   3 |    HASH JOIN                 |                           | 24634 |   344M|  3532K  (1)| 15:42:08 |
    |   4 |     TABLE ACCESS STORAGE FULL| DPV_97_130556316111576318 | 24634 |   625K|   400K  (2)| 01:46:42 |
    |   5 |     TABLE ACCESS STORAGE FULL| DPV_96_130556316111576318 |   194K|  2724M|  3132K  (1)| 13:55:26 |
    Can someone explain to me how I can check, via a query, whether smart scan is being used?
    If it is not used, can someone explain how to "enable" it and under which conditions I should?
    Thanks in advance for any tips.
    Kind Regards

    Hello,
    If the execution plan shows the STORAGE keyword (e.g. TABLE ACCESS STORAGE FULL), then the query is a candidate for smart scan.
    A query goes for smart scan when the conditions below are met:
    1. Segment size > _small_table_threshold (hidden init.ora parameter)
    2. The buffer cache should not already contain more than 50% of the table's blocks.
    3. Dirty buffers in the buffer cache should be less than 25%.
    Smart scan is applied to queries that use full table scans, parallel queries, and index fast full scans.
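    As a sketch of how to check it via a query (the &sql_id below is just a placeholder for the statement you are interested in), the offload columns of V$SQL show whether a cursor was eligible for smart scan and how much data actually came back over the interconnect:
    -- non-zero IO_CELL_OFFLOAD_ELIGIBLE_BYTES means the statement qualified for offload;
    -- comparing it with IO_INTERCONNECT_BYTES shows how much the cells filtered out
    select sql_id,
           io_cell_offload_eligible_bytes,
           io_interconnect_bytes,
           io_cell_offload_returned_bytes
    from   v$sql
    where  sql_id = '&sql_id';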
    Hope this will be helpful.
    Regards,
    Thimmappa

  • Smart scan not working with Insert Select statements

    We have observed that smart scan is not working with insert ... select statements but works when the select statements are executed alone.
    Can you please help us to explain this behavior?

    There is a specific exadata forum - you would do better to post the question there: Exadata
    I can't give you a definitive answer, but it's possible that this is simply a known limitation similar to the way that "Create table as select" won't run the select statement the same way as the basic select if it involves a distributed query.
    Regards
    Jonathan Lewis

  • Best Practices on SMART scans

    For Exadata X2-2, is there a best practices document for enabling SMART scans for all the application code on Exadata X2-2?

    We cover more in our book, but here are the key points:
    1) Smart scans require a full segment scan to happen (full table scan, fast full index scan or fast full bitmap index scan)
    2) Additionally, smart scans require a direct path read to happen (reads directly to PGA, bypassing the buffer cache) - this is automatically done for all parallel scans (unless parallel_degree_policy has been changed to AUTO). For serial sessions the decision to do a serial direct path read depends on the segment size, the _small_table_threshold parameter value (which is derived from the buffer cache size) and how many blocks of the segment are already cached. If you want to force the use of serial direct path reads for your serial sessions, then you can set "_serial_direct_read" = always (a short sketch follows at the end of this reply).
    3) Thanks to the above requirements, smart scans are not used for index range scans, index unique scans and any single row/single block lookups. So if migrating an old DW/reporting application to Exadata, then you probably want to get rid of all the old hints and hacks in there, as you don't care about indexes for DW/reporting that much anymore (in some cases not at all). Note that OLTP databases still absolutely require indexes as usual - smart scans are for large bulk processing ops (reporting, analytics etc, not OLTP style single/a few row lookups).
    Ideal execution plan for taking advantage of smart scans for reporting would be:
    1) accessing only required partitions thanks to partition pruning (partitioning key column choices must come from how the application code will query the data)
    2) full scan the partitions (which allows smart scans to kick in)
    2.1) no index range scans (single block reads!) and ...
    3) joins all the data with hash joins, propagating results up the plan tree to the next hash join, etc.
    3.1) This allows bloom filter predicate pushdown to the cells to pre-filter rows fetched from the probe row-source of the hash join.
    So, simple stuff really - and many of your every-day-optimizer problems just disappear when there's no trouble deciding whether to do a full scan vs a nested loop with some index. Of course this was a broad generalization, your mileage may vary.
    Even though DWs and reporting apps benefit greatly from smart scans and some well-partitioned databases don't need any indexes at all for reporting workloads, the design advice does not change for OLTP at all. It's just RAC with faster single block reads thanks to flash cache. All your OLTP workloads, ERP databases etc still need all their indexes as before Exadata (with the exception of any special indexes which were created for speeding up only some reports, which can take better advantage of smart scans now).
    Note that there are many DW databases out there which are not used only for brute force reporting and analytics, but also for frequent single row lookups (golden trade warehouses or other reference data being one example). So these would likely still need the indexes to support fast single (or a few) row lookups. So it all comes down to the nature of your workload, how many rows you're fetching and how frequently you'll be doing it.
    And note that smart scans only make data access faster, not sorts, joins, PL/SQL functions coded into the select column list or where clause, or application loops doing single-row processing... These still work as usual (with the exception of the bloom filter pushdown optimizations for hash joins)... Of course when moving to Exadata from your old E25k you'll see a speedup anyway, as the Xeons with their large caches are just fast :-)
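    A minimal sketch of point 2 above - note that "_serial_direct_read" is a hidden parameter, so treat this as a testing aid and check with Oracle Support before relying on it in production:
    -- force serial direct path reads for this session so that smart scan can kick in
    -- (hidden parameter - shown here only as an illustration)
    alter session set "_serial_direct_read" = always;
    -- revert to the default behavior
    alter session set "_serial_direct_read" = auto;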
    Tanel Poder
    Blog - http://blog.tanelpoder.com
    Book - http://apress.com/book/view/9781430233923

  • HP Smart Scan Software 2.7 - No Profiles - ScanJet 8390

    I have an HP ScanJet 8390 and when I run HP Smart Scan 2.7 all my profiles are missing. I have removed and reloaded 2.7 three times with the same result. I have even attempted to manually import profiles from my wife's laptop (i.e. the xml file) without any success. HELP

    Same problem on my scanjet 7000

  • HP Smart scan software and saving

    HP Smartscan v 2.70.000 on an HP Desktop and HP Scanjet 9000.
    When we scan, say, 50 pages using the HP Smart Scanning software, we can then see all 50 pages on the screen in the preview section. But when we save the scanned document as a PDF it sometimes only saves some of the pages, not all of them. This has only recently started to happen and it is not every time! Help. Regards, Nigel

    Same problem on my scanjet 7000

  • Oracle Exadata, db_file_multiblock_read_count and sort_multiblock_read_count

    Hi,
    I have an Oracle RDBMS 11gR2 EE that uses ASM and Exadata.
    We have some processes that read a huge amount of data, "shuffle" it and insert it into tables. In total, about 20% of the data is processed.
    The total size of the database is about 12 TB.
    Most of these processes perform full table scans, also using Exadata smart scans.
    Some ad-hoc indexes are created and can be used as well. All this through hints (because the data is shuffled, statistics are not accurate anymore).
    As a consequence, we have quite noticeable and very long-running processes (from hours to a few days).
    It consumes also a significant amount of temporary tablespace.
    We would like to investigate if it is possible to "speed up" that whole process.
    I have found that the following parameter could be used to minimize disk I/O:
    db_file_multiblock_read_count
    But I have read pros and cons about it... so I am confused about how to use it properly.
    Additionally, there is also this parameter:
    sort_multiblock_read_count
    Do these parameters also apply when using smart scans?
    If these parameters can improve throughput, how can I find out what size they should be?
    What are the advantages and disadvantages of using them?
    Thanks in advance for sharing your experience.
    Kind Regards.

    Hi Franck,
    Not all tables are compressed, and the indexes are used to access intermediate look-up tables.
    The content of the tables is practically "flushed out" as part of an anonymization process. So, at that stage statistics are not accurate anymore, and thus a hint is the only way to "force" a full scan (as the whole content of the tables needs to be accessed). Whether this is the right plan is another question that I cannot directly answer, as the "logic" in the statements is not always the same. The proportion of full table scans using smart scans ranges from 60 to 90%, which I think is quite good (although my knowledge of Exadata is rather limited).
    I agree with you not to change the mentioned parameters.
    Kind Regards.

  • Exadata performance

    In our exachk results, there is one item for shared_server.
    Our current production environment has shared_servers set to 1 (shared_servers=1).
    Now I got those from exachk:
    Benefit / Impact:
    As an Oracle kernel design decision, shared servers are intended to perform quick transactions and therefore do not issue serial (non PQ) direct reads. Consequently, shared servers do not perform serial (non PQ) Exadata smart scans.
    The impact of verifying that shared servers are not doing serial full table scans is minimal. Modifying the shared server environment to avoid shared server serial full table scans varies by configuration and application behavior, so the impact cannot be estimated here.
    Risk:
    Shared servers doing serial full table scans in an Exadata environment lead to a performance impact due to the loss of Exadata smart scans.
    Action / Repair:
    To verify shared servers are not in use, execute the following SQL query as the "oracle" userid:
    SQL>  select NAME,value from v$parameter where name='shared_servers';
    The expected output is:
    NAME            VALUE
    shared_servers  0
    If the output is not "0", use the following command as the "oracle" userid with properly defined environment variables and check the output for "SHARED" configurations:
    $ORACLE_HOME/bin/lsnrctl service
    If shared servers are confirmed to be present, check for serial full table scans performed by them. If shared servers performing serial full table scans are found, the shared server environment and application behavior should be modified to favor the normal Oracle foreground processes so that serial direct reads and Exadata smart scans can be used.
    Oracle lsnrctl service on current production environments shows all 'Local Server'.
    How should I proceed here?
    Thanks again in advance.

    Thank you all for your help.
    Here is an output of lsnrctl service:
    $ORACLE_HOME/bin/lsnrctl service
    LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 14-JUL-2014 14:15:24
    Copyright (c) 1991, 2013, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
      Instance "+ASM2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:1420 refused:0 state:ready
             LOCAL SERVER
    Service "PREME" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREMEXDB" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "D000" established:0 refused:0 current:0 max:1022 state:ready
             DISPATCHER <machine: prodremedy, pid: 16823>
             (ADDRESS=(PROTOCOL=tcp)(HOST=prodremedy)(PORT=61323))
    Service "PREME_ALL_USERS" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_TXT_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CORP_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_DISCO_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_EAST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM_WR" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_RPT" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_WEST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    The command completed successfully
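    The only non-dedicated handler above is the D000 dispatcher for the XDB service, so regular sessions appear to use dedicated servers. As an extra cross-check (a sketch), the SERVER column of V$SESSION shows how current sessions are actually connected:
    -- dedicated connections show SERVER = 'DEDICATED'; sessions coming in through
    -- shared servers show 'SHARED' while active (or 'NONE' while idle)
    select server, count(*)
    from   v$session
    group  by server;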

  • How to do performance tuning in EXadata X4 environment?

    Hi, I am pretty new to Exadata X4 and we had a database (OLTP/load mixed) created and data loaded.
    Now the application is testing against this database on exadata.
    However, they claimed the test results were slower than in the current production environment, and they sent out the explain plans, etc.
    I would like advice from the pros here on what specific Exadata tuning techniques I can use to find out why this is happening.
    Thanks a bunch.
    db version is 11.2.0.4

    Hi 9233598 -
    Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata specific features and best practice as applicable. Reference MOS note: Oracle Exadata Best Practices (Doc ID 757552.1) to help configuring Exadata according to the Oracle documented best practices.
    When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on the Exadata than on the non-Exadata environment. You need to determine what specifically is running slower on the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architecture. How is the application connected to the database in the non-Exadata vs Exadata environment - what are the differences, if any, in the network architecture in between and in the application layer?
    You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start to identify what the difference is in the database execution between the environments. Make sure you have the execution plans of both environments to compare. I recommend using the Real-Time SQL Monitoring tool - access it through EM GC/CC from the performance page or using the dbms_sqltune package. Execute the comparison SQL and use the RSM reports on both environments to help verify you have accurate statistics, see where the bottlenecks are, and understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload any specific statement is doing (OLTP vs Batch/DW) you may need to look into tuning to encourage Exadata smart scans and using parallelism to help.
    The SGA and PGA need to be sized appropriately... depending on your environment and workload, and how these were sized previously, your SGA may be sized too big. SGA sizes usually do not need to be as big on Exadata - this is especially true for DW-type workloads. A DW workload should rarely need an SGA sized over 16GB. Alternatively, PGA sizes may need to be increased. But this all depends on evaluating your environment. Use the AWR to understand what's going on... however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not specific to Exadata and can be misleading as to the recommended size. Too large an SGA will discourage direct path reads and thus smart scans - and depending on the statement and the data being returned it may be better to smart scan than to serve a mix of data from the buffer cache and disk.
    You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata - but many indexes may no longer be needed and may need to be removed. For the most part you only need PK/FK indexes and true "OLTP" indexes on Exadata. Others may be slowing you down, because using them prevents you from taking advantage of the Exadata storage offloading features.
    You may also want to evaluate and determine whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes: 401749.1, 361323.1 and 1392497.1) and write-back caching (see MOS note: 1500257.1).
    I would also recommend installing the Exadata plugins into your EM CC/GC environment. These can help drill into the Exadata storage cells and see how things are performing at that layer. You can also look up and understand the cellcli interface to do this from command line - but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. You also probably want to at least enable and set an IORM objective - matching your workload - even with just one database on the Exadata.
    I don't know your current production environment infrastructure, but I will say that if things are configured correctly OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata performance for OLTP operations just by "out powering" it from a hardware perspective. For DW operations Exadata should outperform any "relatively" hardware comparable non-Exadata system. The Exadata storage offloading features should allow you to run these type of workloads faster - usually significantly so.
    Hope this helps.
    -Kasey

  • 11g vs Exadata- SQL Tuning approach

    We are developing an OLTP application in which the development database is in 11g and the production database would be on Exadata. We have a testing database on Exadata. The plan is to develop queries and tune them in the 11g development database. After this, the tuned query would be run in the testing Exadata database to ensure it works as expected. The assumption is that further tuning would not be needed on Exadata. I'm trying to see how big the risk is in making this assumption.
    From a sql developer's perspective, how much different is the tuning approach between 11g & Exadata?

    On a recent migration of an OLTP application we did have things that we had to look at. They were almost exclusively related to 10g to 11g upgrade issues though, with one notable exception. Although the application was 99% OLTP, there was an ETL-type process that was run very frequently and depended on full table scans. It was not run using parallel query, but rather had many copies being run simultaneously in a do-it-yourself parallel fashion. Because the tables were relatively small, the serial direct path read mechanism was not being selected, which eliminated all the Exadata Smart Scan optimizations. Long story short, getting Smart Scans to work on that portion of the application made a huge difference.
    So the point of the story is that it's rare to have "pure" OLTP that never does any full table scans. These are the areas where you should be expecting different behavior and thus focusing your testing. On the OLTP side you will want to do the same things you have done with pre-Exadata Oracle, including avoiding as much I/O as possible via the buffer cache. The other thing to watch is to make sure that when you are doing single block I/O it is making use of the Exadata Smart Flash Cache.
    I must say that I think the developers should have access to the platform where the application will live. Otherwise I don't know how you can expect them to learn what's possible and what to expect from the platform.
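    A quick way to confirm whether those full scans are actually being offloaded (a sketch - run it from the session doing the scans, before and after the statement, and compare the values) is to look at the smart scan session statistics:
    -- both values should grow while the scan runs; if they stay at zero, the reads
    -- never went direct path and no smart scan took place
    select n.name, s.value
    from   v$mystat s, v$statname n
    where  s.statistic# = n.statistic#
    and    n.name in ('cell physical IO bytes eligible for predicate offload',
                      'cell physical IO interconnect bytes returned by smart scan');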
    Kerry

  • New Exam - (1z0-027) - Oracle Exadata Database Machine X3 Administrator

    Hi Friends,
    Exadata Database Machine Overview
    Identify the benefits of using Database Machine for different application classes
    Describe the integration of the Database Machine with Oracle Database Clusterware and ASM
    Describe Exadata Storage Server and the different Database Machine configurations
    Describe the key capacity and performance specifications for Database Machine
    Describe the key benefits associated with Database Machine
    Exadata Database Machine Architecture
    Describe the Database Machine network architecture
    Describe the Database Machine software architecture
    Describe the Exadata Storage Server storage entities and their relationships
    Describe how multiple Database Machines can be interconnected
    Describe site planning requirements for Database Machine
    Describe network requirements for Database Machine
    Key Capabilities of Exadata Database Machine
    Describe the key capabilities of Exadata Database Machine
    Describe the Exadata Smart Scan capabilities
    Describe the capabilities of hybrid columnar compression
    Describe the capabilities and uses of the Smart Flash Cache
    Describe the capabilities of the Smart Flash Log
    Describe the purpose and benefits of Storage Indexes
    Describe the capabilities and uses of Exadata Secure Erase
    Exadata Database Machine Initial Configuration
    Describe the installation and configuration process for Database Machine
    Describe the default configuration for Database Machine
    Describe supported and unsupported customizations for Database Machine
    Describe database machine operating system options and configurations
    Configure Exadata Storage Server
    Configure Exadata software
    Create and configure ASM disk groups using Exadata
    Use the CellCLI Exadata administration tool
    Describe Exadata Storage Server security
    I/O Resource Management
    Use Exadata Storage Server I/O Resource Management to manage workloads within a database and across multiple databases
    Configure database resource management plans
    Configure category plans
    Configure inter-database plans
    Describe and configure the I/O resource manager objectives
    Monitor I/O using I/O Metrics
    Recommendations for Optimizing Database Performance
    Optimize database performance in conjunction with Exadata Database Machine
    Monitor and configure table indexes, accounting for the presence of Exadata
    Using Smart Scan
    Describe Smart Scan and the query processing that can be offloaded to Exadata Storage Server
    Describe the requirements for Smart Scan
    Describe the circumstances that prevent using Smart Scan
    Identify Smart Scan in SQL execution plan
    Use database statistics and wait events to confirm how queries are processed
    Consolidation Options and Recommendations
    Describe the options for consolidating multiple databases on Database Machine
    Describe the benefits and costs associated with different options
    Identify the most appropriate approach for consolidation in different circumstances
    Migrating Databases to Exadata Database Machine
    Describe the steps to migrate your database to Database Machine
    Explain the main approaches for migrating your database to Database Machine
    Identify the most appropriate approach for migration in different circumstances
    Identify the most appropriate storage configuration for different circumstances
    Bulk Data Loading using Oracle DBFS
    Use Oracle DBFS for bulk data loading into Database Machine
    Configure the Database File System (DBFS) feature for staging input data files
    Use external tables based on input data files stored in DBFS to perform high-performance data loads
    Exadata Database Machine Platform Monitoring
    Describe the purpose and uses of SNMP for the Database Machine
    Describe the purpose and uses of IPMI for the Database Machine
    Describe the purpose and uses of ILOM for the Database Machine
    Configuring Enterprise Manager Grid Control 11g to Monitor Exadata Database Machine
    Describe the Enterprise Manager Grid Control architecture as it specifically applies to Exadata Database Machine
    Describe the placement of agents, plug-ins and targets
    Describe the recommended configuration for high availability
    Describe the plug-ins associated with Exadata Database Machine and how they are configured
    Use setupem.sh
    Configure a dashboard for Exadata Database Machine
    Monitoring Exadata Storage Servers
    Describe Exadata Storage Server metrics, alerts and active requests
    Identify the recommended focus areas for Exadata Storage Server monitoring
    Monitor the recommended Exadata Storage Server focus areas
    Monitoring Exadata Database Machine Database Servers
    Describe the monitoring recommendations for Exadata Database Machine database servers
    Monitoring the InfiniBand Network
    Monitor InfiniBand switches
    Monitor InfiniBand switch ports
    Monitor InfiniBand ports on the database servers
    Monitor the InfiniBand subnet master location
    Monitor the InfiniBand network topology
    Monitoring other Exadata Database Machine Components
    Monitor Exadata Database Machine components: Cisco Catalyst Ethernet Switch, Sun Power Distribution Units, Avocent MergePoint Unity KVM Switch
    Monitoring Tools
    Use monitoring tools: Exachk, DiagTools, ADRCI, Imageinfo and Imagehistory, OSWatcher
    Backup and Recovery
    Describe how RMAN backups are optimized using Exadata Storage Server
    Describe the recommended approaches for disk-based and tape-based backups of databases on Database Machine
    Describe the recommended best practices for backup and recovery on Database Machine
    Perform backup and recovery
    Connect a media server to the Database Machine InfiniBand network
    Database Machine Maintenance tasks
    Power Database Machine on and off
    Safely shut down a single Exadata Storage Server
    Replace a damaged physical disk on a cell
    Replace a damaged flash card on a cell
    Move all disks from one cell to another
    Use the Exadata cell software rescue procedure
    Patching Exadata Database Machine
    Describe how software is maintained on different Database Machine components
    Locate recommended patches for Database Machine
    Describe the recommended patching process for Database Machine
    Describe the characteristics of an effective test system
    Database Machine Automated Support Ecosystem
    Describe the Auto Service Request (ASR) function and how it relates to Exadata Database Machine
    Describe the implementation requirements for ASR
    Describe the ASR configuration process
    Describe Oracle Configuration Manager (OCM) and how it relates to Exadata Database Machine
    Quality of Service Management
    Describe the purpose of Oracle Database Quality of Service (QoS) Management
    Describe the benefits of using Oracle Database QoS Management
    Describe the components of Oracle Database QoS Management
    Describe the operations of Oracle Database QoS Management
    Thanks
    LaserSoft

    Here's the source document from Oracle Education with the exam details: http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&get_params=p_exam_id:1Z0-027&p_org_id=&lang=
    This is the non-partner equivalent of "Oracle 11g Essentials" (1Z0-536 http://www.oracle.com/partners/en/knowledge-zone/database/1z1-536-exam-page-169969.html) that has existed under various names since 2010, but with additional content relevant to new features like flash logging, QoS management and ASR.
    Marc

  • Exadata for OLTP

    I was reading about Exadata and am confused whether it provides any benefit for OLTP databases, where only minimal rows are retrieved using indexes, thereby making Exadata smart scan and storage indexes useless. The advantage I can think of is the high-speed flash cache and flash logging features.
    But can't this be obtained by using any other high-speed machine and high-speed disks like SSDs used as database flash (an 11g feature)? Can you shed some light on this topic?
    Thanks
    sekar

    Hi,
    migrating to Exadata could be beneficial for an OLTP system: you could fit an entire database of up to 22 TB into the Exadata Smart Flash Cache, and have other nice things like InfiniBand, smart scans (which could be useful for OLTP as well), HCC compression etc.
    It's just that it won't be as beneficial as for DSS or mixed systems, and it would cost a lot. I think that if you don't have an analytic component on the top of your OLTP, and if you don't require things like High Availability etc. then you may be better off with a regular Oracle 12c database on SSD storage.
    But these are just very basic considerations, details depend on your requirements. You will need to sit down and calculate costs for different options, then compare them.
    I would also recommend reviewing the database thoroughly -- it could be possible to achieve the required performance by tuning, not by hardware upgrades. You could save your company hundreds of thousands of dollars if you do that.
    Best regards,
      Nikolay

  • Pin redo logs into smart flash cache (Exadata Flash PCI F20 cache)

    Need to know if we could pin redo logs into smart flash cache.
    For example to pin table..
    we use: alter table dhaval storage (cell_flash_cache keep);
    Similarly can we pin redo logs into flash cache ?
    If not, what is the alternative to put redo logs into flash cache ?

    At Oracle OpenWorld the Exadata Smart Flash Log feature was announced. The Smart Flash Log feature requires Exadata Storage Server software 11.2.2.4.0 or later, and database version 11.2.0.2 Bundle Patch 11 or greater. This feature allows a modest amount of flash to be used as a secondary write destination for redo. It writes redo to both flash and disk and completes the call to the database as soon as the first of the two finishes. By doing so it improves user transaction response time, and increases overall database throughput for I/O-intensive workloads.
    Regards,
    Greg Rahn
    http://structureddata.org

  • SQL Tuning for Exadata

    Hi,
    I would like to know if there are any SQL tuning methods specific to Oracle Exadata that could improve the performance of the database.
    I am aware that Oracle Exadata runs Oracle 11g, but I would like to know whether there is any tuning scope w.r.t. SQL on Exadata.
    regards
    sunil

    Well there are some things that are very different about Exadata. All the standard Oracle SQL tuning you have learned already should not be forgotten as Exadata is running standard 11g database code, but there are many optimizations that have been added that you should be aware of. At a high level, if you are doing OLTP type work you should be trying to make sure that you take advantage of Exadata Smart Flash Cache which will significantly speed up your small I/O's. But long running queries are where the big benefits show up. The high level tuning approach for them is as follows:
    1. Check to see if you are getting Smart Scans.
    2. If you aren't, fix whatever is preventing them from being used.
    We've been involved in somewhere between 25-30 DB Machine installations now and in many cases, a little bit of effort changes performance dramatically. If you are only getting 2 to 3X improvement over your previous platform on these long running queries you are probably not getting the full benefit of the Exadata optimizations. So the first step is learning how to determine if you are getting Smart Scans or not and on what portions of the statement. Wait events, session statistics, V$SQL, SQL Monitoring are all viable tools that can show you that information.
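    As a sketch of that last point (the &sql_id is just a placeholder, and the report requires the Tuning Pack license), a Real-Time SQL Monitoring report shows per-plan-line activity, including the cell offload efficiency when smart scans are involved, for a long-running statement:
    -- TYPE can also be 'HTML' or 'ACTIVE' for richer output
    select dbms_sqltune.report_sql_monitor(
             sql_id => '&sql_id',
             type   => 'TEXT') as report
    from   dual;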

  • How to verify that a host is having/running Exadata?

    Hi,
    How can I verify that a machine (Unix/Linux) has Exadata?
    Please help.
    Thanks

    It's the storage that's important. You can run a database on Exadata DB servers that doesn't access Exadata storage, in which case Smart Scans etc. will be disabled. So you may want to check your ASM diskgroups. They have an attribute that tells whether they reside on Exadata storage or not. You can use something like this query to show you that information.
    column exadata_storage for a20
    with b as (select group_number, value from v$asm_attribute where name = 'cell.smart_scan_capable')
    select a.name diskgroup, state, b.value exadata_storage
    from v$asm_diskgroup a, b
    where a.group_number = b.group_number(+)
    and a.name like nvl('&diskgroup',a.name)
    order by 1;

    SYS@SANDBOX> @exadata_diskgroups.sql
    Enter value for diskgroup:

    DISKGROUP  STATE       EXADATA_STORAGE
    ---------- ----------- ---------------
    DATA       CONNECTED   TRUE
    RECO       CONNECTED   TRUE
    SCRATCH    MOUNTED     TRUE
    SMITHERS   DISMOUNTED
    STAGE      MOUNTED     TRUE
    SWING      MOUNTED     TRUE
    SYSTEM     MOUNTED     TRUE

    7 rows selected.
