Exadata and OLTP

hello experts,
In our environment, OLTP databases (10g, 11g) run in single-instance mode, and we are planning a feasibility analysis on moving to Exadata.
1) As per Exadata-related articles, Exadata can provide better OLTP performance with its flash cache.
If we can allocate enough SGA for the application workload, then what is the point of moving to Exadata?
2) Are there any other performance benefits for OLTP databases?
3) Since Exadata comes pre-configured as RAC, will it be a problem for non-RAC databases that have not been tested on RAC?
In general, how can we conduct an effective feasibility analysis for moving non-RAC OLTP databases to Exadata?
thanks,
charles

Hi,
1. Flash cache is one of the advantages of Exadata, speeding up your SQL statement processing. Bear in mind that it is done at the storage level and should not be compared directly with a non-Exadata machine.
2. As far as I know, besides faster query elapsed times, we can also benefit from compression (Hybrid Columnar Compression, which is Exadata-specific),
and also, as the storage is located inside the Exadata machine, it will reduce the I/O overhead on your database performance.
3. You can have a single-node database in Exadata. Just set the connection to use the physical IP directly, instead of the SCAN IP (11g) used for RAC.
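For example, a tnsnames.ora connect descriptor of this shape (a minimal sketch; the host, port and service names here are made-up placeholders) points clients at one node's physical VIP rather than the SCAN listener:

SINGLEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exa01db01-vip.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = singledb.example.com)
    )
  )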
I think the best thing to assess is the projected improvement and cost saving if you are going to migrate to Exadata. Assess the processing improvement you will gain, the storage used, and also the license cost. Usually, most shops use Exadata to consolidate their different physical DB boxes.
br,
mrak

Similar Messages

  • Exadata for OLTP

    I was reading about Exadata and am confused about whether it provides anything for OLTP databases, where only a minimal number of rows is retrieved using indexes, thereby making Exadata smart scans and storage indexes useless. The advantages I can think of are the high-speed flash cache and flash logging features.
    But can't this be obtained by using any other high-speed machine with high-speed disks, such as SSDs used as database flash (an 11g feature)? Can you shed some light on this topic?
    Thanks
    sekar

    Hi,
    migrating to Exadata could be beneficial for an OLTP system: you could fit an entire database of up to 22 TB into the Exadata Smart Flash Cache, and you get other nice things like InfiniBand, smart scans (which can be useful for OLTP as well), HCC compression, etc.
    It's just that it won't be as beneficial as for DSS or mixed systems, and it would cost a lot. I think that if you don't have an analytic component on the top of your OLTP, and if you don't require things like High Availability etc. then you may be better off with a regular Oracle 12c database on SSD storage.
    But these are just very basic considerations, details depend on your requirements. You will need to sit down and calculate costs for different options, then compare them.
    I would also recommend reviewing the database thoroughly -- it may be possible to achieve the required performance by tuning rather than by hardware upgrades. You could save your company hundreds of thousands of dollars if you do that.
    Best regards,
      Nikolay

  • Comparing data between BW and OLTP

    hi
    Could someone help me with how to compare extracted data between the BW and OLTP systems?
    Thanks for your precious time spent helping me learn.
    Thanks,
    Jagadeesh

    Jagadish,
    It is not always the case that all extracted data comes from one standard table, making comparison easy; most of the time it comes from multiple tables. There are a couple of things you can do.
    1. If you know any R3 functional people, you can ask them about available reports that produce the same kind of column groupings.
    2. If none exists, or you do not get much help from them, and if you have SQ01 access in R3, you can create a query to bring back the same columns from the corresponding tables (this is harder because you need access and need to find out where the data is coming from, though the info is available on help.sap.com).
    3. More practical would be to select a small subset of data, with the right filters in the selection columns of the InfoPackage, and extract the data; at the same time, use the same selection in RSA3 (on the R3 side) to extract data and compare. This should tell you whether what you have in R3 matches the extracted data.
    4. After your transformations, you can compare the same against your InfoProvider (cube or ODS) contents as well.
    Hope this helps,
    Award points if useful.
    Alex (Arthur Samson)

  • What's the Difference Between OLAP and OLTP?

    HI,
    What's the difference between OLAP and OLTP? And which one is best?
    -Arun.M.D

    Hi,
       The big difference when designing for OLAP versus OLTP is rooted in the basics of how the tables are going to be used. I'll discuss OLTP versus OLAP in the context of the design of dimensional data warehouses. However, keep in mind there are more architectural components that make up a mature, best-practices data warehouse than just the dimensional data warehouse.
    Corporate Information Factory, 2nd Edition by W. H. Inmon, Claudia Imhoff, Ryan Sousa
    Building the Data Warehouse, 2nd Edition by W. H. Inmon
    With OLTP, the tables are designed to facilitate fast inserting, updating and deleting of rows of information with each logical unit of work. The database design is highly normalized, usually to at least 3NF. Each logical unit of work in an online application will have a relatively small scope with regard to the number of tables that are referenced and/or updated. Also, the online application itself handles the majority of the work of joining data to facilitate the screen functions. This means the user doesn't have to worry about traversing large data relationship paths. Expect a heavy dose of lookup/reference tables and much focus on referential integrity between foreign keys. The physical design of the database needs to take into consideration the need for inserting rows when deciding on physical space settings. A good book for getting a solid base understanding of modeling for OLTP is The Data Modeling Handbook: A Best-Practice Approach to Building Quality Data Models by Michael C. Reingruber, William W. Gregory.
    Example: Let's say we have a purchase order management system. We need to be able to take orders for our customers, and we need to be able to sell many items on each order. We need to capture the store that sold the item, the customer that bought the item (and where we need to ship things and where to bill), and we need to make sure that we pull from the valid store_items to get the correct item number, description and price. Our OLTP data model will contain a CUSTOMER_MASTER, a CUSTOMER_ADDRESS_MASTER, a STORE_MASTER, an ITEM_MASTER, an ITEM_PRICE_MASTER, a PURCHASE_ORDER_MASTER and a PURCHASE_ORDER_LINE_ITEM table (see the sketch below). Then we might have a series of M:M relationships, for example: an ITEM might have a different price for specific time periods for specific stores.
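    A minimal DDL sketch of a slice of that normalized model (the column names and types are illustrative assumptions, not from any real system):

    CREATE TABLE customer_master (
      customer_id   NUMBER PRIMARY KEY,
      customer_name VARCHAR2(100) NOT NULL
    );

    CREATE TABLE purchase_order_master (
      po_id       NUMBER PRIMARY KEY,
      customer_id NUMBER NOT NULL REFERENCES customer_master,  -- FK back to the lookup table
      order_date  DATE DEFAULT SYSDATE NOT NULL
    );

    -- Each logical unit of work (taking one order) touches only a handful of rows here.
    CREATE TABLE purchase_order_line_item (
      po_id   NUMBER NOT NULL REFERENCES purchase_order_master,
      line_no NUMBER NOT NULL,
      item_id NUMBER NOT NULL,
      qty     NUMBER NOT NULL,
      PRIMARY KEY (po_id, line_no)
    );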
    With OLAP, the tables are designed to facilitate easy access to information. Today's OLAP tools make the job of developing a query very easy. However, you still want to minimize the extensiveness of the relational model in an OLAP application. Users don't have the will or the means to learn how to work through a complex maze of table relationships. So you'll design your tables with a high degree of denormalization. The most prevalent design scheme for OLAP is the star schema, popularized by Ralph Kimball. The star schema has a FACT table that contains the elements of data that are used arithmetically (counting, summing, averaging, etc.). The FACT table is surrounded by lookup tables called dimensions. Each dimension table provides a reference to those things that you want to analyze by. A good book to understand how to design OLAP solutions is The Data Warehouse Toolkit: Practical Techniques for Building Dimensional Data Warehouses by Ralph Kimball.
    Example: Let's say we want to see some key measures about purchases. We want to know how many items and what sales amount are purchased by what kind of customer across which stores. The FACT table will contain a column for Qty_Purchased and Purchase_Amount. The DIMENSION tables will include ITEM_DESC (contains the item_id and description), CUSTOMER_TYPE, STORE (store_id and store name), and TIME (contains calendar information such as the date, the month_end_date, quarter_end_date, day_of_week, etc.); see the sketch below.
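    A corresponding star-schema sketch (again, the names and types are illustrative assumptions) puts the additive measures in one fact table keyed by the surrounding dimensions:

    CREATE TABLE item_desc (item_id NUMBER PRIMARY KEY, description VARCHAR2(100));
    CREATE TABLE store_dim (store_id NUMBER PRIMARY KEY, store_name VARCHAR2(100));
    CREATE TABLE time_dim  (date_key DATE PRIMARY KEY, month_end_date DATE, day_of_week VARCHAR2(9));

    -- The fact table holds only dimension keys plus additive measures.
    CREATE TABLE purchase_fact (
      item_id         NUMBER NOT NULL REFERENCES item_desc,
      store_id        NUMBER NOT NULL REFERENCES store_dim,
      date_key        DATE   NOT NULL REFERENCES time_dim,
      qty_purchased   NUMBER NOT NULL,
      purchase_amount NUMBER(12,2) NOT NULL
    );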

  • Exadata and Oracle Berkeley DB

    Hello
    What are the key differences between Berkeley DB and Exadata from an application and commercial-usage standpoint?
    Other than the well-known facts that Exadata is the child of the Oracle-Sun marriage and is a competing product against Teradata, and that Berkeley DB is an open-source DB while Exadata is way more expensive:
    From an application and commercial perspective, which is the best option out of Exadata and Berkeley DB?
    If there is any comparative analysis available out there, it would be helpful. I tried googling without any luck.
    Thank you for your time in reading this post.
    -R

    The two are complete opposites. Berkeley DB is the database with the smallest footprint, aimed especially at mobile devices and embedded use. Exadata is just the opposite: a high-end DB, as you said. We cannot compare the two. For applications on mobile devices and for small web applications, you can use Berkeley DB.

  • Exadata and 12c

    Hello Gurus,
    Could anyone share what's the difference between Exadata and the Oracle 12c database? As I searched, only 12c OEM shows up as supporting the Exadata environment, but there is no such thing as a 12c database.
    Please correct me if I misunderstood.
    Thanks,
    Amit.

    Amit_P wrote:
    Hello Gurus,
    Could anyone share what's the difference between Exadata and the Oracle 12c database? As I searched, only 12c OEM shows up as supporting the Exadata environment, but there is no such thing as a 12c database.
    Please correct me if I misunderstood.
    Thanks,
    Amit.
    Oracle 12c has not been released; in other words, it is only vapor-ware now.

  • Exadata and DataGuard

    Hi everyone,
    I have a question about Exadata and DG. Suppose a table with different types of compression: a range-partitioned table with the first N partitions compressed with HCC, partitions N+1 to N+M compressed with native 11.2 compression, and the last partitions uncompressed. The environment consists of a primary Exadata database and a standby non-Exadata database. In case of switchover/failover, when accessing data compressed with HCC, I would have to execute an ALTER TABLE ... MOVE NOCOMPRESS. My question is: could I execute the command only for the partitions with HCC compression? Really, only the first partitions are compressed under that scheme, and a lock on the whole table should not be necessary, only on the first partitions.
    Best regards.

    In a partitioned table, "ALTER TABLE MOVE" can only be done at the partition level, not at the table level. So yes, you only need to rebuild the actual partitions that use Hybrid Columnar Compression; see the sketch below.
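    A minimal sketch of what that could look like (the table, partition and index names are hypothetical):

    -- Decompress only the partitions that were HCC-compressed on the primary.
    ALTER TABLE sales MOVE PARTITION sales_2010_q1 NOCOMPRESS;
    ALTER TABLE sales MOVE PARTITION sales_2010_q2 NOCOMPRESS;

    -- Moving a partition marks its local index partitions UNUSABLE, so rebuild them.
    ALTER INDEX sales_pk REBUILD PARTITION sales_2010_q1;
    ALTER INDEX sales_pk REBUILD PARTITION sales_2010_q2;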
    Marc

  • What is the difference between OLAP and OLTP

    Hi experts, I want to know the difference between OLAP and OLTP, and why OLTP cannot be used in BW instead of OLAP? Need real-time answers please!

    hi Navin,
    Online transactional processing (OLTP) is designed to efficiently process high volumes of transactions, instantly recording business events (such as a sales invoice payment) and reflecting changes as they occur.
    Online analytical processing (OLAP) is designed for analysis and decision support, allowing exploration of often hidden relationships in large amounts of data by providing unlimited views of multiple relationships at any cross-section of defined business dimensions.
    OLTP databases are typically input sources for data warehouses or data marts. The data warehouse in turn is the typical source of data for an OLAP database. The value in an OLAP database is that many complex calculations and predefined queries are preprocessed and results are stored and are available via an OLAP exploitation application allowing quick access to cross-sections of business data. Rapid access to the aggregate information across defined business dimensions allows quick navigation and understanding of relationships.
    The challenge is to find a solution that will both supply the necessary functionality while addressing the technical considerations of your organization. Some other important considerations include choosing technologies that can leverage existing investments in both hardware and software, and are open and integrated so that your applications are adaptable. This ensures flexibility and agility to meet future business demands.
    There are several different modeling techniques. Snowflake and star schemas are just two of many choices. Deciding the best approach for your situation will depend on several factors, most importantly understanding the business issue, the users and their information needs. There is a wealth of information available, including courses, texts and guidelines on this subject alone
    OLAP systems organize data in a multidimensional model that is suitable for decision support. OLAP is the analytical counterpart of OLTP, or Online Transactional Processing. SAP's BW is an OLAP system.
    For more on this, see "The Impact of the OLAP/OLTP Cultural Conflict on Data Warehousing" at this link:
    http://www.georgetown.edu/users/allanr/Impact.pdf
    also check...
    http://expertanswercenter.techtarget.com/eac/knowledgebaseAnswer/0,295199,sid63_gci977813,00.html
    hope it helps...

  • Reg: Exadata and /*+ FULL */ hint -

    Hi Experts,
    Recently our database was migrated to an Exadata environment, and a DBA told me that using the /*+ FULL */ hint in a query increases query performance.
    Doubts:
    1) Does it actually enhance performance?
    2) I read some articles and gathered that Exadata does a kind of "Smart Scan" and "Cell Offloading" which makes queries efficient. But how does the FULL hint contribute here?
    This link talks about it, but I am not sure if it is correct: Some Hints for Exadata SQL Tuning - Part III - SQL Optimizer for Oracle - SQL Optimizer for Oracle - Toad World
    Please share your thoughts and advise.
    Thanks and Regards,
    -- Ranit
    ( on Oracle 11.2.0.3.0 - Exadata )

    Ranit -
    Lots of good advice given by others. A little more to add to the comments already made...
    Using a FULL hint as a general tuning rule on Exadata would not be a good idea, just as the sometimes-proposed notion of dropping all indexes on Exadata to improve performance is not a good idea. As Mohamed mentions, a key performance optimization for Exadata is the smart scan, which does require direct path reads. Pushing for smart scans is what drives these types of ideas, because, other than the index fast full scan, index scans will not smart scan. However, smart scanning isn't always faster. OLTP-type queries that are looking for one or two rows out of many are still usually faster with an index, even on Exadata. If you find that a hint like FULL does improve a query's performance, then, just as with hints in general, it's better to determine why the optimizer is not picking the better execution plan (a full table scan in this case) in the first place, and resolve the underlying issue.
    What you will probably find is that you are over-indexed on Exadata. If you have control of the indexes in your environment, test by making certain indexes invisible and seeing if that helps performance; see the sketch below. Indexes that were created to eliminate a percentage, even a large percentage, of rows for queries, but not almost all rows, are candidates to be dropped. You definitely want to tune for direct path reads.
    This is done by doing the index evaluations described above and making sure your stats are accurate and up to date. As mentioned by Franck, be sure to gather the Exadata system stats, as this is the only thing that helps the optimizer be Exadata-aware. Also, especially if you are running a data warehouse workload, you can look into using parallelism. Running queries in parallel, often even with a degree as low as 2, will help prompt the optimizer to favor direct path reads. Parallelism does need to be kept in check; look into using the DBRM to help control parallelism, possibly even enabling parallel statement queuing.
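    A minimal sketch of the invisible-index test (the index name is hypothetical):

    -- Hide a suspect index from the optimizer without dropping it.
    ALTER INDEX sales_cust_ix INVISIBLE;

    -- Re-run the workload and compare plans and elapsed times.
    -- A single session can still see invisible indexes for comparison:
    ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

    -- Restore the index if performance regressed.
    ALTER INDEX sales_cust_ix VISIBLE;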
    Hopefully these will give you some ideas of things to look at as you enter the realm of SQL Tuning on Exadata.
    Good luck!
    -Kasey

  • Exadata and Standby

    Hi,
    Please advise regarding the following scenario:
    We have an Exadata machine, an X2-2 half rack.
    I want to use Exadata Hybrid Columnar Compression in order to improve performance.
    At the DR site we have a Red Hat Linux x86-64 11gR2 single instance.
    What needs to be done on the non-Exadata instance at the DR site to allow it to apply the archives for data that was compressed on the Exadata machine?
    Is it possible?
    Thanks

    Standby databases can apply redo generated for changes to EHCC objects. However, when the standby is opened, those EHCC-compressed segments cannot be queried. To enable queries against those segments, you'll have to migrate them to a non-EHCC format (either OLTP compression, standard compression, or no compression). This will require space and time that depend on the size of the object and its compression ratio. This decompression can be done on a non-Exadata platform; see the sketch after the link below.
    For more details, see the whitepaper at http://www.oracle.com/technetwork/database/features/availability/maa-wp-dr-dbm-130065.pdf
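    A minimal sketch of finding and converting the HCC segments after opening the standby (the schema, table and partition names are hypothetical):

    -- List partitions still in an HCC format.
    SELECT table_name, partition_name, compress_for
    FROM   dba_tab_partitions
    WHERE  table_owner = 'APP'
    AND    compress_for IN ('QUERY LOW', 'QUERY HIGH', 'ARCHIVE LOW', 'ARCHIVE HIGH');

    -- Convert one such partition to a format the non-Exadata host can query,
    -- e.g. OLTP compression (or NOCOMPRESS).
    ALTER TABLE app.sales MOVE PARTITION sales_2010_q1 COMPRESS FOR OLTP;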

  • Exadata and Out of the Box Software

    Hello
    I would like to know: do Exadata servers come with the 11gR2 software pre-installed?
    What about the Grid Control ?
    Regards
    Senthil

    Exadata comes with 11gR2.
    Oracle ACS will assist with all installation steps to ensure your Exadata Database Machine is up and running before they leave the site.
    The Grid Control plugin will also be installed and is essential for monitoring Exadata. It is currently the only supported plugin for monitoring purposes.
    Oracle ACS does a fine job and is very thorough. Be sure to take as many notes/screen captures as you can while they are present. It will be very beneficial.
    - Wilson
    www.michaelwilsondba.info

  • Exadata and System Statistics

    Hi, there,
    This might be a dumb question, but is it necessary to gather system statistics on Exadata machines?
    I (fairly) recently migrated my Production EDW from a V2 quarter-rack to an X3-2 quarter-rack. On a "normal" system, if I migrated the database to a different (faster) server, I would look at regathering the system statistics.
    Is this something that’s sensible or worthwhile with Exadata?
    Mark

    Hi Mark,
    Before you gather system stats, you can run the following SQL to get your current values.
    SET SERVEROUTPUT ON
    DECLARE
      STATUS VARCHAR2(20);
      DSTART DATE;
      DSTOP DATE;
      PVALUE NUMBER;
      PNAME VARCHAR2(30);
    BEGIN
       PNAME := 'CPUSPEEDNW';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('cpuspeednw                  : '||pvalue);
       PNAME := 'IOSEEKTIM';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('ioseektime in ms            : '||pvalue);
       PNAME := 'IOTFRSPEED';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('iotfrspeed                  : '||pvalue);
       PNAME := 'SREADTIM';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('single block readtime in ms : '||pvalue);
       PNAME := 'MREADTIM';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('multi block readtime in ms  : '||pvalue);
       PNAME := 'CPUSPEED';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('cpuspeed                    : '||pvalue);
       PNAME := 'MBRC';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('multiblock read count       : '||pvalue);
       PNAME := 'MAXTHR';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('max threads                 : '||pvalue);
       PNAME := 'SLAVETHR';
       DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
       DBMS_OUTPUT.PUT_LINE('slave threads               : '||pvalue);
      END;
    /
    Best advice I can give would be to check Doc ID 1274318.1 and search for dbms_stats.
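    For reference, if you do decide to gather, there is an Exadata-aware mode of gathering system statistics (a sketch; check the note above for whether your patch level supports it before running):

    -- Sets system statistics (e.g. MBRC, IOTFRSPEED) to Exadata-aware values.
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');

    -- To back out and return to default (noworkload) statistics:
    EXEC DBMS_STATS.DELETE_SYSTEM_STATS;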
    Regards,
    Tycho

  • Exadata and Oracle VM

    The other question is whether it is supported to install Oracle VM (configured according to the note http://www.oracle.com/technology/tech/virtualization/pdf/ovm-hardpart.pdf) on the nodes of Exadata.
    Thank you very much for the help

    Thank you for your clarification Uwe.
    Indeed I mixed up the two items. Sadly, I had hoped that there was indeed an Oracle VM for Exadata, which would have been logical/practical for an SME organization moving to the Exastack platform.
    Just to check that I understand the current Exastack engineered-systems offerings, I am outlining the salient points:
    1. Exadata is designed (marketed) to run Oracle 11g + RAC exclusively, under Oracle Linux. CPU licensing is satisfied by physically (in hardware) disabling CPUs supplied in the unit.
    2. Exalogic is designed (marketed) for the middleware/application tier, using Oracle VM + Oracle Linux, optionally with the Oracle SOA Suite middleware applications plus third-party applications, and possibly other host OSes supported by Oracle VM.
    If an organization has a web application based on a traditional web server front tier, an application server tier (Java/JEE), and a data tier (Oracle DB), then what are the choices for moving to the Exastack platform?
    1. Moving the web/application server tiers to Exalogic and the data tier to Exadata seems to be how the engineered platform is marketed; however, depending on the application design and usage, this may not justify that level of investment. E.g., if the application is intensive in the application tier, then an Exalogic with a relatively small Oracle DB instance is required; alternatively, if the application is DB-intensive, then an Exadata with a relatively small web/application server tier is required. Hence, perhaps it would be logical to offer such a hybrid "Exastack" model for SMEs starting out, then scale horizontally when volumes demand it:
    E.g., get Exadata for the Oracle 11g + RAC and reserve 2 or more processing units (X3-2) for the middleware (Oracle VM + Oracle Linux) in the same physical rack; when volumes increase, a dedicated Exalogic is added and the middleware applications are migrated to the Exalogic platform, freeing up (scaling up) the Exadata platform.
    In addition, given the licensing restrictions, can Exadata be partitioned into production database instance(s) and development instance(s) on the same physical rack? Note: this might be somewhat worked around, since an HA-demanding business may have a DR site with a second Exadata, which can host the development DB instances during normal operations.
    I feel this is more of a marketing issue than a technical one, but some flexibility in the Exastack configuration would simplify things and lower the affordability bar for SMEs/start-ups.
    Appreciate any views.
    Best regards,
    Jesmond

  • Exadata and smart scans

    Hi,
    I have an Oracle RDBMS 11gR2 that runs on Exadata.
    I have several processes that run full table scans against reasonably "huge" tables (up to a few GB).
    I would expect a "smart scan" to be used to speed up the process.
    The runs have already been going for hours.
    I get execution plans of that kind:
    | Id  | Operation                    | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT             |                           |       |       |  3532K(100)|          |
    |   1 |  LOAD AS SELECT              |                           |       |       |            |          |
    |   2 |   FILTER                     |                           |       |       |            |          |
    |   3 |    HASH JOIN                 |                           | 24634 |   344M|  3532K  (1)| 15:42:08 |
    |   4 |     TABLE ACCESS STORAGE FULL| DPV_97_130556316111576318 | 24634 |   625K|   400K  (2)| 01:46:42 |
    |   5 |     TABLE ACCESS STORAGE FULL| DPV_96_130556316111576318 |   194K|  2724M|  3132K  (1)| 13:55:26 |
    Can someone explain to me how I can check, via a query, whether a smart scan is used?
    If it is not used, can someone explain to me how to "enable" it and under which conditions I should?
    Thanks in advance for any tips.
    Kind Regards

    Hello,
    If the explain plan shows the STORAGE clause in the plan, then the query is eligible for smart scan.
    A query goes to smart scan when the conditions below are met:
    1. Segment size > _small_table_threshold (a hidden init.ora parameter).
    2. The DB buffer cache should not hold more than 50% of the table's data blocks.
    3. Dirty buffers in the DB buffer cache should be less than 25%.
    Smart scan is applied to queries that use full table scans, parallel queries, and index fast full scans. One way to verify it from the database side is sketched below.
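    A sketch using the V$SQL offload columns available in 11gR2 (the sql_id is a placeholder):

    -- Eligible bytes > 0 means the statement could offload; returned bytes show
    -- how much actually came back from the cells via smart scan.
    SELECT sql_id,
           io_cell_offload_eligible_bytes,
           io_cell_offload_returned_bytes
    FROM   v$sql
    WHERE  sql_id = '7ws837zynp1zv';

    -- Session-level counters tell a similar story:
    SELECT n.name, s.value
    FROM   v$statname n JOIN v$mystat s ON s.statistic# = n.statistic#
    WHERE  n.name LIKE 'cell physical IO%';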
    Hope this will be helpful.
    Regards,
    Thimmappa

  • Adaptive Cursor Sharing and OLTP databases

    Hi all,
    I am reading a book about performance that asserts that, for OLTP environments, it is recommended to disable the Adaptive Cursor Sharing feature (setting the hidden parameter _optimizer_extended_cursor_sharing_rel to NONE and CURSOR_SHARING to EXACT). The book recommends this to avoid the overhead related to the feature.
    I know that with this feature you can avoid the bind peeking issue, as it creates more than one execution plan for the same query. So, is it really a good practice to disable it?
    Thanks in advance.

    OK, thanks for pointing that out.
    Getting back to your original question:
    So, is it really a good practice to disable it?
    No, it is not, especially not without the approval of Oracle Support.
    Furthermore, I feel confirmed by this point from Charles Hooper's book review, as well as by his additional points:
    "The book states that in an OLTP type database, 'we probably want to disable the Adaptive Cursor Sharing feature to eliminate the related overhead.' The book then suggests changing the CURSOR_SHARING parameter to a value of EXACT, and the _optimizer_extended_cursor_sharing_rel parameter to a value of NONE. First, the book should not suggest altering a hidden parameter without mentioning that hidden parameters should only be changed after consulting Oracle Support. Second, it is not the CURSOR_SHARING parameter that should be set to a value of EXACT, but the _optimizer_adaptive_cursor_sharing parameter that should be set to a value of FALSE (see Metalink (MOS) Doc ID 11657468.8). Third, the blanket statement that adaptive cursor sharing should be disabled in OLTP databases seems to be an incredibly silly suggestion for any Oracle Database version other than 11.1.0.6 (this version contained a bug that led to an impressive number of child cursors due to repeated executions of a SQL statement with different bind variable values). (page 327)"
    http://hoopercharles.wordpress.com/2012/07/23/book-review-oracle-database-11gr2-performance-tuning-cookbook-part-2/
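    If you want to see whether ACS is actually doing anything in your system before touching those parameters, V$SQL exposes the relevant flags (a minimal sketch):

    -- Bind-sensitive cursors are being monitored by ACS; bind-aware ones have
    -- already been given multiple plans for different bind selectivities.
    SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
    FROM   v$sql
    WHERE  is_bind_sensitive = 'Y' OR is_bind_aware = 'Y';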
