Exadata for OLTP

I was reading about Exadata and am confused about whether it provides any benefit for OLTP databases, where only a minimal number of rows are retrieved via indexes, thereby making Exadata Smart Scan and storage indexes useless. The advantages I can think of are the high-speed flash cache and flash logging features.
But can't these be obtained with any other high-speed machine and fast disks, such as SSDs used as database flash cache (an 11g feature)? Can you shed some light on this topic?
Thanks
sekar

Hi,
migrating to Exadata could be beneficial for an OLTP system: you could fit an entire database of up to 22 TB into the Exadata Smart Flash Cache, and you get other nice things like InfiniBand, Smart Scans (which can be useful for OLTP as well), HCC compression, etc.
It's just that it won't be as beneficial as it is for DSS or mixed systems, and it costs a lot. I think that if you don't have an analytic component on top of your OLTP workload, and if you don't require things like high availability, then you may be better off with a regular Oracle 12c database on SSD storage.
But these are just very basic considerations; the details depend on your requirements. You will need to sit down, calculate the costs of the different options, and compare them.
I would also recommend reviewing the database thoroughly -- it may be possible to achieve the required performance by tuning rather than by hardware upgrades. You could save your company hundreds of thousands of dollars that way.
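If you do go the "regular database on SSD storage" route, the 11g Database Smart Flash Cache feature mentioned in the question needs only two initialization parameters. A minimal sketch, assuming 11.2+ on Oracle Linux or Solaris (the file path and size below are placeholders, not recommendations):

-- Database Smart Flash Cache: use a local SSD/flash device as a second-level buffer cache.
ALTER SYSTEM SET db_flash_cache_file = '/flash/orcl_flash_cache.dat' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE = SPFILE;
-- Restart the instance, then verify:
SELECT name, value FROM v$parameter WHERE name LIKE 'db_flash_cache%';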
Best regards,
  Nikolay

Similar Messages

  • Exadata and OLTP

    hello experts,
    in our environment, OLTP databases (10g, 11g) run as single instances, and we are planning a feasibility analysis on moving to Exadata.
    1) as per Exadata-related articles, Exadata can provide better OLTP performance with the flash cache.
    If we can allocate enough SGA for the application workload, then what is the point of moving to Exadata?
    2) are there any other performance benefits for OLTP databases?
    3) since Exadata comes pre-configured as RAC, will it be a problem for non-RAC databases that have not been tested on RAC?
    In general, how can we conduct an effective feasibility analysis for moving non-RAC OLTP databases to Exadata?
    thanks,
    charles

    Hi,
    1. The flash cache is one of the advantages of Exadata, speeding up your SQL statement processing. Bear in mind that it works at the storage level and should not be compared directly with a non-Exadata machine.
    2. As far as I know, besides faster query elapsed times, we can also benefit from compression (Hybrid Columnar Compression, which is Exadata-specific; a short syntax sketch is below),
    and since the storage is located inside the Exadata machine, the I/O component of your database performance also improves.
    3. You can have a single-node database on Exadata; just point the connection directly at the physical IP instead of the SCAN IP (11g) used for RAC.
    I think the best thing is to project the improvement and cost savings if you migrate to Exadata: assess the processing improvement you will gain, the storage used, and the license costs. Usually, most shops use Exadata to consolidate their separate physical DB boxes.
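    A minimal Hybrid Columnar Compression syntax sketch (HCC requires Exadata storage; the table and column names here are made up for illustration):
    CREATE TABLE sales_archive
    ( sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER(12,2)
    ) COMPRESS FOR QUERY HIGH;  -- or COMPRESS FOR ARCHIVE LOW/HIGH for colder data
    -- Note: conventional (non-direct-path) DML falls back to a lighter compression level,
    -- so HCC suits bulk-loaded, rarely updated data rather than hot OLTP tables.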
    br,
    mrak

  • OCMD for OLTP database ?

    Hi all,
    Can we use OCDM for a standard OLTP database (as a standard data model to standardize the OLTP schema model, e.g. sales, customer, billing), or is OCDM only for data warehouse systems?
    Rgds.
    VP.

    Dear VP,
    Sorry for the delay in answering your note.
    OCDM is normally planned as an Enterprise Data Warehouse that stores the relevant transactions from OLTP systems (Billing, CRM, Customer Care, Trouble Ticket, Point of Sales, Network Management or even Network Element statistics, ...). To use OCDM as a standard OLTP DB, you would in fact use only the foundation layer (atomic granularity) for the applications you want (while automatically benefiting from the analytics part).
    I don't see any strong arguments against your idea, provided a few assumptions and maybe some "warnings":
    - I assume you would develop your applications on top of OCDM. Leveraging an existing application would mean you would have to either adapt OCDM to fit the application or vice versa.
    - The SID certification of OCDM, with over 130 business entities (ABEs) in 6 domains, would ensure that you could map most eTOM processes to your application.
    - You can start in a small area and extend slowly as requirements grow.
    - From a usage perspective, you should make sure the structure fits your expectations around history. A DWH does not like "updates" (for performance reasons); it stores the object and each status change. Hence, you might keep more data than you would like, especially if you start updating events (like customer orders). So you should decide whether you will update or insert only, and plan your hardware accordingly (plus Information Lifecycle Management to drop old or useless data).
    - From a data loading perspective, you will face the same issues as with a standard DWH. OCDM expects clean data, so you must do data quality checks and data cleansing in the staging area (and certainly adapt the default lookup values used by OCDM or provided by the data sources). A simple example: I can store raw CDRs in OCDM, but they will not load if the expected fields are not in the right place or if there are duplicated events, which can easily happen with raw CDRs!
    - From a hardware perspective, depending on how big you plan your applications, Exadata could be ideal because it can support a mixed load of complex and regular queries, on top of benefiting from Hybrid Columnar Compression and other nice features.
    - One advantage is that you get the analytics part in your system by default (segmentation, CLV, sentiment analysis, prepaid and postpaid churn, targeted campaigns, call center statistics, revenue and traffic forecasts, ...), on top of having the tools to further develop your own analytics to support the process.
    - You should also think upfront about dealing with the regular (at least once a year) OCDM version upgrades and about support: Oracle can support OCDM as the product as it is sold, not your application on top of it.
    Feel free to contact the OCDM Product Management team directly (through Oracle CGBU or your tech sales rep) for a specific development agreement if you see further interest in such development.
    Best regards
    Axel.

  • How to specify COMPRESS FOR OLTP on a table in physical model?

    Hi,
    we have licensed Oracle's Advanced Compression and want to use the OLTP compression on some tables. I am looking for a way to specify COMPRESS FOR OLTP on a table in the physical model. So far, I can only set "Data Compression" to YES or NO.
    Are you going to add the "new" compression modes in the next release?
    Thanks,
    Frank
    Version of SQL Developer Data Modeler is 3.1.3.709

    Hi Frank,
    Are you going to add the "new" compression modes in the next release?
    There is support for compression type (including OLTP) in DM 3.3 EA, and you can download it from OTN.
    Philip

  • UNDO Management - Config changes for OLTP and large background processes

    Hi,
    I am running 9i on a 2 node RAC cluster - Sun Solaris.
    I use Automatic Undo.
    The database is currently configured for OLTP and has 2 undo datafiles, 1 GB in size, with undo_retention set to the default (900 seconds).
    I am about to start scheduling a batch job to run each night that will delete approx 1 million records each time it runs.
    To get this to work without the dreaded ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDO01', I have increased the undo datafiles to 5 GB and undo_retention to 4000.
    The question is do I need to worry about switching between these settings - using the smaller values for daytime OLTP and the nighttime settings for the heavy processing?
    What issues might I encounter if I just left the UNDO Management settings in place that I use for the large delete?

    I would say no; leave the settings at the highest level required to accomplish the work of the instance. Once the settings are correct for the instance, you should not have to change them around. They are really maximum settings for the heaviest load on your instance.
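    If you want to sanity-check what the nightly delete actually needs, V$UNDOSTAT (available since 9i) records undo usage in 10-minute buckets. A rough sizing sketch, assuming an 8 KB block size (adjust for your own):
    -- Peak undo consumption and longest-running query over the retained history,
    -- which together drive the undo tablespace size and undo_retention.
    SELECT MAX(undoblks)                       AS max_undo_blocks_per_10min,
           MAX(undoblks) * 8192 / 1024 / 1024  AS approx_mb_per_10min,
           MAX(maxquerylen)                    AS longest_query_seconds
    FROM   v$undostat;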

  • Create Table with Compress for OLTP and error ORA-14464

    Hello,
    I have an Oracle 11.2 database and want to use Advanced Compression.
    I want to create a table:
    CREATE TABLE TD_GE_1990
    ( "name_id" NUMBER(1,0),
    "name_txt" VARCHAR2(100 BYTE)
    ) COMPRESS FOR OLTP;
    But I get:
    SQL Error: ORA-14464: Compression Type not specified
    The "compatible" parameter is set to 11.1:
    SELECT value
    FROM gv$parameter
    WHERE name LIKE '%compatible%';
    11.1.0.0.0
    Do I have to change something in the database?
    Best regards
    Heidi

    14464, 00000, "Compression Type not specified"
    // *Cause: Compression Type was not specified in the Compression Clause.
    // *Action: specify Compression Type in the Compression Clause.
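    As the related thread further down notes, COMPRESS FOR OLTP was only accepted here once the compatible parameter was raised to 11.2.0. A sketch of that fix, assuming you can take an instance restart (raising compatible cannot be rolled back):
    ALTER SYSTEM SET compatible = '11.2.0' SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP
    -- After the restart the original statement should parse:
    CREATE TABLE TD_GE_1990
    ( "name_id"  NUMBER(1,0),
      "name_txt" VARCHAR2(100 BYTE)
    ) COMPRESS FOR OLTP;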

  • Data archival and purging for OLTP database

    Hi All,
    Need your suggestion regarding data archival and purging solution for OLTP db.
    Currently, we are planning to generate flat files from the tables before purging the inactive data, move them to tape/disk for archiving, and then purge the data from the system. We have many retention requirements and conditions governing data archival, so partitioning alone is not sufficient.
    Is there a better approach to archival and purging than this flat-file approach?
    thank you.
    regards,
    vara

    user11261773 wrote:
    Is there any better approach for archival and purging other than this flat file approach?
    FBDA (Flashback Data Archive) is the better option. Check the link below:
    http://www.oracle.com/pls/db111/search?remark=quick_search&word=flashback+data+archive
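    A minimal Flashback Data Archive sketch (11g syntax; the archive, tablespace and table names are only placeholders, and note that in 11g FDA was the separately licensed Total Recall option):
    -- Create an archive with a retention policy, then attach the tables to track.
    CREATE FLASHBACK ARCHIVE fda_5yr
      TABLESPACE fda_ts
      QUOTA 100G
      RETENTION 5 YEAR;
    ALTER TABLE orders FLASHBACK ARCHIVE fda_5yr;
    -- Old row versions are kept automatically and purged after 5 years;
    -- history stays queryable in place:
    SELECT * FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' YEAR);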
    Good luck
    --neeraj

  • Is 10g RAC ready for OLTP

    To achieve high availability by eliminating single points of failure in the following areas, we are thinking of putting our OLTP DBs on a single 10g RAC cluster:
    i) OS/Firmware patches requiring reboots
    ii) Unplanned server failures
    iii) One-off Oracle patches
    We have migrated our DSS systems to 10g RAC (Windows x64). However, in the 9 months since we deployed, we have seen two issues: a single node eviction and multiple node evictions. The single node eviction is supposedly fixed with a patch that requires a cluster-wide shutdown.
    My baseline for OLTP is 8i, where I have taken downtime once in two years to apply Oracle patches and once a year for OS patches, with very rare server failures resulting in DB failover.
    Questions I have:
    a) Is 10g RAC really stable to be used for OLTP?
    b) How is this being designed elsewhere with a view to reduce planned/unplanned DTs
    thanks,
    SM

    > a) Is 10g RAC really stable to be used for OLTP?
    Loaded question as you are implying that until now, RAC has not been stable and not robust enough for OLTP.
    Stability for any system is dependent on:
    - platform h/w
    - storage h/w
    - network h/w
    - o/s s/w
    - application s/w
    - administration
    What RAC buys you is having multiple database instances for a single physical database. Which means that in the worst case, where you are forced to take a platform down for one of the reasons above, the remaining platforms in the cluster should still be available, courtesy of the shared-everything approach.
    But RAC alone is not the answer... there are numerous factors to consider. One of my longest-uptime databases is an Oracle SE server with a 12,000+ uptime. And it is used 24x7 as a data collection platform.
    It went down recently. The cause? Network errors and power failures that resulted in the rack cabinet housing this server being reset.
    I have numerous examples of how unforeseen events caused disaster in a computer room, from dirty electrical power to an automated aircon switchover failing.
    RAC does not solve any of these. What happens when there is a power failure or hardware error in the switch used for the interconnect? Without the nodes being able to communicate with one another, all nodes will evict themselves from the cluster.
    Looking at RAC alone as The Solution to your H/A requirement is a bit naive IMO. Yes, RAC is an excellent and major cog in the wheel of H/A... but there are others too.
    Q. Is 10g RAC really stable to be used for OLTP?
    A. As stable and as robust as you make it to be.

  • Compress for OLTP and ORA-14464

    Hello,
    I have an Oracle 11.2 database and want to use Advanced Compression.
    I want to create a table:
    CREATE TABLE TD_GE_1990
    ( "name_id" NUMBER(1,0),
      "name_txt" VARCHAR2(100 BYTE)
    ) COMPRESS FOR OLTP;
    But I get:
    SQL Error: ORA-14464: Compression Type not specified
    The "compatible" parameter is set to 11.1:
    SELECT value
    FROM gv$parameter
    WHERE name LIKE '%compatible%';
    11.1.0.0.0
    Do I have to change something in the database?
    Best regards
    Heidi

    This post is related to the following thread and was resolved by changing the compatible parameter to 11.2.0:
    http://translate.google.co.in/translate?hl=en&sl=ko&u=http://kr.forums.oracle.com/forums/thread.jspa%3FthreadID%3D1594232%26tstart%3D345&ei=_u7QTKCwKIOfcbqxqdgL&sa=X&oi=translate&ct=result&resnum=2&ved=0CCMQ7gEwAQ&prev=/search%3Fq%3DORA-14464:%2BKompressionstyp%2Bnicht%2Bangegeben%26hl%3Den%26client%3Dsafari%26rls%3Den

  • Oracle compress with option FOR OLTP

    Hi
    I'm trying out compression on Oracle, but for some unknown reason I can't use the option "for oltp" (see below).
    Any ideas?
    SQL> create table ct(x int) compr for oltp
      2  ;
    create table ct(x int) compr for oltp
    ERROR at line 1:
    ORA-00922: missing or invalid option
    SQL> create table ct(x int) compress;
    Table created.
    SQL> drop table ct
      2  ;
    Table dropped.

    There is no COMPR keyword; it's COMPRESS:
    SQL> create table ct(x int) compr for oltp;
    create table ct(x int) compr for oltp
    ERROR at line 1:
    ORA-00922: missing or invalid option
    SQL> create table ct(x int) compress for oltp;
    Table created.
    Cheers Michael

  • Creating Site for OLTP

    Hello Everyone,
    When I try to create a site type for OLTP using the admin console (SMOEAC), the Object Type field in the left screen area is pre-populated with the Employee object type, and it does not let me select the Site object type from the dropdown so that I can create a site for OLTP. In other words, I only see the Employee object type and no other object types in the Object Type dropdown menu.
    Please help me define an object of type Site.
    Thanks.

    Hi Namrata,
    There are two SAP roles that specify the authorization to create or maintain objects within the Administration Console (SMOEAC):
    role SAP_CRM_MWAC_ADMINISTRATOR for employees, sites, organizational units and subscriptions, and role SAP_CRM_MWAC_CUSTOMIZER for replication objects and publications.

  • Difference between " COMPRESS FOR ALL OPERATIONS" and "COMPRESS FOR OLTP"?

    I was looking through Oracle's OLTP Table Compression (11g onwards) documentation as well as online resources to find the syntax and came across two different versions:
    COMPRESS FOR ALL OPERATIONS
    and
    COMPRESS FOR OLTP
    The documentation I looked through didn't mention any alternative syntax, so I was wondering if anyone here might know the difference.
    Thank you!

    The table compression enhancements in Oracle Database 11g Release 1 are as follows (a short verification sketch follows the list):
    The compression clause can be specified at the tablespace, table or partition level with the following options:
    •NOCOMPRESS - The table or partition is not compressed. This is the default action when no compression clause is specified.
    •COMPRESS - This option is considered suitable for data warehouse systems. Compression is enabled on the table or partition during direct-path inserts only.
    •COMPRESS FOR DIRECT_LOAD OPERATIONS - This option has the same effect as the simple COMPRESS keyword.
    •COMPRESS FOR ALL OPERATIONS - This option is considered suitable for OLTP systems. As the name implies, this option enables compression for all operations, including regular DML statements. This option requires the COMPATIBLE initialization parameter to be set to 11.1.0 or higher.
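    As far as I know, the two clauses enable the same OLTP table compression: COMPRESS FOR ALL OPERATIONS is the original 11.1 wording, and the 11.2 documentation renames it to COMPRESS FOR OLTP (the older form is still accepted). A small verification sketch, assuming an 11.2 database (the table names are arbitrary):
    CREATE TABLE t_oltp1 (x NUMBER) COMPRESS FOR ALL OPERATIONS;
    CREATE TABLE t_oltp2 (x NUMBER) COMPRESS FOR OLTP;
    -- Both should show the same compression mode in the dictionary:
    SELECT table_name, compression, compress_for
    FROM   user_tables
    WHERE  table_name IN ('T_OLTP1', 'T_OLTP2');
    -- Expect COMPRESS_FOR = 'OLTP' on 11.2 (reported as 'FOR ALL OPERATIONS' on 11.1).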

  • 11.2 new features for OLTP compression, datapump compression

    Hi All,
    I work on a data warehouse and am looking forward to implementing the new 11.2 OLTP compression features. Some articles I have read tell me that I need a separate license for that. What about Data Pump compression -- do I need a license for that as well?
    I would appreciate it if someone could share any experience/links about this feature.
    I did some testing and it reduced the space by nearly 50%. I hope this will be a great feature to look at.
    Karunika

    If you are working with a data warehouse, why do you want to use the new OLTP compression features? Normally, the older (and free) table compression functionality worked perfectly well for data warehouses.
    I believe that Data Pump compression is part of the Advanced Compression option, which does require additional licensing. Straight table compression has been available for a while, does not require an additional license (beyond Enterprise Edition; I'm not sure if it's available in Standard), and is generally ideal for data warehouses.
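    For reference, a sketch of that free basic compression (the table names are illustrative); it only compresses data loaded via direct-path operations:
    CREATE TABLE sales_fact_hist
      COMPRESS BASIC   -- on 11.1 and earlier the keyword is simply COMPRESS
      AS SELECT * FROM sales_fact WHERE sale_date < DATE '2010-01-01';
    -- Later loads stay compressed only when they are direct-path as well:
    INSERT /*+ APPEND */ INTO sales_fact_hist
    SELECT * FROM sales_fact_stage;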
    Justin

  • Looking for OLTP documentations

    Dear All,
    I would like to see usable (!) documentation on building Oracle OLTP systems. (I would like to see real-world examples, not heroic poems!)
    Please advise.
    Thanks,
    Franky

    Hi!
    I just want to see the methods experienced people use to eliminate speed, maintenance and other problems. This would include data modeling, changes made to the Oracle initialization parameters, etc. I want to see how changes are made and why a particular approach was chosen.
    I have built some OLTP systems so this field isn't new for me, but I want to see and learn new strategies.
    I would be happy if you gave me a brief introduction to your OLTP systems: how and why the data models were made, and how you solved problems during operation.
    Thanks in advance!
    Franky

  • How to restrict the change access in CRM for OLTP orders

    Hi Guru's,
    Please let me know how to restrict change access in CRM for orders that are created in ECC. The ECC orders should be display-only in CRM, not changeable.
    We have orders that are created in ECC and flow to CRM; we need to prevent users from entering change mode for them in CRM, but as of now the CRM system allows change mode for ECC orders, and we end up with errors.
    Is there any additional middleware parameter that needs to be added to the SMOFPARSFA table to get this functionality? Please advise! Thank you for your help.
    Regards
    Suneel

    Hi.
    You can use the PFCG role to control whether a user is able to create, change, delete, or only display a business transaction type.
    Regards.
