Migrating to exadata - DB_BLOCK_SIZE

Hi,
We plan to move our DWH, a 5 TB 11gR2 database running on a Linux machine, to an Exadata quarter-rack RAC machine.
We are currently running with a 32K block size.
Most of our statements use full table scans (FTS), and the number of indexes is small.
We would like your advice on whether we should stick with the 32K block size on the Exadata machine, or install the databases with an 8K block size.
Thanks

Pasalapudi wrote:
The lesser the block size is better as we have many features of exadata like SmartScan, Storage Indexes works on block level.

This is not correct. Storage indexes work on 1MB storage regions, and the block size is irrelevant for that.
Regards,
Greg Rahn | blog | twitter | linkedin

Similar Messages

  • Migrating from Exadata V1 to Exadata V2

    Hello Experts,
    I have a small query. This month-end we are planning to move a 6TB database from Exadata V1 to Exadata V2. Could you please tell me the fastest method for moving such a huge amount of data using Oracle-provided tools? Below are the details:
    Exadata V1:
    OS: Enterprise Linux
    Oracle Version: 11.1
    RAC: YES (8 node)
    DB Size: 6TB
    Exadata V2:
    OS: Enterprise Linux
    Oracle Version: 11.2
    RAC: YES (8 node)
    Please let me know if you need more details about the environment.

    Well, there are many ways to accomplish this task. The "best" one depends on many factors, but probably the most important is allowable downtime.
    If you have a very large window, use the simplest approach you can (Data Pump or CTAS across a DB link). If you have a very tight window, you may have to pursue a more complicated strategy, such as creating a standby or using Golden Gate. I have seen references to using ASM's rebalancing capability to do a (near) zero-downtime migration as well, though we haven't tested it yet (i.e., add new storage to the old ASM disk group, remove the old storage, and let ASM rebalance). In the middle, you may be able to squeeze time by pre-moving historical data and only moving current data during the outage (a lot of systems have the big objects partitioned by time, making this pretty easy to do). Obviously you'd want to bridge the InfiniBand networks between the racks.
    Bottom line for me: do the simplest thing that will fit in the window. I would prefer logical over physical moves due to the flexibility of changing storage characteristics (adding HCC, for example). Ask your Oracle rep for a white paper on migrating to Exadata V2; there is one available, but it doesn't appear to be available except to customers that have purchased an Exadata.
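    As a rough illustration of the "simplest approach" mentioned above, a CTAS across a DB link could look like the following sketch. All names here (link, credentials, TNS alias, table) are hypothetical placeholders, and the HCC and parallel settings depend on your licensing and window:

    ```shell
    # Run on the V2 side; v1_link points back at the V1 database.
    sqlplus system@v2db <<'SQL'
    CREATE DATABASE LINK v1_link
      CONNECT TO migr_user IDENTIFIED BY secret
      USING 'v1db_tns';

    -- Copy one table across. PARALLEL and NOLOGGING speed up the copy,
    -- and COMPRESS FOR QUERY HIGH applies HCC on the way in.
    CREATE TABLE sales_fact
      PARALLEL 8 NOLOGGING
      COMPRESS FOR QUERY HIGH
      AS SELECT * FROM sales_fact@v1_link;
    SQL
    ```

    Repeating this per table (or per partition for the pre-moved historical data) is what makes the logical approach flexible about storage characteristics.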

  • Oracle Database migration to Exadata

    Dear Folks,
    I have a requirement to migrate our existing Oracle Database to Exadata Machine. Below is the source & destination details:
    Source:
    Oracle Database 11.1.0.6 & Oracle DB 11.2.0.3
    Non-Exadata Server
    Linux Environment
    DB Size: 12TB
    Destination:
    Oracle Exadata 12.1
    Oracle Database 12.1
    Linux Environment
    System downtime of 24-30 hours would be available.
    Kindly clarify below:
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    2. Any upgrade activity after migration?
    3. Which migration method is best suited in our case?
    4. Things to be noted before migration activity?
    Thanks for your valuable inputs.
    Regards
    Saurabh

    Saurabh,
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    This would help if you wanted to drop the database in place, as it would allow a standby database to be used (reducing downtime) or a backup and restore to move the database as-is onto the Exadata. It does not, however, give you the chance to introduce changes that could help you on Exadata, such as additional or adjusted partitioning, Advanced Compression, and HCC compression.
    2. Any upgrade activity after migration?
    If you upgrade the current environment first, there would be no additional work. However, if you do not, you will need to explore a few options depending on your requirements and goals for your Exadata.
    3. Which migration method is best suited in our case?
    I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations to explore your migration options, as the best choice depends on a lot of variables that are hard to completely cover in a forum. At a high level, when moving to Exadata I typically recommend setting up the database to utilize the features of Exadata for best results. The Exadata migrations I have done thus far were done using Golden Gate: we examine the partitioning of tables, partition the ones that make sense, and implement Advanced Compression and HCC compression where they make sense. This gives us an environment that fits the Exadata, rather than dropping an existing database in place (though that works very well too). Doing it with Golden Gate eliminates migration issues from the database version difference, as well as other potential migration issues, because it offers the most flexibility. It also keeps your downtime way down and gives you the opportunity to ensure the upgrade/implementation will be smooth by allowing real-workload testing beforehand. Be aware, though, that Golden Gate has a licensing cost, so it may not work for you.
    4. Things to be noted before migration activity?
    Again, I would suggest conversations with Oracle and/or a trusted firm that has done a few Exadata implementations. In short, keep in mind that Exadata is a platform with advantages no other platform can offer: while a drop-in-place migration does work and does bring improvements, it is nothing compared to the improvements you could see if you plan well and implement the features Exadata has to offer. Using Real Application Testing (Database Replay) and Flashback Database will allow you to implement the features, test them with a real workload, and tune well before production day, so you can be nearly 100% confident that you have a well-running, tuned system on the Exadata before going live. Golden Gate lets you keep an in-sync database while running many workload replays on the Exadata without losing sync, giving you the time and ability to test different partitioning and compression options. Very nice flexibility.
    Hope this helps...
    Mike Messina

  • EBS database migrate to exadata server.

    Hi Experts,
    My EBS database is 11.2.0.3. I'm going to migrate EBS database to exadata server. Please advise what patches need to be applied on exadata server for EBS database migration?
    Thanks & Regards,

    Hello Angela,
    I think what you're looking for is in My Oracle Support document ID 1392527.1 "Oracle E-Business Suite Release 11i & R12 Patches Required with Oracle Database 11g Release 2 (11.2.0) on Exadata".  It has a detailed list of patch requirements for different combinations of EBS and database versions.
    Cheers,
    Marc

  • EBS Instance migration to Exadata Servers

    Hi Experts,
    Kindly note that we are in the process of migrating our EBS instance from OEL 6.4 64-bit to an Exadata server. The proposed target OS on the Exadata server is OEL 6.4. We need to:
    a. convert the non-ASM EBS DB to ASM on the target;
    b. migrate the instance to Exadata.
    Please suggest me the approach and notes.
    Regards
    Mohammed.Abdul Muqeet

    Migrating an Oracle E-Business Suite Database to Oracle Exadata Database Machine (Doc ID 1133355.1)
    Since your source database is on Linux (64-bit) and your target database will be on Exadata (also Linux 64-bit), you can use RMAN, rconfig, or DBCA to migrate the database.
    A. Using RMAN, you can duplicate the database from non-ASM to ASM on Exadata.
    B. Using DBCA, you can create a database template and then create the database on Exadata ASM.
    C. Similarly, rconfig can also be used.
    RMAN is the most widely used method to migrate a database to Exadata when the source database is also on Linux 64-bit.
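    As a rough sketch of the RMAN route, an active-database duplication into ASM might look like this. The service names and disk group are hypothetical placeholders; verify the exact procedure against Doc ID 1133355.1 before use:

    ```shell
    # The auxiliary instance on the Exadata side is started NOMOUNT first,
    # with password file and TNS entries already in place.
    rman target sys@ebsprod auxiliary sys@ebsexa <<'RMAN'
    DUPLICATE TARGET DATABASE TO ebsexa
      FROM ACTIVE DATABASE
      SPFILE SET db_create_file_dest='+DATA'
      NOFILENAMECHECK;
    RMAN
    ```

    Setting db_create_file_dest to the ASM disk group is what handles the non-ASM-to-ASM conversion: RMAN creates the new datafiles as Oracle Managed Files inside +DATA.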
    Regards
    Mohammed.A.Muqeet

  • Migration to Exadata V2

    For my birthday, I'm getting two 1/2-rack V2s (well, it is my birthday and they did sign the contract today :-)
    I'm looking for any tips/hints/papers/guidelines on migrating our current data warehouse. In particular, looking for any experience with "staging" the migration, e.g. initial migration, followed by tips/techniques on eliminating summary tables/MVs.
    One of our goals is to reduce the number of layers between the fact/dimensions and the end user. But also looking at how best to move current database and quickly take advantage of HCC.
    TIA!

    I would recommend starting with a thorough review of the docs on Smart Scan and Hybrid Columnar Compression.
    My OOW presentation on HCC is here:
    http://www.morganslibrary.org/presentations.html
    scroll down to "OpenWorld 10/09"
    I have a copy of Kevin Closson's slides there too.
    There is a good argument to be made for indexes being obsolete. Consider what you read, and that thought, when deciding what to build and what to leave behind.
    To help you further, for example advising on HCC, would require knowing how your data will be used. Will anything be updated? How much is considered archival and how much will be accessed often for DSS purposes? Remember there are different compression levels and you need to use them appropriately.

  • E-Business Suite R12.1 on Exadata with Database Rel 12c - Upgrade and Migrate, or Migrate and Upgrade

    Given:
    E-Business Suite R12.1 running against non-RAC database release 11gR2
    Aspiration:
    E-Business Suite R12.1 running against database release 12c RAC on Exadata
    In the context of Oracle best practices, what would be a preferred approach for the database tier to meet the above aspiration, i.e. (a) Upgrade database on source and then migrate to Exadata OR (b) Migrate database to Exadata and then upgrade ?
    Appreciate thoughts from community members/Oracle support.
    Thanks,
    Rakesh

    Rakesh,
    It is necessary to refine Srini's statement:
    EBS does not need to be "certified" on Exadata.  See:
    Running E-Business Suite on Exadata V2
    https://blogs.oracle.com/stevenChan/entry/e-business_suite_exadata_v2
    E-Business Suite 11i, 12.0, and 12.1 are certified with Database 12.1.0.1.
    E-Business Suite 12.2 will be certified with Database 12.1.0.1 soon.
    Regards,
    Steven Chan
    Applications Technology Group Development

  • Migrate Oracle Database between Exadata.

    What are the ways to migrate databases from one Exadata to another Exadata server?

    The exact migration method for your scenario will depend on many factors: DB size, available downtime (or no downtime), source and target DB versions, etc. Please share the details of your exact requirements. In general, you can go for either a logical migration or a physical migration.
    Logical migration methods-
    1. Export import using data pump
    2. Create table as select (CTAS)
    3. Golden Gate etc.
    Physical migration methods-
    1. RMAN backup restore
    2. Data Guard
    3. ASM rebalance etc.
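    As a minimal sketch of logical method 1, a Data Pump export/import could look like the following. The schema, directory object, and file names are hypothetical placeholders, and COMPRESSION=ALL assumes an Advanced Compression license:

    ```shell
    # On the source Exadata: parallel, compressed schema-level export.
    expdp system SCHEMAS=dwh DIRECTORY=dp_dir DUMPFILE=dwh_%U.dmp \
          PARALLEL=8 COMPRESSION=ALL LOGFILE=dwh_exp.log

    # On the target Exadata, after sharing the dump files (e.g. via ZFS/NFS):
    impdp system SCHEMAS=dwh DIRECTORY=dp_dir DUMPFILE=dwh_%U.dmp \
          PARALLEL=8 LOGFILE=dwh_imp.log
    ```

    The %U wildcard lets the parallel workers write and read multiple dump files, which matters once PARALLEL is above 1.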
    I would recommend going through the following whitepaper for more details. It talks about migration to Exadata, but most of the methods remain the same even for an Exadata-to-Exadata migration. In an exa-to-exa migration, you get the benefit of connecting through InfiniBand or using ZFS storage for quick sharing of backups, and you don't need to worry about endian conversion.
    http://www.oracle.com/au/products/database/migration-to-exadata-whitepaper-1-129592.pdf
    Thanks,
    Abhi

  • Need advice migrating from AIX 7 filesystems to Exadata Linux ASM - Large DBMS

    We are using 11.2.0.3 and 11.2.0.4 databases on AIX 7.1 using AIX filesystems. We have some 2TB databases and some much smaller; about 50 production and 200 non-production databases. We are migrating to Exadata 4 with Linux. What is your advice on the method of migration that should be used? We may be able to take some outage time to do this.

    I echo the data pump export/import recommendations. I've used data pump several times to migrate databases to Exadata, including an environment with a few DBs on AIX Power PC to Exadata last year. If you can take downtime, it is the simplest, most flexible and least risky method, and if you put a little thought and extra effort into it, it can still be very performant. On Exadata it's good to set up the environment according to Oracle's published best practices, which usually means some configuration changes from your source. Data Pump allows you to set this up first and have it ready to go, then do the migration into a properly configured database. You can also put the source DB into read-only mode while the migration takes place, if that helps the downtime requirements.
    Some suggestions to maximize performance and limit downtime:
    Consider using DBFS file system on the Exadata, and then mount it using NFS to your source DB servers, for the data pump file location. This may take a little longer on the export, but avoids having to do a separate copy of the files over the network afterward and can make up the time. Once on Exadata, importing off the local DBFS can really perform well.
    Use parallelism with data pump to speed up the export and import. The degree will need to be determined based on your CPU capacity, but parallelism will speed up the migration dramatically.
    If you're licensed for compression - use the compression with Data pump to minimize the file size.
    Precreate all your tablespaces first, and possibly even the schemas - this goes back to setting things up according to Exadata best practices. You can potentially use HCC and other things on the Exadata tablespaces if you so choose. You can always use the data pump mapping if you want to change a few things about the tablespace names and such from the source.
    If you're really trying to maximize performance and minimize downtime, you can spend some time pulling the DDL for your indexes and constraints out of the source and scripting it. Then export only the data, not the indexes and constraints, and after the data is imported, use your DDL scripts with high degrees of parallelism to create the indexes and constraints. Don't forget to alter the index objects afterward to remove the parallelism, so as not to leave a bunch of high-parallel indexes in place. This usually performs much faster than letting data pump do it.
    Test well, and look for objects that don't migrate correctly or well with data pump and potentially use SQL scripts to bring them over manually.
    Look for opportunities with some objects, for example meta data or DDL that doesn't change, to pre-create on Exadata before taking the downtime and starting the migration.
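    The index-DDL extraction described above can be sketched like this (the schema and index names are stand-ins for your own):

    ```shell
    sqlplus / as sysdba <<'SQL'
    -- Script out index DDL before the data-only export.
    SET LONG 1000000 PAGESIZE 0 LINESIZE 200
    SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner)
      FROM dba_indexes
     WHERE owner = 'DWH';

    -- After rebuilding with a high PARALLEL degree on the target,
    -- dial each index back down, for example:
    -- ALTER INDEX dwh.sales_fact_ix NOPARALLEL;
    SQL
    ```

    Spooling that output to a file gives you the rebuild script; add the PARALLEL clause when you run it on the target, then remove it as noted.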
    HTH,
    Kasey

  • Oracle Client 32 bit installation on Exadata Machine

    Hi,
    We are starting our migration to exadata next month.
    One of the issues we have regards our Informatica ETL tool. Our application is licensed for 32-bit.
    The database repository of this tool is currently running on Red Hat Linux 5.5 64-bit with the 11.2.0.3 RDBMS.
    We had to install the 32-bit Oracle client software in order to allow the application to connect to the database.
    Is it possible to install the 32-bit Oracle client on Exadata? If not, what would you suggest?
    Best Regards

    I can't speak for Informatica, but it should be able to connect to the database over the network, so the database server OS wouldn't matter in that case. That is, if Informatica runs on a different machine running 32-bit Linux, it can connect over the network to the Exadata database server node. If Informatica is required to run on the database server directly, you'd have to ask them how they can support 64-bit Linux (or you may have to modify or add to your license).

  • Exadata and OLTP

    hello experts,
    In our environment, OLTP databases (10g, 11g) run in single-instance mode, and we are planning a feasibility analysis on moving to Exadata.
    1) As per Exadata-related articles, Exadata can provide better OLTP performance with flash cache.
    If we can allocate enough SGA for the application workload, then what is the point of moving to Exadata?
    2) Any other performance benefits for OLTP databases?
    3) Since Exadata is pre-configured RAC, will it be a problem for non-RAC databases that have not been tested on RAC?
    In general, how can we conduct an effective feasibility analysis for moving non-RAC OLTP databases to Exadata?
    thanks,
    charles

    Hi,
    1. Flash cache is one of the advantages of Exadata; it speeds up your SQL statement processing. Bear in mind that it is done at the storage level and should not be compared directly with a non-Exadata machine.
    2. As far as I know, besides faster query elapsed times, we can also benefit from compression (Hybrid Columnar Compression, which is Exadata-specific). Also, since the storage is located inside the Exadata machine, the I/O overhead on your database performance decreases.
    3. You can have a single-node database in Exadata; just set the connection to use the physical IP directly instead of the SCAN IP (11g) used for RAC.
    I think the best approach is to project the improvement and cost savings if you migrate to Exadata: assess the processing improvement you will gain, the storage used, and the license cost. Usually, most shops use Exadata to consolidate their different physical DB boxes.
    br,
    mrak

  • Reg: Exadata and /*+ FULL */ hint -

    Hi Experts,
    Recently, our database got migrated to Exadata environment, and a DBA told me that using the /*+ FULL */ hint in the query increases the query performance.
    Doubt -
    1) Does it actually enhance performance?
    2) I read some articles and got some information that Exadata does some kind of "Smart Scan" and "Cell Offloading" which makes the query efficient. But how does FULL hint contribute here?
    This link talks about it, though I'm not sure it's correct: Some Hints for Exadata SQL Tuning - Part III - SQL Optimizer for Oracle - Toad World
    Please share your thoughts and advise.
    Thanks and Regards,
    -- Ranit
    ( on Oracle 11.2.0.3.0 - Exadata )

    Ranit -
    Lots of good advice given by others. A little more to add to the comments already made...
    Using a FULL hint as a general tuning rule on Exadata would not be a good idea, just as the sometimes-proposed notion of dropping all indexes on Exadata to improve performance is not a good idea. As Mohamed mentions, a key performance optimization for Exadata is the smart scan, which requires direct path reads. Pushing for smart scans is what drives these types of ideas, because, other than the index fast full scan, index scans will not smart scan. However, smart scanning isn't always faster: OLTP-type queries that are looking for one or two rows out of many are usually still faster with an index, even on Exadata. If you find that a hint like FULL does improve a query's performance, then, as with hints in general, it's better to determine why the optimizer is not picking the better execution plan (a full table scan in this case) in the first place, and resolve the underlying issue.
    What you will probably find is you are over-indexed on Exadata. If you have control of the indexes in your environment, test by making certain indexes invisible and seeing if that helps performance. Indexes that were created to eliminate a percentage, even a large percentage, of rows, but not almost all rows for queries are candidates to be dropped. You definitely want to tune for direct path reads.
    This is done by doing the index evaluations as described, making sure your stats are accurate and up-to-date, and, as mentioned by Franck, gathering the Exadata system stats, as this is the only thing that makes the optimizer Exadata-aware. Also, especially if you are running a data warehouse workload, look into using parallelism. Running queries in parallel, often even with a degree as low as 2, will prompt the optimizer to favor direct path reads. Parallelism does need to be kept in check; look into using DBRM to help control it, possibly even enabling parallel statement queuing.
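    The two suggestions above (Exadata-aware system stats and index evaluation) translate into something like this sketch. The index name is a stand-in, and the 'EXADATA' mode of GATHER_SYSTEM_STATS only exists on sufficiently patched 11.2 releases, so check MOS for your exact version first:

    ```shell
    sqlplus / as sysdba <<'SQL'
    -- Gather Exadata-aware system statistics for the optimizer.
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');

    -- Trial-drop a suspect index without actually dropping it:
    ALTER INDEX dwh.sales_fact_ix INVISIBLE;
    -- ...re-run the workload, compare plans and elapsed times, then
    -- either drop the index for real or put it back:
    ALTER INDEX dwh.sales_fact_ix VISIBLE;
    SQL
    ```

    Invisible indexes are still maintained on DML, so this test changes only what the optimizer can use, not the data.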
    Hopefully these will give you some ideas of things to look at as you enter the realm of SQL Tuning on Exadata.
    Good luck!
    -Kasey

  • Exadata database server on solaris

    Is the Solaris OS running on the Exadata DB server big endian or little endian? I need this answer to decide on endian-conversion requirements while migrating to Exadata.

    Hi Tycho,
    I don't think he has a Solaris Exadata; he just wants to know whether, if he gets one, he can migrate directly big endian to big endian. Unfortunately, endianness is a property of the chip architecture and not of the OS, hence Intel x86 chips have always been little endian, unlike any of the RISC architecture chips.
    I think the only big-endian Intel chips were probably the XScale, the i860 (Intel i860 - Wikipedia, the free encyclopedia), the i960, and the Itaniums.
    See Endianness - Wikipedia, the free encyclopedia for more endian goodness.

  • Exadata for OLTP

    I was reading about Exadata and am confused about whether it provides any benefit for OLTP databases, where only minimal rows are retrieved using indexes, thereby making Exadata smart scan and storage indexes useless. The advantages I can think of are the high-speed flash cache and flash logging features.
    But can't this be obtained by using any other high-speed machine with high-speed disks, like SSDs used as database flash (an 11g feature)? Can you shed some light on this topic?
    Thanks
    sekar

    Hi,
    migrating to Exadata could be beneficial for an OLTP system: you could fit an entire database of up to 22 TB into the Exadata Smart Flash Cache, and you get other nice things like InfiniBand, smart scans (which can be useful for OLTP as well), HCC compression, etc.
    It's just that it won't be as beneficial as for DSS or mixed systems, and it would cost a lot. I think that if you don't have an analytic component on the top of your OLTP, and if you don't require things like High Availability etc. then you may be better off with a regular Oracle 12c database on SSD storage.
    But these are just very basic considerations, details depend on your requirements. You will need to sit down and calculate costs for different options, then compare them.
    I would also recommend to review the database thoroughly -- it could be possible to achieve required performance by tuning, not by hardware upgrades. You could save your company hundreds of thousands of dollars if you do that.
    Best regards,
      Nikolay

  • Java software on Exadata

    Hi
    Our application has Java code:
    1) In a process running on the same machine as the database [we do that because the CPU usage of this code is marginal vs. that of the DB]
    2) In stored procedure started by oracle scheduler
    If we migrate to Exadata, can we still run this code?
    B.R.

    Sure you can, as long as your code runs on Intel CPUs under Linux (being Java, this should be no problem).
