Exadata version

How can I tell the version of the Exadata machine I'm connected to?
I mean whether it's an X2, X3-2, X3-8, etc.
example: machinemodel=X2-2 Full rack
A file named config.dat is mentioned in this doc:
http://www.oracle.com/technetwork/articles/oem/exadata-commands-part2-402442.html
but I could not find that file on the server I'm connected to (a DB host).
Is there any command or file I could use to get the machine model?
thanks

One option would be to run "grep OneCommand /opt/oracle.SupportTools/onecommand/preconf.csv" from the first compute node. It **should** tell you what you're working with. For example, here's what's on my X2-2 half rack:
[enkdb01:root] /opt/oracle.SupportTools/onecommand
grep OneCommand /opt/oracle.SupportTools/onecommand/preconf.csv
# OneCommand,124,X2-2 HALF RACK,4,7
You could also just check the hosts file (possibly) to see how many cells and compute nodes are in there, and look at something like the CPU/memory totals from the compute node. For CPUs: if /proc/cpuinfo shows 16 CPUs, it's a V2; 24 CPUs, it's an X2; 32 CPUs, it's an X3. If free -g shows 72GB, it's a V2; 96GB or 144GB, it's an X2; 128GB or 256GB, it's an X3. If it shows 1TB or 2TB, it's an X2-8 or X3-8. The difference between an X2-8 and an X3-8 is on the storage layer, so you'd have to look there to check.
Hope this helps!
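For convenience, the CPU/memory heuristic above can be sketched as a small shell function. The cut-offs are just the figures quoted in this answer, nothing official, and an X2-8 vs X3-8 still needs a look at the storage layer:

```shell
#!/bin/sh
# Guess the compute-node generation from CPU count and memory (GB).
# In real use you'd feed it:
#   cpus:   grep -c ^processor /proc/cpuinfo
#   mem_gb: free -g | awk '/Mem:/ {print $2}'
guess_model() {
    cpus=$1
    mem_gb=$2
    if [ "$mem_gb" -ge 1000 ]; then
        echo "X2-8 or X3-8 (check the storage layer)"
        return
    fi
    case "$cpus" in
        16) echo "V2"   ;;
        24) echo "X2-2" ;;
        32) echo "X3-2" ;;
        *)  echo "unknown" ;;
    esac
}

guess_model 24 96    # X2-2 (matches the half rack above)
guess_model 16 72    # V2
```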

Similar Messages

  • Is there a way to create different diskgroups in exadata?

    We have a need to have some other diskgroups other than +DATA and +RECO.
    How do we do that? Our Exadata version is X3.
    Thanks

    user569151 -
    As 1188454 states, this can be done. I would first ask why you need to create additional disk groups beyond the DATA, RECO and DBFS disk groups created by default. I often see Exadata users question the default disk groups and want to add more, or change the disk groups to follow what they've previously done on non-Exadata RAC/ASM environments. However, usually the DATA and RECO disk groups are sufficient and allow for the best flexibility, growth and performance. One reason to create multiple disk groups could be wanting two different redundancy levels for data disk groups - to have a prod database on high redundancy and a test database on normal redundancy, for example; but there aren't many reasons to change it.
    To add disk groups you will also need to re-organize and add new grid disks. You should keep the grid disk prefixes and corresponding disk group names equivalent. Keep in mind that all of the Exadata storage is allocated to the existing grid disks and disk groups - this is needed to keep the necessary balanced configuration and maximize performance. So adding and resizing grid disks and disk groups is not a trivial task if you already have DB environments running on the Exadata, especially if you do not have sufficient free space in DATA and RECO to allow dropping all the grid disks in a failgroup - because that would require removing data before doing the addition and resize of the grid disks. I've also encountered problems with resizing grid disks that end up forcing you to move data off the disks - even if you think you have enough space to allow dropping an entire fail group.
    Be sure to accurately estimate the size of the disk groups - factoring in the redundancy, fail groups and space reserved to handle cell failure - as well as the anticipated growth of data on the disk groups. If you run out of space in a disk group, you will need to either go through the whole process of resizing the grid disks and disk groups again, or purchase an Exadata storage expansion or additional Exadata racks. This is one of the reasons why it is often best to stick with just the existing DATA and RECO.
    To add new grid disks and disk groups and resize the others, become very familiar with the information in, and follow the steps given in, the "Resizing Storage Griddisks" section of Ch. 7 of the Exadata Machine Owner's Guide, as well as the information and examples in MOS note "Resizing Grid Disks in Exadata: Examples (Doc ID 1467056.1)". I also often like to refer to MOS note "How to Add Exadata Storage Servers Using 3 TB Disks to an Existing Database Machine (Doc ID 1476336.1)" when doing grid disk addition or resize operations. The use case may not match, but many steps given in this note are helpful, as it discusses adding new grid disks and even creating a new disk group for occasions when you have cell disks of different sizes.
    Also, be sure you stay true to the Exadata best practices for the storage as documented in "Oracle Exadata Best Practices (Doc ID 757552.1)". For example, the total number of griddisks per storage server for a given prefix name (e.g. DATA) should match across all storage servers where that prefix name exists. Also, to maximize performance you should have each grid disk prefix, and corresponding disk group, spread evenly across each storage cell. You'll also want to maintain the fail group integrity, separating fail groups by storage cell so that the loss of a cell doesn't lose data.
    Hope that helps. Good luck!
    - Kasey
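To make the grid disk steps a bit more concrete, here is a dry-run sketch that only prints the dcli/cellcli commands you would review before running anything. The prefix, size and cell_group file are all hypothetical examples, and the full procedure (including resizing the existing grid disks first) is in the documents cited above:

```shell
#!/bin/sh
# Dry run only: print the commands, don't execute them.
# All names and sizes below are made-up examples.
PREFIX=DATA2          # hypothetical new grid disk prefix / disk group name
SIZE=100G             # hypothetical size per grid disk
CELL_GROUP=cell_group # file listing the storage cells, one per line, for dcli

emit() {
    echo "dcli -g $CELL_GROUP -l celladmin cellcli -e \"$1\""
}

emit "list griddisk attributes name,size,asmDiskGroupName"
emit "create griddisk all harddisk prefix=$PREFIX, size=$SIZE"
```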

  • Can ACFS be installed on My Exadata ?

    According to the article "Can ACFS be installed on Exadata? (Doc ID 1326938.1)",
    which applies to:
    Oracle Exadata Storage Server Software - Version 11.1.3.1.0 to 11.2.3.1.0 [Release 11.1 to 11.2]
    Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.3 [Release 11.2]
    ACFS is not supported on Exadata storage and there are no plans to provide it.
    You should look at DBFS or a NAS filer when using Exadata Database Machine
    My Exadata version is 11.2.3.2.1.
    Can ACFS be installed on my Exadata?

    Hello 1515213,
    The short answer is no: although the MOS note has not been updated recently, the status of ACFS on Exadata has not changed.
    As per the note, consider using DBFS or an external NAS device.
    HTH!
    Marc

  • How to extract Exadata .dmp files into older versions of Oracle

    Hi, our customer has provided Exadata .dmp files (HCC compressed) and I don't have access to Oracle Exadata - are there ways or utilities to extract these dumps into older versions of Oracle, or even Oracle Express?
    Thanks/Prasad

    Hi,
    To export from Exadata (DB 11gR2) and import into an older version of Oracle (e.g. 10g),
    you have to export with the 10g exp utility and import with the 10g imp utility.
    E.g. use an Oracle 10g client to connect to Exadata, run exp, then imp.
    BR
    Sami
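An alternative worth knowing about (this assumes your customer can re-export, which the thread doesn't confirm): Data Pump on the 11g side can write a dump file an older release can read via the VERSION parameter, and as far as I know the rows are decompressed by the source database during export, so HCC on the source isn't a blocker for the dump file itself. The sketch below only prints the commands; the connect strings, directory and file names are made up:

```shell
#!/bin/sh
# Print (don't run) hypothetical expdp/impdp commands. VERSION=10.2
# makes the 11g export file readable by a 10.2 impdp.
echo "expdp app/pw@exadb DIRECTORY=dp_dir DUMPFILE=app.dmp SCHEMAS=app VERSION=10.2"
echo "impdp app/pw@olddb DIRECTORY=dp_dir DUMPFILE=app.dmp SCHEMAS=app"
```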

  • Can customers rebuild an Exadata machine with the latest stack versions?

    There’s a possibility that we’ll be purchasing two new Exadata machines (X3) in the near future. I'd be getting very excited if I wasn't already entirely swamped :)
    If it happens, we’ll be asking Oracle to install the latest and greatest of the software stack when they arrive on-site with our new toys. Currently, this means:
    <i>OEL: 5.7 (with latest kernel)
    ESS: 11.2.3.2.1 (write-back FC, mmmmm!)
    RDBMS/GI: 11.2.0.3.17</i>
    Our current Production database is on a V2 machine and has the following versions of the stack:
    <i>OEL 5.5
    ESS 11.2.2.3.2
    RDBMS/GI 11.2.0.2 BP7</i>
    We are hoping, once the dust settles, that we can re-purpose our existing V2 machine as a Development environment. However, in order for that to be of any use, we need the software stack to match what will be running in Production on the X3s.
    As far as I understand, the upgrade path is as follows (as per 888828.1)
    <i>Upgrade the O/S to OEL 5.7 and the latest kernel on storage cells and comp nodes
    Upgrade the firmware on the IB switch to 1.3.3-2 (which we already have)
    Upgrade the Exadata Storage Server on the storage cells and comp nodes to 11.2.3.2.1
    Install the 11.2.0.3.17 GI and RDBMS binaries
    Upgrade ASM from 11.2.0.2 to 11.2.0.3.17
    Install the 11.2.0.3.17 RDBMS binaries
    Make/move/restore/copy Development onto the newly-upgraded V2 machine.</i>
    I’m wondering whether it’s better for us to upgrade the V2 machine from our current versions of the stack to the latest or whether it’s better to attempt a rebuild?
    As a customer, are we able to rebuild the stack ourselves with the new software or do we have to have Oracle come in and go through their installation process (we are putting a different version of the stack on than we presently have)?
    Mark

    frits hoogland wrote:
    I don't understand the answers.
    A V2 Exadata system (and up: X2, X3) is fully supported up to the newest Exadata software releases, so you can just upgrade. Of course you need to check with MOS 888828.1 what path to take (not all software might be upgradable to the latest release in one go). No need to puzzle, just upgrade.
    I'm fairly sure that we would be able to upgrade - in fact, when we weren't entertaining a hardware upgrade earlier in the year, I had planned out an upgrade path from our current versions to what was the current stack before the FlashCache became write-able.
    We didn't have much of a choice at this point because the V2 was planned to be our Production environment for the foreseeable. Our upgrade was possible, but would have been relatively cumbersome as we would have had to upgrade the O/S, the ESS on cells/nodes, the GI and then the RDBMS in chunks to satisfy the various pre-requisites.
    My question was whether it was possible/better/cleaner to simply rebuild the whole box with the latest software stack instead of upgrading now that the V2 environment is likely to be designated for Development if we get the new hardware and there isn't the associated pressure of it being a Production box.
    >
    If you want to change the space ratio between DATA and RECO, the easy path is to delete all the databases, remove the DATA and RECO diskgroups, remove the grid disks on the cells, create them again, and create the diskgroups on top of them. This can also be done online by dropping the griddisks per cell/storage server in ASM, recreating them with different sizes, and getting them back into ASM.
    I believe that Tycho said he had to choose between upgrading the stack AND changing the space ratio between his diskgroups OR just rebuilding the system from scratch: and he chose to rebuild.

  • Where is the proof of Exadata load rates of 5Tb/hour

    I ask this because we can't come close to this. Is it just a theoretical exercise? Or something that is very restricted?
    i.e. if you load 5TB of data consisting of 1 column of 10 characters, can you complete this in 1 hour?
    What are the real-world expectations for load times into the DB (assuming the data already resides in DBFS - if that makes things any better)?
    Oracle says ...
    Optimized for real-world data loading
    •Only Oracle provides multi-version read consistency with the ability to load at up to 5TB/hr
    Pg 41.
    http://www.google.com/url?sa=t&source=web&cd=11&ved=0CBEQFjAAOAo&url=http%3A%2F%2Fioug.itconvergence.com%2Fpls%2Fapex%2FNJOUG.download_my_file%3Fp_file%3D521.&rct=j&q=exadata%205tb%20dbfs&ei=Iwp3TJX_O83PngeSqM2dCw&usg=AFQjCNGD6h7WdPg9TsV2_TO0HEVfmt3ZVA&cad=rja
    Daryl.

    Hi Daryl,
    -exadata full rack
    - dbfs
    time sqlldr direct=true control=random.ctl parallel=true
    Let me try ...
    So my test ..
    8.2G in 400s, or 73G/hr
    You have a full rack!!!!
    You should use the whole "power" that you have for processing this load. First of all, I suggest you change the load strategy from SQL*Loader to ITAS (insert /*+ APPEND */ ... select ...) or CTAS (create table ... as select ...) reading from external tables. The main reason is that even if you are using the parallel=true clause in SQL*Loader, we are talking about only one server; it's not dividing the load among all servers, using slaves on all the servers available for it.
    A "starting shot" on the new load strategy would be changing the degree of the external table to default (create or alter the external table to parallel), and likewise for the table that will be loaded. After that, before starting the load, you should enable parallel DML or parallel DDL (depending on what you've chosen... in the case of CTAS "alter session force parallel ddl", for ITAS "alter session force parallel dml") and verify that the plan is using parallelism for both operations, load and select.
    As I don't have an idea of the workload of your machine... if the default degree consumes a lot of CPU time on those servers and would impact the behaviour of other sessions, I'd advise you to test the load with 32, 64 and 128 (changing the parallelism of the external table and also the loading table to those values).
    Test this and give us the results... My guess is the results will be better than what you can imagine... :)
    Regards,
    Cerreia
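The ITAS approach described above, as a sketch (all object names are hypothetical, and the degree of 64 is just one of the values suggested):

```sql
-- Hypothetical names; degree 64 is just one of the values suggested above.
ALTER TABLE ext_random PARALLEL;   -- external table over the flat files
ALTER TABLE target_tab PARALLEL;
ALTER SESSION FORCE PARALLEL DML PARALLEL 64;

INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM ext_random;
COMMIT;

-- For CTAS instead:
-- ALTER SESSION FORCE PARALLEL DDL PARALLEL 64;
-- CREATE TABLE target_tab PARALLEL AS SELECT * FROM ext_random;
```

Check the execution plan before committing to a long run, to confirm both the select and the insert sides really are parallel.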

  • Slow query results for simple select statement on Exadata

    I have a table with 30+ million rows in it which I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same: 10k every 6 seconds or so.
    I ran an execution plan it returns just this:
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576       
         1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
    I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) which can limit performance by connection or user, but if there is, what should I look for and how can I check it?
    The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before; I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?
    Edited by: k1ng87 on Apr 24, 2013 7:58 AM

    k1ng87 wrote:
    I've noticed the same querying speed using Toad (export to CSV)
    That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), then letting Toad format the data into CSV (processing the data adds a little bottleneck) and then write the data to another hard disk (more disk I/O = more bottleneck).
    I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.
    and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?
    Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.
    Is that even possible, as our system resides off site, so the traffic is going through multiple networks?
    Ouch... yes, that could certainly be responsible.
    I don't think it's the network though, because when I run both at the same time, they both are still querying at about 10k rows every 6 seconds.
    I don't think your performance measuring is accurate. What happens if you actually do the cube in Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine)?

  • Connectivity issue in RAC(exadata)

    Team,
    oracle version : 11gr2
    2 node rac
    exadata x2-2
    The application team is complaining about a connectivity issue; they are telling us they get a connection only after 5 to 8 attempts from the application.
    application logfile errors are below
    SQL Error: 17002, SQLState: 08006
    [o.h.u.JDBCExceptionReporter:234] : IO Error: The Network Adapter could not establish the connection
    Please can anyone guide me - as a DBA, what are the things we need to check?
    Thanks
    Prakash GR

    Hi,
    Thanks for the information. I have one question: which IP on the database server first handles the application's connection request -
    is it the SCAN IP, VIP, host IP or local listener IP? And should all the DB server IPs (SCAN IP, VIP, host IP and local listener IP) be pingable from the application server?
    The application users also say:
    "We are sometimes able to connect, but when we try 5-6 times we are hardly able to connect once."
    As I am new to RAC, please help me to understand.
    Thanks
    PGR

  • Standby DB running on different hardware if production on Exadata v2

    We were looking to buy an Exadata V2 Database Machine, but 2 things are stopping us.
    As per Oracle:
    1) It is impossible to connect existing Fiber storage to Exadata V2 to offload archived data from primary storage.
    2) A standby database can't be built on different hardware (same OS and same DB version) - only on another Exadata V2 Database Machine.
    We could probably survive with limitation #1, but buying a second Exadata V2 Database Machine is too much, especially for the DR side.
    Does anyone have experience with these problems, or know of some doc that answers these 2 questions?
    Thanks in advance.

    Yes, it is possible - as long as you are not using the Exadata Hybrid Columnar Compression feature.
    "With Oracle Database 11g Release 2, the Exadata Storage Servers in the Sun Oracle Database Machine also enable new hybrid columnar compression technology that provides up to a 10 times compression ratio, with corresponding improvements in query performance. And, for pure historical data, a new archival level of hybrid columnar compression can be used that provides up to 50 times compression ratios."
    When you enable this feature, you can't build a standby database on different hardware. It won't work.
    I am still researching what else could be a stopper - or, I should say, which other Exadata V2/11gR2 features I should avoid in order to have a standby database working on non-Exadata V2 hardware.

  • Oracle Client 32 bit installation on Exadata Machine

    Hi,
    We are starting our migration to exadata next month.
    One of the issues we have is regarding our Informatica ETL tool. Our application is licensed for 32-bit.
    The database repository of this tool is currently running on Linux Red Hat 5.5 64-bit with the 11.2.0.3 RDBMS version.
    We had to install the 32-bit Oracle client software in order to allow the application to connect to the database.
    Is it possible to install the 32-bit Oracle client on Exadata? If not, what would you suggest?
    Best Regards

    I can't speak for Informatica, but it should be able to connect to the database over the network, so the database server OS wouldn't matter in that case. That is, if Informatica runs on a different machine that runs 32-bit Linux, it can connect over the network to the Exadata database server node. If Informatica is required to run on the database server directly, you'd have to ask them how they can support 64-bit Linux (or you may have to modify or add to your license).

  • Exadata performance

    In our exachk results, there is one item for shared servers.
    Our current production environment has shared_servers set to 1 (shared_servers=1).
    Now I got those from exachk:
    Benefit / Impact:
    As an Oracle kernel design decision, shared servers are intended to perform quick transactions and therefore do not issue serial (non PQ) direct reads. Consequently, shared servers do not perform serial (non PQ) Exadata smart scans.
    The impact of verifying that shared servers are not doing serial full table scans is minimal. Modifying the shared server environment to avoid shared server serial full table scans varies by configuration and application behavior, so the impact cannot be estimated here.
    Risk:
    Shared servers doing serial full table scans in an Exadata environment lead to a performance impact due to the loss of Exadata smart scans.
    Action / Repair:
    To verify shared servers are not in use, execute the following SQL query as the "oracle" userid:
    SQL>  select NAME,value from v$parameter where name='shared_servers';
    The expected output is:
    NAME            VALUE
    shared_servers  0
    If the output is not "0", use the following command as the "oracle" userid with properly defined environment variables and check the output for "SHARED" configurations:
    $ORACLE_HOME/bin/lsnrctl service
    If shared servers are confirmed to be present, check for serial full table scans performed by them. If shared servers performing serial full table scans are found, the shared server environment and application behavior should be modified to favor the normal Oracle foreground processes so that serial direct reads and Exadata smart scans can be used.
    Oracle lsnrctl service on current production environments shows all 'Local Server'.
    How should I proceed here?
    Thanks again in advance.

    Thank you all for your help.
    Here is an output of lsnrctl service:
    $ORACLE_HOME/bin/lsnrctl service
    LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 14-JUL-2014 14:15:24
    Copyright (c) 1991, 2013, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
      Instance "+ASM2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:1420 refused:0 state:ready
             LOCAL SERVER
    Service "PREME" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREMEXDB" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "D000" established:0 refused:0 current:0 max:1022 state:ready
             DISPATCHER <machine: prodremedy, pid: 16823>
             (ADDRESS=(PROTOCOL=tcp)(HOST=prodremedy)(PORT=61323))
    Service "PREME_ALL_USERS" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_TXT_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CORP_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_DISCO_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_EAST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM_WR" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_RPT" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_WEST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    The command completed successfully
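Given that output, a quick cross-check on the database side (standard v$ views, nothing Exadata-specific) is to see whether any sessions actually come in through shared servers:

```sql
-- Count sessions by server type; SHARED (or NONE, i.e. an idle
-- shared-server session) would mean shared servers are in use.
SELECT server, COUNT(*) AS sessions
FROM   v$session
GROUP  BY server;

-- And the parameter itself, per the exachk text:
SELECT name, value FROM v$parameter WHERE name = 'shared_servers';
```

If every row shows DEDICATED, the D000 dispatcher you see is only serving the XDB service and the exachk finding is effectively a non-issue for your workload.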

  • How to do performance tuning in EXadata X4 environment?

    Hi, I am pretty new to Exadata X4. We had a database (mixed OLTP/load) created and data loaded.
    Now the application is testing against this database on Exadata.
    However, they claimed the test results were slower than the current production environment, and they sent out the explain plan, etc.
    I would like advice from the pros here on which Exadata-specific tuning techniques I can use to find out why this is happening.
    Thanks a bunch.
    DB version is 11.2.0.4

    Hi 9233598 -
    Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata specific features and best practice as applicable. Reference MOS note: Oracle Exadata Best Practices (Doc ID 757552.1) to help configuring Exadata according to the Oracle documented best practices.
    When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on the Exadata than the non-Exadata environment. You need to determine what specifically is running slower on the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architecture. How is the application connected to the database in the non-Exadata vs Exadata environment - what are the differences, if any, in the network architecture in between and the application layer?
    You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start... to identify what the difference is in the database execution between the environments. Make sure you have the execution plans of both environments to compare. I recommend using the Real Time SQL Monitor tool - access it through EM GC/CC from the performance page or using the dbms_sql_tune package. Execute the comparison SQL and use the RSM reports on both environments to help verify you have accurate statistics, where the bottlenecks are and help to understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload any specific statement is doing (OLTP vs Batch/DW) you may need to look into tuning to encourage Exadata smart scans and using parallelism to help.
    The SGA and PGA need to be sized appropriately... depending on your environment and workload, and how these were sized previously, your SGA may be sized too big. Often the SGA sizes do not usually need to be as big on Exadata - this is especially true on DW type workloads. DW workload should rarely need an SGA sized over 16GB. Alternatively, PGA sizes may need to be increased. But this all depends on evaluating your environment. Use the AWR to understand what's going on... however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not specific to Exadata and can be misleading as to the recommended size. Too large of SGA will discourage direct path reads and thus, smart scans - and depending on the statement and the data being returned it may be better to smart scan than a mix of data being returned from the buffer_cache and disk.
    You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata - but many indexes may no longer be needed and may need to be removed. For the most part you only need PK/FK indexes and true "OLTP" based indexes on Exadata. Others may be slowing you down, because they avoid taking advantage of the Exadata storage offloading features.
    You also may want to evaluate and determine whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes: 401749.1, 361323.1 and 1392497.1) and write-back caching (see MOS note: 1500257.1).
    I would also recommend installing the Exadata plugins into your EM CC/GC environment. These can help drill into the Exadata storage cells and see how things are performing at that layer. You can also look up and understand the cellcli interface to do this from command line - but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. You also probably want to at least enable and set an IORM objective - matching your workload - even with just one database on the Exadata.
    I don't know your current production environment infrastructure, but I will say that if things are configured correctly OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata performance for OLTP operations just by "out powering" it from a hardware perspective. For DW operations Exadata should outperform any "relatively" hardware comparable non-Exadata system. The Exadata storage offloading features should allow you to run these type of workloads faster - usually significantly so.
    Hope this helps.
    -Kasey
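As one concrete starting point for the Real-Time SQL Monitor suggestion above (the sql_id is a placeholder, and this assumes the Tuning Pack license that RTSM requires):

```sql
-- Text-format SQL Monitor report for one statement;
-- substitute the sql_id of the slow statement.
SELECT DBMS_SQLTUNE.report_sql_monitor(
         sql_id       => '&sql_id',   -- placeholder
         type         => 'TEXT',
         report_level => 'ALL')
FROM   dual;
```

The report shows, per plan step, where time was spent and whether cell offload (smart scan) actually kicked in, which is usually the first thing to check when comparing the two environments.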

  • Potential Versioning Differences Between Storage Server and Grid Infrastructure/DB

    Hello.
    While awaiting my MOS CSI access to be approved for Exadata HW issues so I can open an SR, I have a question.  We are about to apply the QFSDP for BP 22 on one of our full rack X3 Exadata servers.  Currently, the Grid Infrastructure (GI)/DB SW is at 11.2.0.3.0 and the storage SW is at 11.2.3.2.  Our last BP was 19.  BEFORE we apply BP 22, we want to upgrade the clusterware to 12c - but not the DB (that will come later).  This is a high-level question which I have discussed with several Oracle techs, but I wanted to see what the experience/opinion here is while I await access to my CSI.  Can you have your Grid Infrastructure (GI) running at a higher version than that of your storage server?  The tech I spoke to said that all GI/DB SW should be at a version that is NO GREATER than that of the storage server - meaning a 12c version of the GI would not be compatible with a lower version of the storage server.  Thank you in advance for any guidance.
    Regards,
    Matt

    Hi Matt,
    Seems you want to do a lot in one go here.
    Maybe you should look at Doc ID 888828.1, Doc ID 1537407.1 and Doc ID 1373255.1 for a start.
    For 12c (grid and rdbms) on Exadata it is recommended to upgrade your Exadata storage software to 12.1.1.1.0 first.
    My advice would be to patch up 11g to BP22 and when done start planning the 12c upgrade (keeping the 11g dbhome).
    Regards,
    Tycho

  • Certified or not: DB on Exadata, APP on Solaris on SPARC (64-bit)

    Please confirm whether the following split configuration of EBS 12.1.1 is supported.
    EBS DB Tier: Oracle Exadata with Linux and X86-64 Architecture
    APP Tier: Oracle Solaris on SPARC (64-bit)

    Thank you all.
    One more question:
    Note 986673.1 General Notes For E-Business Suite Release 12 states:
    11gR2 11.2.0.1 certification is the minimum requirement for an E-Business Suite version to run on Exadata V2.
    So only EBS database versions 11gR2 or higher are supported to run on Exadata?

  • Oracle Database migration to Exadata

    Dear Folks,
    I have a requirement to migrate our existing Oracle Database to Exadata Machine. Below is the source & destination details:
    Source:
    Oracle Database 11.1.0.6 version & Oracle DB 11.2.0.3
    Non-Exadata Server
    Linux Enivrionment
    DB Size: 12TB
    Destination:
    Oracle Exadata 12.1
    Oracle Database 12.1
    Linux Environment
    System downtime of 24-30 hours would be available.
    Kindly clarify below:
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    2. Any upgrade activity after migration?
    3. Which migration method is best suited in our case?
    4. Things to be noted before migration activity?
    Thanks for your valuable inputs.
    Regards
    Saurabh

    Saurabh,
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    This would help if you wanted to drop the database in place, as it would allow either a standby database to be used (which would reduce downtime) or a backup and recovery to move the database as-is onto the Exadata.  However, it does not give you the chance to put in some things that could help you on the Exadata, such as additional or adjusted partitioning, Advanced Compression and HCC compression.
    2. Any upgrade activity after migration?
    If you upgrade the current environment first, then there would not be additional work.  However, if you do not, then you will need to explore a few options, depending on your requirements and desires for your Exadata.
    3. Which migration method is best suited in our case?
    I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations, to explore your migration options and what would be best for your environment, as that can depend on a lot of variables that are hard to completely cover in a forum.  At a high level, when moving to Exadata I typically recommend setting up the database to utilize the features of the Exadata for best results.  The Exadata migrations I have done thus far have been done using Golden Gate, where we examine the partitioning of tables, partition the ones that make sense, and implement Advanced Compression and HCC compression where it makes sense.  This gives us an environment that fits with the Exadata, rather than dropping an existing database in place - though that works very well too.  Doing it with Golden Gate eliminates the migration issues arising from the database version difference, as well as other potential migration issues, as it offers the most flexibility.  There is a cost for Golden Gate to be aware of, so it may not work for you; but it will keep your downtime way down, and give you the opportunity to ensure a smooth upgrade/implementation by allowing some real-workload testing to be done.
    4. Things to be noted before migration activity?
    Again, I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations, to explore your migration options and what would be best for your environment.  In short, keep in mind that Exadata is a platform with advantages no other platform can offer; while a drop-in-place migration does work and does bring improvements, it is nothing compared to the improvements that are possible if you plan well and implement with the features Exadata has to offer.  The use of Real Application Testing Database Replay and Flashback Database will allow you to implement the features, test them with a real workload, and tune the system well before production day - letting you be nearly 100% confident that you have a well-running, tuned system on the Exadata before going live.  The use of Golden Gate allows you to keep an in-sync database while running many replays of workloads on the Exadata without losing the sync, giving you the time and ability to test different workload, partitioning and compression options.  Very nice flexibility.
    Hope this helps...
    Mike Messina
