Exadata - Storage Cells

We had two flash disks go offline on a storage server!
We restarted the storage cell on Friday. The physical disks never came back online. They are still showing as syncing or offline. ASM has dropped the disks.
Two of the DBs are not even coming up. Please advise.

Vishal Gupta wrote:
David,
David,
You may want to apply the fix mentioned in 1351559.1 until your Exadata storage server software is upgraded from 11.2.2.2.0 to 11.2.2.3.5. It will avoid the problem of two flash cards on the same PCIe riser (i.e. either 1 and 4, or 2 and 5) going offline at the same time on cell nodes.
Regards,
Vishal Gupta

Vishal,
I took the contents of the note to mean that we still need the fix, even with 11.2.2.3.5. From the note:
>
Apply this solution to systems running Exadata Storage Server software 11.2.2.3.5 and earlier.
Systems patched to 11.2.2.3.5 that incorporated the 12832832_12886507_12413272.tbz workaround specified in Note 1334254.1 already have this in place. To verify, run the "service fixpciidt_12886507 status" command as described below.
This solution will be incorporated into the next Exadata Storage Server software release.

Similar Messages

  • Do we need to backup OS (file systems) on Exadata storage cells?

    We got confusing messages about whether we need to or not. I'd like to hear your opinions.
    Thanks!

    Hi,
    The answer is no.
    There is no need to backup the OS of the storage cell.
    Worst case, a complete storage cell needs to be replaced. A field engineer will open your broken storage cell, take out the onboard USB stick, and put it inside the new storage cell.
    The system will boot from this USB and you can choose to implement the 'old' configuration on the new cell.
    Regards,
    Tycho

  • Exadata upgrade with Exadata Storage Expansion

    Hi everybody,
    I was reading about upgrading Exadata, but I can't find any information about upgrading Exadata with an Exadata Storage Expansion. In this case, does the Exadata Storage Expansion require an upgrade of its own, or is only the Exadata software upgraded?
    Any information will be appreciated.
    Regards.

    Hi,
    >> but I can't find any information about upgrading Exadata with an Exadata Storage Expansion. In this case, does the Exadata Storage Expansion require an upgrade of its own, or is only the Exadata software upgraded?
    You are combining two things: 1) upgrading, and 2) adding more space (i.e. adding new cells) to the Exadata.
    For upgrade, you can refer "Information Center: Upgrading Oracle Exadata Database Machine" (Doc ID 1364356.2)
    You can only add a certain number of storage cells to the existing machine. For more information on the available storage and how much more you can add, refer to the following notes:
    http://www.oracle.com/technetwork/database/exadata/exadata-technical-whitepaper-134575.pdf
    How to Add Exadata Storage Servers Using 3 TB Disks to an Existing Database Machine (Doc ID 1476336.1)
    HTH,
    Pradeep

  • Storage cells: System Events and System Defined Thresholds

    Hi, there,
    I have an Exadata V1 (yes, I do) and I recently saw this flag up as an alert on one of my storage cells:
    2 2013-02-19T17:15:50-05:00 critical "39:Temperature:Threshold based -- Upper Non-critical going high State: asserted"
    3 2013-02-19T17:15:50-05:00 critical "255:System Event:System Event -- PEF action : 7 State: asserted"
    4 2013-02-19T17:15:51-05:00 critical "39:Temperature:Threshold based -- Upper Non-critical going high State: deasserted"
    Basically, for one second, it appears that the temperature threshold was exceeded. This doesn't surprise me, as the V1 machine is: a) very tightly packed into a small box, b) old, and c) HP.
    Because of a bug in our version of the ESS, we just use the system-defined thresholds on the storage cells. I have two questions:
    1) How can I find out the system-defined thresholds - or at least the thresholds that the storage cell is using?
    2) How can I find out what the 'System Event - PEF action: 7' is?
    Mark

    Hmmm... I had an issue with an Exadata X2 some time ago when a user-defined temperature threshold was not giving alerts (later documented in MOS note 1380758.1/bug 13387575 and fixed in 11.2.3.1.0, by the way). At the time I was told that there's no way to see the built-in thresholds in cellcli.
    As far as the specific error ID goes I don't know what it means, but I'd suggest checking HP iLO documentation in addition to regular Oracle channels.
    Marc

  • Other DB hardware + Exadata Storage?

    Let's say I have a number of database machines on, say, Dell hardware using an EMC SAN. Can I:
    1) Swap out the EMC SAN for an Exadata Storage Server?
    2) Run non-11gR2 (10.2, 11.1) databases against an Exadata Storage Server, knowing that things like Smart Cache simply won't work?
    3) Store data from non-Oracle applications on an Exadata Storage Server? Lets say I have a bunch of MySQL databases on other servers, can I point them at Exadata for storage?
    Thanks,
    Tyler Muth
    http://tylermuth.wordpress.com
    [Applied Oracle Security: Developing Secure Database and Middleware Environments|http://sn.im/aos.book]

    Tyler wrote:
    Let's say I have a number of database machines on, say, Dell hardware using an EMC SAN. Can I:
    1) Swap out the EMC SAN for an Exadata Storage Server?
    2) Run non-11gR2 (10.2, 11.1) databases against an Exadata Storage Server, knowing that things like Smart Cache simply won't work?
    3) Store data from non-Oracle applications on an Exadata Storage Server? Let's say I have a bunch of MySQL databases on other servers, can I point them at Exadata for storage?
    1) You can connect Exadata storage servers to existing database servers. InfiniBand infrastructure is required and, in custom deployments (non-DB Machine), would have to be provided by the customer.
    2) Exadata V2 requires DB 11gR2.
    3) No. Exadata storage is available to databases via ASM diskgroups.
    Dan

  • How to do performance tuning in EXadata X4 environment?

    Hi,  I am pretty new to exadata X4 and we had a database (oltp /load mixed) created and data loaded. 
    Now the application is testing against this database on exadata.
    However, they claimed the test results were slower than the current production environment, and they sent out the explain plans, etc.
    I would like advice from the pros here: what specific Exadata tuning techniques can I use to find out why this is happening?
    Thanks a bunch.
    db version is 11.2.0.4

    Hi 9233598 -
    Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata-specific features and best practices as applicable. Reference MOS note Oracle Exadata Best Practices (Doc ID 757552.1) to help configure Exadata according to the Oracle-documented best practices.
    When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on the Exadata than on the non-Exadata environment. You need to determine what specifically is running slower on the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architecture. How is the application connected to the database in the non-Exadata vs Exadata environment - what are the differences, if any, in the network architecture in between and in the application layer?
    You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start to identify what the difference is in database execution between the environments. Make sure you have the execution plans from both environments to compare. I recommend using the Real-Time SQL Monitor tool - access it through EM GC/CC from the performance page, or using the dbms_sqltune package. Execute the comparison SQL and use the RSM reports on both environments to help verify you have accurate statistics, see where the bottlenecks are, and understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload a specific statement is doing (OLTP vs Batch/DW) you may need to look into tuning to encourage Exadata smart scans and using parallelism to help.
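    A hedged sketch of pulling a Real-Time SQL Monitor report with dbms_sqltune - the sql_id is a placeholder; run it on both environments for the same statement and compare the two reports:

    ```sql
    -- Placeholder sql_id: substitute the sql_id of the statement under test
    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
             sql_id       => 'your_sql_id_here',
             type         => 'TEXT',
             report_level => 'ALL')
    FROM   dual;
    ```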
    The SGA and PGA need to be sized appropriately... depending on your environment and workload, and how these were sized previously, your SGA may be sized too big. Often SGA sizes do not need to be as big on Exadata - this is especially true for DW-type workloads; a DW workload should rarely need an SGA sized over 16GB. Alternatively, PGA sizes may need to be increased. But this all depends on evaluating your environment. Use the AWR to understand what's going on... however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not specific to Exadata and can be misleading as to the recommended size. Too large an SGA will discourage direct path reads and thus smart scans - and depending on the statement and the data being returned, it may be better to smart scan than to serve a mix of data from the buffer cache and disk.
    You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata - but many indexes may no longer be needed and may need to be removed. For the most part you only need PK/FK indexes and true "OLTP" indexes on Exadata. Others may be slowing you down, because they prevent taking advantage of the Exadata storage offloading features.
    You may also want to evaluate and determine whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes 401749.1, 361323.1 and 1392497.1) and write-back caching (see MOS note 1500257.1).
    I would also recommend installing the Exadata plugins into your EM CC/GC environment. These can help drill into the Exadata storage cells and see how things are performing at that layer. You can also look up and understand the cellcli interface to do this from command line - but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. You also probably want to at least enable and set an IORM objective - matching your workload - even with just one database on the Exadata.
    I don't know your current production environment infrastructure, but I will say that if things are configured correctly OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata performance for OLTP operations just by "out powering" it from a hardware perspective. For DW operations Exadata should outperform any "relatively" hardware comparable non-Exadata system. The Exadata storage offloading features should allow you to run these type of workloads faster - usually significantly so.
    Hope this helps.
    -Kasey

  • Simulating Oracle I/O for storage layer testing

    There's an open-source utility called Flexible I/O (fio) that is used by kernel and driver developers to test I/O.
    We would like to use fio to create a very typical Oracle I/O load (async I/O, 8 KB block reads, etc.). The idea is that we can use this to test specific storage driver versions, driver parameters and kernel configuration options to determine stability, robustness and, of course, performance - without having to install and set up the Oracle software layer. We can also test new driver releases this way, without the effort of duplicating an Oracle instance, database and workloads on that database.
    It will also enable us to provide this as a test harness to storage vendors and driver developers for simulating a typical Oracle I/O load - and should trigger the same problems and issues that Oracle would if it was doing the I/O.
    Feasible? Or are there potential issues with this approach to be aware of?
    Any idea what Oracle uses for testing I/O - for example with Exadata Storage Cells and OFED drivers? What are other shops using to test storage systems, drivers and so on? (There can be a number of moving parts on the storage layer, and a test harness for this makes sense.)
    Any comments as to what fio parameters should be used to represent typical Oracle I/O?
    Will appreciate input on this. Thanks.

    Our intention is not really benchmarking - it is testing technical aspects of the I/O fabric layer. For example, fio can use shared memory, private memory, huge pages and so on as the memory buffer for its I/O. I've seen kernel panics with huge pages and heavy I/O loads - so this can be tested for a specific kernel and driver version combo. What about scatter/gather (sg) reads - does Oracle use this method? The idea is to simulate the exact type of I/O that Oracle does, push it, and determine how well the I/O subsystem holds up.
    So we're not really looking at simulating database I/O; we're looking at simulating what happens lower down when the database makes an I/O call. Do we need to set the tablesize for sg reads? Do huge pages impact stability? But this will only be useful data if the actual I/O calls used match very closely those made by Oracle.
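    On the question of which fio parameters approximate a typical Oracle I/O load, a sketch of a job file is below. This is a starting point only - the engine, block sizes and queue depths are assumptions to validate against your own system, and the device path is a placeholder:

    ```ini
    ; Rough approximation of an Oracle-like I/O mix, not a definitive profile
    [global]
    ioengine=libaio       ; async I/O, as Oracle uses on Linux
    direct=1              ; O_DIRECT, like Oracle direct path I/O
    runtime=60
    time_based=1
    group_reporting=1

    [random-8k-reads]     ; single-block read style (db file sequential read)
    rw=randread
    bs=8k
    iodepth=32
    filename=/dev/sdb     ; placeholder test device - change before running

    [sequential-1m-reads] ; multiblock / direct path read style
    rw=read
    bs=1m
    iodepth=4
    filename=/dev/sdb
    ```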

  • Enable DOP on ExaData

    I need the steps to enable a degree of parallelism (DOP) on Exadata; I searched Oracle Support but did not find a document.
    The current database is used for OLTP, and there are two tables holding a lot of data, with many reports generated by queries that depend on these tables.
    I need best practices for parallelism: is it best to use Auto or Limited for these two tables, and what are the steps to do this?

    I've spent a lot of time working with and testing parallelism on Exadata. The best Oracle documentation on parallelism is the VLDB and partitioning guide Marc already mentioned. But a few other things to help guide you with using parallelism specifically with Exadata:
    Parallel execution, at the database level, works the same on or off Exadata. However, even without a DOP on the database, Exadata inherently uses parallelism, when properly configured, through the way ASM stripes data across all grid and cell disks on each of the Exadata storage cells allowing the cell CPUs and disks to all work together – splitting the load across DB and Cell CPUs. This allows lower degree of parallelism on Exadata to achieve optimal performance.
    I would recommend being very cautious with setting Auto DOP (parallel_degree_policy) to Auto or Limited. I've had very mixed results with it in testing and prefer to leave it at the default manual setting and enable DOPs manually where needed. I've only tested Auto DOP with 11gR2 though, not yet with 12c, so it may work better on 12c. You can enable parallel statement queuing, via the _parallel_statement_queuing hidden parameter, even without setting Auto DOP; and I do like this feature. If you're going to use this, study up on it, test it, and learn how to control it with the parallel_servers_target and parallel_max_servers parameters.
    Parallelism can be a great performance boost - especially on Exadata - because it can help drive smart scans. But it can also overwhelm any system, including Exadata, if left unchecked. I recommend the following to enable and control it:
    For larger tables you feel would benefit from parallelism on all queries, set the DOP on the table itself, i.e. ALTER TABLE [TABLE NAME] PARALLEL [DOP];
    Parallelism on small tables will hurt performance so only enable on larger tables.
    Test to determine the best parallel degree - you'll eventually get diminishing returns as you go up in DOP
    If you don't want all queries against a table to be parallel, then use the parallel hint in the queries you do want. Be careful when using the hint... specify the table name(s) in the hint if the query has joins, to make sure you only parallelize the larger tables that need it and not the smaller tables.
    Control parallelism using DBRM resource plans by setting parallel degree limits and max % targets - this is very important to not let parallel queries overwhelm a system.
    Set the parallel_max_servers and parallel_min_servers appropriately.
    Use MOS note 1274318.1 for Exadata best practices on these parameters
    Set parallel_min_servers to a high daily average of your concurrent parallel processes, as this will reduce overhead in constant spawning of new parallel processes.
    Test out parallel statement queuing; see the point on this above. This can deliver more consistent parallel performance: it helps avoid kicking off too many parallel processes, and by queuing for a short time a statement can often perform much better than being serialized.
    Test, test, test! Monitor using Grid Control and SQL Monitor - you want to find the balance between keeping things too throttled (not realizing the performance benefit) and allowing too many parallel processes (overwhelming system resources and degrading performance).
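    As a sketch of the table-level and hint-based options above - the table names and the DOP of 8 are placeholders, not recommendations:

    ```sql
    -- Fixed DOP on a large table: all queries against it may run parallel
    ALTER TABLE big_fact PARALLEL 8;

    -- Or keep tables serial and parallelize only selected queries,
    -- naming the large table's alias so joined small tables stay serial
    SELECT /*+ PARALLEL(f 8) */ f.key_col, COUNT(*)
    FROM   big_fact f
           JOIN small_dim d ON d.key_col = f.key_col
    GROUP  BY f.key_col;
    ```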
    HTH.
    -Kasey

  • 11.2 Flash Cache feature

    I have an 11.2 RAC database running.
    I tried to enable Flash Cache by setting:
    db_flash_cache_file
    db_flash_cache_size
    when I tried to start the database i get these errors
    ORA-00439: feature not enabled: Server Flash Cache
    ORA-01078: failure in processing system parameters
    Does anyone know how to enable the feature?
    Daniel

    I wonder if Oracle is using ordinary SSDs in the Exadata Storage Cell to enable the Smart Flash Cache feature. Has anybody seen an Exadata V2 and can report whether there are SATA/SAS SSDs with an ordinary hard disk controller built into the storage cell, or whether they do it some other way? All I have read is that these flash devices are not filled by an LRU algorithm. I am really interested in technical details.
    http://www.oracle.com/technology/products/bi/db/exadata/pdf/exadata-technical-whitepaper.pdf
    Each Exadata cell comes with 384 GB of Exadata Smart Flash Cache. This solid state storage delivers dramatic performance advantages with Exadata storage. It provides a ten-fold improvement in response time for reads over regular disk; a hundred-fold improvement in IOPS for reads over regular disk; and is a less expensive higher capacity alternative to memory. Overall it delivers a ten-fold increase performing a blended average of read and write operations.
    The Exadata Smart Flash Cache manages active data from regular disks in the Exadata cell – but it is not managed in a simple Least Recently Used (LRU) fashion. The Exadata Storage Server Software, in cooperation with the Oracle Database, keeps track of data access patterns and knows what and how to cache data and avoid polluting the cache. This functionality is all managed automatically and does not require manual tuning. If there are specific tables or indexes that are known to be key to the performance of a database application, they can optionally be identified and pinned in the cache.
    There look to be two important advantages over regular SSD attached to an Oracle database here: the caching is done in cooperation with the Oracle Database (not just LRU, as you point out yourself), and the storage cell's Smart Scan technology will only project the required rows and columns to the database server, so that higher effective bandwidths are achieved.
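    For the optional pinning mentioned above, the segment-level storage clause looks roughly like this - the table name is a placeholder:

    ```sql
    -- Request that this known-hot table be kept in Exadata Smart Flash Cache
    ALTER TABLE hot_lookup STORAGE (CELL_FLASH_CACHE KEEP);

    -- Return it to the default automatic caching behaviour
    ALTER TABLE hot_lookup STORAGE (CELL_FLASH_CACHE DEFAULT);
    ```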

  • Cannot enable IORM in OEM; keep getting "Resource cannot allocated for a database"

    Oracle Exadata X4, trying to allocate IORM through OEM 12c.
    After choosing the database, disk I/O allocation %, and I/O allocation limit %, when I click update it keeps giving me this error: "Resource cannot allocated for a database".
    I want to know whether IORM is already set up or not; if not, how do I set it up?
    Thanks in advance.

    Hi user569151 -
    I have configured IORM from the command line, cellcli, a number of times, but haven't used OEM to set it up. IORM plans are set up on the Exadata storage cells and, unlike its counterpart DBRM, IORM manages workloads across databases, not within a single database.
    To see the current IORM plan and see if it is active and has an objective - which are required for IORM to start managing IO resources and before any inter-database or category plans can be created - you can use the following cellcli command:
          cellcli -e list iormplan detail
    This can be executed from the linux command line on each of the storage cells... or even better, if you have dcli setup you can execute it for all your storage cells at once from the linux command line of one of your compute servers, e.g.:
         dcli -g cell_group cellcli -e list iormplan detail
    You can then determine if you need to activate it, set an objective and then can look into creating your inter-database and/or category IORM plans. Look at the following MOS notes for some information on creating IORM plans:
    Configuring Exadata I/O Resource Manager for Common Scenarios [ID 1363188.1]
    Configuring Resource Manager for Mixed Workloads in a Database [ID 1358709.1]
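    If the plan turns out to be inactive or without an objective, a minimal cellcli sketch of enabling it follows - 'auto' is one possible objective; check the CellCLI documentation for your ESS version for the supported values:

    ```
    CellCLI> ALTER IORMPLAN active
    CellCLI> ALTER IORMPLAN objective = 'auto'
    CellCLI> LIST IORMPLAN DETAIL
    ```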
    Hope that helps get you started. Good luck!
    -Kasey

  • Join 25 M with 200 M - Query Issue

    I have 2 tables, CUSTOMER and CUSTOMER_TRANSACTION. CUSTOMER has 25 M records and CUSTOMER_TRANSACTION has 200 M records. I need to get the counts from CUSTOMER_TRANSACTION broken down by transaction type. Below are my tables.
    CUSTOMER
    ========
    CUST_ID   LOC_ID   (OTHER COLUMNS)
    12345     001
    23456     002
    67890     910
    54298     789
    16754     909

    CUSTOMER TRANSACTION
    ====================
    CUST_ID   LOC_ID   TRANSACTION_TYPE   DATE        (OTHER COLUMNS)
    12345     001      CREDIT             01-jan-01
    12345     001      DEBIT              02-jan-02
    12345     001      CHEQUE             03-jan-03
    12345     001      CASH               04-jan-04
    12345     001      CASH               05-jan-06
    12345     001      CASH               11-feb-11
    54298     789      CREDIT             01-jan-01
    54298     789      CREDIT             02-jan-01

    I need to have the output below:

    CUST_ID   LOC_ID   CREDIT   DEBIT   CASH   TOTAL
    12345     001      1        1       3      5
    23456     002      0        0       0      0
    67890     910      0        0       0      0
    54298     789      2        0       0      2
    16754     909      0        0       0      0
    54298     789      0        0       0      0
    SELECT C.CUST_ID, C.LOC_ID, CT.TOTAL, CT.CREDIT, CT.DEBIT, CT.CASH
    FROM CUSTOMER C
    LEFT OUTER JOIN (
        SELECT CUST_ID,
               LOC_ID,
               COUNT(*) TOTAL,
               SUM(CASE WHEN TRANSACTION_TYPE = 'CREDIT' THEN 1 ELSE 0 END) CREDIT,
               SUM(CASE WHEN TRANSACTION_TYPE = 'DEBIT'  THEN 1 ELSE 0 END) DEBIT,
               SUM(CASE WHEN TRANSACTION_TYPE = 'CASH'   THEN 1 ELSE 0 END) CASH
        FROM   CUSTOMER_TRANSACTION
        GROUP BY CUST_ID, LOC_ID
    ) CT
    ON C.CUST_ID = CT.CUST_ID AND C.LOC_ID = CT.LOC_ID

    Now my CUSTOMER table itself is joined with five other tables, with left outer joins, to get other information. I am joining the counts as above. But I am facing a severe performance issue, as it does a GROUP BY on 200 M records in CUSTOMER_TRANSACTION and then joins this result set with the CUSTOMER table using a left outer join.
    Can some one help me on how to write efficient query?

    ace_friends22 wrote:
    The problem with this query is that it takes a lot of time to return the result set.
    Of course. You cannot expect it to be fast - not with the amount of I/O that needs to be done.
    The simple and harsh fact is that the more data there is for the SQL to crunch, the slower it will be, as I/O is the slowest and most expensive operation on a database.
    I just wanted to know if there is a better way of writing this query?
    There are two basic ways to address this: do less I/O, and do smarter I/O.
    Doing less I/O means using optimal I/O paths to get to the relevant data. Like indexes. Or partitions. Or both. And ensuring the CBO comes up with a sane execution plan and not one based on none or skewed statistics. Etc.
    Doing smarter I/O means trying to eliminate some of the I/O latency using parallel processing. Oracle supports both parallel DML and DDL. And (for example), instead of having a single process crunch 20 million rows, you could use 20 parallel processes each doing around a million rows - assuming of course you have the CPU capacity and 20 I/O-intensive processes will not overload the I/O subsystem.
    More than that... get something like Oracle's Exadata storage cells that provide a 40Gb I/O fabric layer and a very intelligent storage server...
    And keep a firm grip on the realities of computing with regards your performance expectations.

  • How to pull out a storage usage report for exadata platform

    Hi, does anyone know how to get a report of storage usage on an Exadata using ZFS... without the use of OEM?

    Hi
    ZFS has its own OEM console to monitor the ZFS storage,
    whereas the Exadata machine is monitored from OEM Cloud Control 12c.
    You can see the Exadata storage report from OEM Cloud Control 12c.
    -Thanks
    -Arjun B

  • Is there a way to create different diskgroups in exadata?

    We have a need to have some other diskgroups other than +DATA and +RECO.
    How do we do that? Exadata version is x3.
    Thanks

    user569151 -
    As 1188454 states, this can be done. I would first ask why you need to create additional disk groups beyond the data, reco and dbfs disk groups created by default. I often see Exadata users question the default disk groups and want to add more, or change the disk groups to follow what they've previously done on non-Exadata RAC/ASM environments. However, usually the data and reco disk groups are sufficient and allow for the best flexibility, growth and performance. One reason to create multiple disk groups could be wanting two different redundancy options for data disk groups - for example, a prod database on high redundancy and a test database on normal redundancy; but there aren't many reasons to change it.
    To add disk groups you will also need to re-organize and add new grid disks. You should keep the grid disk prefix and corresponding disk group names equivalent. Keep in mind that all of the Exadata storage is allocated to the existing grid disks and disk groups - and this is needed to keep the necessary balanced configuration and maximize performance. So adding and resizing grid disks and disk groups is not a trivial task if you already have running DB environments on the Exadata, especially if you do not have sufficient free space in data and reco to allow dropping all the grid disks in a failgroup - because that would require removing data before doing the addition and resize of the grid disks. I've also encountered problems with resizing grid disks that end up forcing you to move data off the disks - even if you think you have enough space to allow dropping an entire fail group.
    Be sure to accurately estimate the size of the disk groups - factoring in the redundancy, fail groups and reserving space to handle cell failure - as well as the anticipated growth of data on the disk groups. Because if you run out of space in a disk group you will need to either go through the process again of resizing all the grid disks and disk groups accordingly - or purchase an Exadata storage expansion or additional Exadata racks. This is one of the reasons why it is often best to stick with just the existing Data and Reco.
    To add new grid disks and disk groups and resize the others, become very familiar with, and follow, the steps given in the "Resizing Storage Griddisks" section of Ch. 7 of the Exadata Database Machine Owner's Guide, as well as the information and examples in MOS note "Resizing Grid Disks in Exadata: Examples (Doc ID 1467056.1)". I also often refer to MOS note "How to Add Exadata Storage Servers Using 3 TB Disks to an Existing Database Machine (Doc ID 1476336.1)" when doing grid disk addition or resize operations. The use case may not match, but many steps given in this note are helpful, as it discusses adding new grid disks and even creating a new disk group for occasions when you have cell disks of different sizes.
    Also, be sure you stay true to the Exadata best practices for the storage as documented in "Oracle Exadata Best Practices (Doc ID 757552.1)". For example, the total number of griddisks per storage server for a given prefix name (e.g: DATA) should match across all storage servers where the given prefix name exists. Also, to maximize performance you should have each grid disk prefix, and corresponding disk group, spread evenly across each storage cell. You'll also want to maintain the fail group integrity, separating fail groups by storage cell allowing the loss of cell without losing data.
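    Before resizing anything, it may help to capture the current grid disk layout from one compute node - a sketch assuming the usual dcli cell_group file is in place:

    ```
    dcli -g cell_group cellcli -e "list griddisk attributes name,size,asmDiskGroupName"
    ```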
    Hope that helps. Good luck!
    - Kasey

  • How to verify that a host is having/running Exadata?

    Hi,
    How can I verify that a machine(unix/linux) has Exadata?
    Please help.
    Thanks

    It's the storage that's important. You can run a database on an Exadata DB servers that doesn't access Exadata storage, in which case Smart Scans etc... will be disabled. So you may want to check your asm diskgroups. They have an attribute that tells whether they reside on Exadata storage or not. You can use something like this query to show you that information.
    <pre>
    column exadata_storage for a20
    with b as (select group_number, value from v$asm_attribute where name = 'cell.smart_scan_capable')
    select a.name diskgroup, state, b.value Exadata_storage
    from v$asm_diskgroup a, b
    where a.group_number = b.group_number(+)
    and a.name like nvl('&diskgroup',a.name)
    order by 1
    SYS@SANDBOX> @exadata_diskgroups.sql
    Enter value for diskgroup:
    DISKGROUP STATE EXADATA_STORAGE
    DATA CONNECTED TRUE
    RECO CONNECTED TRUE
    SCRATCH MOUNTED TRUE
    SMITHERS DISMOUNTED
    STAGE MOUNTED TRUE
    SWING MOUNTED TRUE
    SYSTEM MOUNTED TRUE
    7 rows selected.
    </pre>

  • Where exadata stands per regards to availability, security, and easy to use

    Dears,
    I would like to have details on how Exadata ensures availability, security and storage performance. In other words, I know that Exadata represents both a database (11gR2) and a hosting server for that database. How are this hosting service and this configuration different from other existing configurations?
    Thanks in advance
    Mohamed Houri

    >
    But when the system comes pre-configured, ready to be turned on from day one, already configured and secured by the Oracle team, isn't this a problem in itself? The Oracle team knows more about the configuration than the customer who has been delivered the Exadata machine does?
    This is the reason there is documentation provided as part of the hand-off. Any new system is, by definition, new, and you'll know less about it until you start using it and get accustomed to its configuration and setup. This is the reason you shouldn't expect to put your new system into production immediately, but rather after you've tested it and become familiar with it.
    Another question also came to my mind: when there is a problem within this machine, can the local DBA/storage/OS team solve the problem without referring to Oracle? Will this not break a support SLA?
    I'm not sure how you are using the term SLA, but fixing your system yourself doesn't violate any support agreement. Your SLA is your service agreement with your customer, and you need to do what you need in order to meet that.
    That said, there are some things that you aren't allowed to do (i.e. installing software or making configuration changes on storage cells). But generally speaking, if your system is running normally and has some issue, the issue is more likely related to RAC or RDBMS, and changing or fixing things there is within your control to remedy.
